name | title | abstract | fulltext | keywords
---|---|---|---|---
350320 | A Comparison of Three Rounding Algorithms for IEEE Floating-Point Multiplication. | Abstract: A new IEEE compliant floating-point rounding algorithm for computing the rounded product from a carry-save representation of the product is presented. The new rounding algorithm is compared with the rounding algorithms of Yu and Zyner [26] and of Quach et al. [17]. For each rounding algorithm, a logical description and a block diagram is given, the correctness is proven, and the latency is analyzed. We conclude that the new rounding algorithm is the fastest rounding algorithm, provided that an injection (which depends only on the rounding mode and the sign) can be added in during the reduction of the partial products into a carry-save encoded digit string. In double precision format, the latency of the new rounding algorithm is 12 logic levels compared to 14 logic levels in the algorithm of Quach et al. and 16 logic levels in the algorithm of Yu and Zyner. | Introduction
Every modern microprocessor includes a floating-point multiplier that complies with the IEEE 754 Standard [13]. The latency of the FP multiplier is critical to the floating-point performance
since a large portion of the FP instructions consists of FP multiplications. For example, Oberman
reports that FP multiplications account for 37% of the FP instructions in benchmark
applications [17].
A lot of research has been devoted to optimizing the latency of adding the partial products to
produce the product, e.g. [1, 2, 6, 9, 15, 16, 18, 19, 20, 21, 26, 28, 29, 30]. More recently, work on
rounding the product according to the IEEE 754 Standard has been published [4, 7, 10, 22, 23, 24,
25, 31, 33, 34]. Assuming that the multiplier outputs a carry-save encoded digit string representing
the exact product, the following natural question arises: What is the fastest method to compute
the rounded product given the exact product represented by a carry-save encoded digit string?
We consider and compare three rounding algorithms: (a) the algorithm of Quach et al. [23],
which we denote by the QTF algorithm; (b) the algorithm of Yu and Zyner [31], which we denote
by the YZ algorithm; and (c) a new algorithm that is based on injection based rounding [10],
which we denote by the ES algorithm. We provide block diagrams of these rounding algorithms,
optimized for speed. We measure the latency of the algorithms in logic levels to enable technology
independent comparisons. The main building blocks of these algorithms are similar, and consist of
a compound adder and the computation of sticky and carry bits. Thus, the costs of the three algorithms are similar, and the interesting question is finding the fastest algorithm.
We focus on double precision multiplication, in which each significand is represented by 53 bits. The algorithms assume that the significands are normalized, namely, in the range [1, 2), and therefore, their product is in the range [1, 4). We do not consider the cases that deal with denormal or special values, since support for denormal values can be obtained by using an extended exponent range [14, 25, 32] and the computation on special values can be done in parallel [12]. The three algorithms share the following techniques:
1. The product, represented by a carry-save encoded digit string of 106 digits in the case of double precision, is partitioned into a lower part and an upper part. The upper part is added by a compound adder that computes the binary representations of sum and sum + ulp, where ulp denotes a unit in the last position, and sum denotes the sum of the upper part. A carry-bit, a round-bit, and a sticky-bit are computed from the lower part (a sketch of this low-part computation is given after this list).
2. The rounding decision is computed in two paths: the non-overflow path works under the assumption that the exact product is in the range [1, 2), and the overflow path works under the assumption that the product is in the range [2, 4). Although the sum of the upper part, denoted by sum, does not equal the exact product, the most significant bit of sum controls the selection between these two paths.
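As an illustration (our own sketch, not the hardware in the paper), the low-part computation shared by the three algorithms can be modeled on plain integers. The sketch below assumes the carry-save product is given as two Python integers whose sum equals the exact product scaled by 2^104, so that position i of the paper corresponds to bit 104 - i.

MASK52 = (1 << 52) - 1            # positions [53:104] occupy the 52 least significant bits

def low_part_crs(sum_: int, carry: int):
    # Assumption: sum_ + carry == exact_product * 2**104 (injection handling ignored here).
    low_sum = (sum_ & MASK52) + (carry & MASK52)   # add the low digit columns
    c52 = (low_sum >> 52) & 1                      # carry into position 52
    r = (low_sum >> 51) & 1                        # round bit (position 53)
    s = int((low_sum & ((1 << 51) - 1)) != 0)      # sticky: OR of positions [54:104]
    return c52, r, s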
The main differences between the three rounding algorithms are outlined as follows.
1. The rounding decision. The QTF and ES algorithms simplify the rounding decision by an
early addition of a value (this value is called the prediction in the QTF algorithm and the
injection in the ES algorithm). In the QTF algorithm, the prediction depends on the rounding
mode and on the carry-save digit positioned 53 digits to the right of the radix point. In the
ES algorithm, the injection depends only on the rounding mode and we assume that it is
added in with the partial products, and thus the product already includes the injection. The
rounding decision in the YZ algorithm is based on customary rounding tables.
2. The position in which the carry-save encoded product is partitioned into a lower part and an upper part differs in the three algorithms. In the YZ algorithm the lower and upper parts are separated by a "buffer" of three carry-save digits in positions [51 : 53], where the position of a digit denotes how many digits it is to the right of the radix point. In the other two algorithms the upper part consists of positions [-1 : 52] and the lower part consists of positions [53 : 104].
The latencies of the proposed designs that implement these algorithms in terms of logic levels are
the following: The latency of the ES algorithm is 12 logic levels, the latency of the QTF algorithm
is 14 logic levels, and the latency of the YZ algorithm is 16 logic levels. Note that we modified and adapted the QTF and YZ algorithms for minimum latency.
Supporting all four rounding modes of the IEEE 754 Standard is an error-prone task. We
therefore provide correctness proofs of all three algorithms which formalize and clarify the tricky
aspects. From this point of view, the YZ algorithm is easiest to prove and the QTF algorithm is
the most intricate (especially the rounding decision logic).
The paper is organized as follows. In Section 2, preliminary issues are described, such as:
notation, conventions we use regarding IEEE rounding, and the general setting. In Section 3, a
straightforward rounding algorithm is reviewed. This algorithm is described to provide an outline of the task of rounding after the exact product is computed. It does not attempt to parallelize the
task of rounding, and therefore, has a long latency. In Sections 4-6, each rounding algorithm is
described, proven, and analyzed. In Section 7, we discuss how the latencies of the algorithms
increase as the precision is increased. In Section 8, a summary and conclusion is given. Due to
space limitations some of the sections are omitted and can be found in the full version [11].
2 Preliminaries
Notation. Let x_i denote the i-th bit of a binary string x. By x[z1 : z2] we denote the binary string x_{z1} x_{z1+1} ... x_{z2}. We also sometimes refer to x_i as x[i]. Since we deal with fractions, we index binary encoded bit strings so that x_i is associated with the weight 2^-i. The value encoded by x[z1 : z2] is denoted by |x[z1 : z2]|, namely, the sum over i from z1 to z2 of x_i * 2^-i.
IEEE rounding. The IEEE-754-1985 Standard defines four rounding modes: round toward 0, round toward +infinity, round toward -infinity, and round to nearest (even). Based on the sign of the number, the rounding modes round toward +infinity and round toward -infinity can be reduced to the rounding modes RZ (round to zero) and RI (round to infinity) [23]. This leaves only three rounding modes: RI, RZ, and RNE (round to nearest even).
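This sign-based reduction can be written down directly. The following sketch is ours (with made-up mode names, not from the paper); it maps an IEEE rounding mode and the sign of the result to RZ, RI, or RNE, where RI means rounding away from zero.

def reduce_mode(mode: str, sign_negative: bool) -> str:
    # Hypothetical mode names used only for this illustration.
    if mode == "toward_zero":
        return "RZ"
    if mode == "to_nearest_even":
        return "RNE"
    if mode == "toward_pos_inf":
        return "RZ" if sign_negative else "RI"
    if mode == "toward_neg_inf":
        return "RI" if sign_negative else "RZ"
    raise ValueError(mode)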
Furthermore, Quach et al. [23] suggested implementing RNE by round to nearest (up), denoted by RNU. The rounding mode RNU is defined as follows: if x lies between two successive representable numbers y1 < y2, then r_RNU(x) equals the one of y1, y2 that is closest to x, and r_RNU(x) = y2 in case of a tie. The reason that RNE can be implemented by RNU is that r_RNU(x) differs from r_RNE(x) only if x is exactly midway between y1 and y2 and the least significant bit (LSB) of the binary encoding of y2 is 1. Therefore, obtaining r_RNE(x) from r_RNU(x) can be accomplished by "pulling down" the LSB when a tie occurs.
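As a concrete check of this reduction, the following sketch (ours, not the paper's circuit) works on scaled integers, where the value to be rounded is x_scaled units of 2^-extra_bits of the target ulp; it obtains round-to-nearest-even from round-to-nearest-up by pulling the LSB down on exact ties.

def rne_via_rnu(x_scaled: int, extra_bits: int) -> int:
    # Round x (given with extra_bits bits below the target LSB) to the target grid.
    ulp = 1 << extra_bits
    half = ulp >> 1
    rnu = (x_scaled + half) >> extra_bits          # round to nearest, ties go up
    tie = (x_scaled & (ulp - 1)) == half           # exactly halfway between grid points
    if tie:
        rnu &= ~1                                  # pull the LSB down -> even result
    return rnu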
For the sake of clarity, we define rounding to zero (RZ) of significands in the range [1, 4) in double precision. Note that this definition excludes the post-normalization shift that takes place when the number is in the binade [2, 4).
Definition 1. Let x be in [1, 4); then r_RZ(x) is defined by
r_RZ(x) = (x div 2^-52) * 2^-52,
where x div 2^-52 is the integer q that satisfies q * 2^-52 <= x < (q + 1) * 2^-52.
General setting. In this paper we consider a double precision multiplier. We assume that the significands are prenormalized, namely, that the values of the two significands are in the range [1, 2) and that each significand is represented by a binary string with bits in positions [0 : 52]. The exact product of the two significands is in the range [1, 4) and is encoded by a binary string with bits in positions [-1 : 104]. (Note that the weight of the bit in position [-1] is 2.) For the sake of simplicity, we ignore the exponent and sign-bit paths.
Floating point multipliers perform the computation in two phases. In the first phase, an addition
tree reduces the partial products to a carry-save encoded digit string that represents the exact
product. In the second phase, a binary string representing the rounded product is computed from
the carry-save encoded string. This paper discusses implementations of the second phase.
3 Naive IEEE rounding
In this section we review a simple but slow IEEE compliant algorithm for rounding after multiplication
[5].
3.1 Description
The input consists of two binary strings sum and carry, each having 106 bits, which are indexed from -1 to 104. The sum of the binary numbers represented by sum and carry equals the exact product exact, which lies in [1, 4).
Rounding is computed as follows (the computation of the exponent string is omitted):
1. Reduce the rounding mode to one of three rounding modes based on the sign of the product.
2. 2:1-compression. The sum and carry strings are added to obtain a single binary string X[-1 : 104], namely |X| = |sum| + |carry|. Note that since the exact product is in the range [1, 4), the most significant bit of X is in position [-1].
3. Normalization. If |X| >= 2, then |X'| = |X| / 2, and otherwise |X'| = |X|. This is implemented by a conditional shift by at most one position to the right. Note that X' is indexed from 0 to 105.
4. Compute sticky. The sticky-bit equals the OR of the bits X'[54 : 105].
5. Compute rounding decision. The rounding decision rd, which is 0 or 1, is based on the rounding mode, the bits X'[52] and X'[53], and the sticky-bit. Note that the rounding mode at this stage already incorporates the sign.
6. Increment. Compute |Y| = |X'[0 : 52]| + rd * 2^-52, and let Y be the binary string that represents the sum.
7. Post-normalize. If |Y| >= 2, then |Y'| = |Y| / 2, and otherwise |Y'| = |Y|.
The significand string of the rounded product is given by Y'. (A sketch of this procedure is given below.)
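The following sketch is our reading of the naive procedure (RNE only; exponent, sign, and mode-reduction steps omitted), not the paper's implementation. It assumes sum_ + carry equals the exact product scaled by 2^104 and returns the 53-bit rounded significand as an integer.

def naive_round_rne(sum_: int, carry: int):
    x = sum_ + carry                               # step 2: 2:1 compression
    ovf_in = (x >> 105) & 1                        # product in [2, 4)?
    sticky_extra = x & 1 if ovf_in else 0
    if ovf_in:
        x >>= 1                                    # step 3: normalize into [1, 2)
    sig = x >> 52                                  # 53 kept significand bits
    lsb = sig & 1
    rnd = (x >> 51) & 1                            # round bit
    sticky = int((x & ((1 << 51) - 1)) != 0 or sticky_extra)   # step 4
    inc = rnd & (sticky | lsb)                     # step 5: round-to-nearest-even decision
    y = sig + inc                                  # step 6: increment
    if y >> 53:                                    # step 7: post-normalize on rounding overflow
        y >>= 1
    return y, ovf_in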
3.2 Delay analysis
The latency of the compression, sticky-bit, and increment steps of the naive rounding procedure is at least logarithmic in the length of the binary strings sum and carry. The other steps require only constant delay. If every
pipeline stage can accommodate at most one logarithmic depth circuit, then an implementation of
the naive rounding procedure requires at least 3 pipeline stages.
4 The ES rounding algorithm
In this section we review injection based rounding [10], and present an implementation for double
precision that requires (under assumptions specified in Sec. 4.5) only 12 logic levels.
4.1 Injection based rounding
Rounding by injection reduces the rounding modes RI and RNU to RZ [10]. The reduction is based on adding an injection that depends only on the rounding mode, as follows: injection = 0 for RZ, injection = 2^-53 for RNU, and injection = 2^-53 + 2^-54 + ... + 2^-104 = 2^-52 - 2^-104 for RI.
The effect of adding the injection is summarized in the following equation:
r_mode(x) = r_RZ(x + injection),   (1)
where mode is one of RZ, RNU, RI.
Figure 1 depicts the reduction of RNU and RI to RZ by injection, assuming that the number to be rounded is in the range [1, 2).
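To make this reduction concrete, here is a small model of our own (not the paper's circuit) on scaled integers with extra_bits discarded bits below the target LSB: adding the mode-dependent injection and then truncating reproduces RZ, RNU, and RI. The injection correction needed for products in [2, 4) is omitted in this sketch.

def round_by_injection(x_scaled: int, extra_bits: int, mode: str) -> int:
    ulp = 1 << extra_bits
    injection = {"RZ": 0, "RNU": ulp >> 1, "RI": ulp - 1}[mode]
    return (x_scaled + injection) >> extra_bits    # plain truncation after the injection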
If the exact product, denoted by exact, is in the range [2, 4), then the injection must be fixed in order to make the reduction to RZ correct. The correction amount, denoted by inj_correct, is defined by inj_correct = 0 for RZ, inj_correct = 2^-53 for RNU, and inj_correct = 2^-52 for RI.
Therefore, if the number to be rounded is in the range [2, 4), the effect of adding the injection and the correction amount is summarized in the following equation:
r_mode(x) = r_RZ(x + injection + inj_correct),   (2)
where mode is one of RZ, RNU, RI.
Our assumption is that the injection is added in the multiplier adder array, and therefore |sum| + |carry| = exact + injection.
This completes the description of injection based rounding for numbers in the range [1, 4).
4.2 The rounding algorithm
In this section we present the new ES algorithm for rounding in floating-point multiplication that is based on injection based rounding.
Figure 2 depicts a block diagram of the ES rounding algorithm. The rounding algorithm works under the assumption that the sum and carry-strings already include the injection (but not the injection correction) and proceeds as follows:
1. The sum and carry-strings are divided into a high part and a low part. The high part consists of positions [-1 : 52] and the low part consists of positions [53 : 104].
2. The low part is input to the box that computes the carry, round, and sticky bits, defined as follows: the low parts of sum and carry are added; C[52] is the carry that this addition generates into position 52, R is the resulting bit in position 53, and S is the OR of the resulting bits in positions [54 : 104].
3. The high part is input to a line of Half Adders and produces the output (X_sum, X_carry) and the bit LX. Note that the bit LX is in position 52, and that no carry is generated into position [-2], because the exact product is less than 4 (even after adding the injection).
4. X_sum and X_carry are input to the Compound Adder, which outputs the sum Y0 and the incremented sum Y1, where |Y1| = |Y0| + 2^-51.
5. The Increment Decision box receives the round-bit (R), the carry-bit (C[52]), the LSB (LX), the MSB (Y0[-1]), and the rounding modes (RN, RI). The output signal inc indicates whether Y0 or Y1 is to be selected.
6. The most significant bits Y0[-1] and Y1[-1] indicate whether Y0 and Y1 are in the range [2, 4). Depending on these bits, Y0 and Y1 are normalized as follows: Z0 = shift_right(Y0) if Y0[-1] = 1 and Z0 = Y0 otherwise; Z1 = shift_right(Y1) if Y1[-1] = 1 and Z1 = Y1 otherwise.
7. The rounded result (except for the least significant bit) is selected between Z0 and Z1 according to the increment decision inc: Z1 is selected if inc = 1, and Z0 is selected otherwise.
8. In case the rounding mode is RNE, the least significant bit needs to be corrected, since RNE and RNU do not always result in the same least significant bit. The correction of the least significant bit is computed by two parallel paths; one path working under the assumption that the rounded result overflows (i.e., is greater than or equal to 2), and the other path working under the assumption that the rounded result does not overflow.
The path that computes the correction of the LSB under the "no-overflow" assumption is implemented by the box called "fix L (novf)". The inputs of the "fix L (novf)" box are the round bit R, the sticky bit S, and a signal RNE indicating whether the rounding mode is round to nearest even. When the output, denoted by not(pd), equals zero, the LSB should be pulled down.
The path that computes the correction of the LSB under the "overflow" assumption is implemented by the box called "fix L (ovf)". The inputs of the "fix L (ovf)" box are the LX bit, the carry-bit C[52], the round bit R, the sticky bit S, and the signal RNE. When the output, denoted by not(pd'), equals zero, the LSB should be pulled down.
Note that the pull-down signals are inactive if the rounding mode is not RNE.
9. The least significant bit of the rounded result before fixing the LSB (in case of a discrepancy between RNE and RNU) equals one of three values:
(a) if the rounded result does not overflow, then the LSB equals LX XOR C[52];
(b) if the rounded result overflows and the increment decision is not to increment, then the LSB equals Y0[51];
(c) if the rounded result overflows and the increment decision is to increment, then the LSB equals Y1[51].
The fixing of the LSB is implemented by combining (using AND gates) the pull-down signals with the corresponding candidates for the LSB signals.
The outputs of the 3 AND gates are denoted by L'(inc), L'(ninc), and L(inc). For the sake of clarity, we introduce the signal L(ninc), which equals L(inc).
10. The LSB of the rounded result equals L(ninc) if no overflow occurred and no increment took place. The LSB of the rounded result equals L(inc) if no overflow occurred and an increment took place. The LSB of the rounded result equals L'(ninc) if an overflow occurred and no increment took place. The LSB of the rounded result equals L'(inc) if an overflow occurred and an increment took place.
According to these 4 cases, the LSB of the rounded result is selected depending on the overflow signals and the increment decision.
4.3 Details
In this section we describe the functionality of three boxes in Figure 2 that have not been fully described yet.
Fix L (novf). This box belongs to the path that assumes that the product is in the range [1, 2). Recall that there might be a discrepancy between RNE and RNU when a tie occurs, namely, when the exact product equals the midpoint between two successive representable numbers. Let exact denote the value of the exact product; the "Fix L (novf)" box generates a signal not(pd) that satisfies: not(pd) = 0 iff (exact is in [1, 2), a tie occurs, and the rounding mode is RNE).
When a tie occurs there are two possibilities: (a) If RNU and RNE agree, then both yield a rounded result with an LSB equal to zero. Pulling down the LSB in this case is not required, but causes no damage. (b) If RNU and RNE disagree, then the LSB of the RNU result must be pulled down.
Without the addition of the injection, a tie occurs when the round bit of the exact product equals 1 and all less significant bits equal 0. Since an injection of 2^-53 is already included, a tie occurs when R = 0 and S = 0. Therefore the not(pd) signal is defined by not(pd) = NOT(RNE AND NOT(R) AND NOT(S)).
Fix L' (ovf). This box belongs to the path that assumes that the product is in the range [2, 4). The "Fix L' (ovf)" box generates a signal not(pd') that satisfies: not(pd') = 0 iff (exact is in [2, 4), a tie occurs, and the rounding mode is RNE).
The difference between not(pd') and not(pd) is that not(pd') is used under the assumption that the product is greater than or equal to 2. Without the addition of the injection, a tie occurs (in case of overflow) when the bit in position 52 equals 1 and all less significant bits equal 0. Since an injection of 2^-53 is already included, a tie occurs when (LX XOR C[52]) = 1, R = 1, and S = 0. Therefore the not(pd') signal is defined by not(pd') = NOT(RNE AND (LX XOR C[52]) AND R AND NOT(S)).
Increment Decision. The increment decision box has two paths, depending on whether an overflow occurs. The path working under the assumption that no overflow occurs (i.e., Y0[-1] = 0) produces an increment decision if LX + C[52] >= 2. The path working under the assumption that an overflow occurs (i.e., Y0[-1] = 1) needs to take into account the correction of the injection, denoted by inj_correct. It produces an increment decision if LX + C[52], together with R and the contribution of inj_correct, generates a carry into position 51.
Therefore the inc signal is defined by selecting between these two conditions according to Y0[-1].
4.4 Correctness proof
The tricky part in our algorithm is the correctness of the inc signal. As long as the bit Y0[-1] indicates correctly whether the exact product is greater than or equal to 2, Equations (1) and (2) imply that the inc signal is correct. But one should also consider the cases in which Y0[-1] fails to indicate correctly the binade of the exact product. Namely: (a) Y0[-1] = 0 and the exact product is greater than or equal to 2; and (b) Y0[-1] = 1 and the exact product (without the injection) is less than 2.
The source of such errors is the fact that |Y0| does not always equal the 53 most significant bits of the exact product. Recall that Y0 is computed only from the high parts X_sum and X_carry.
The lower part of the product (corresponding to positions [53 : 104] in the registers sum and carry) as well as LX do not affect the value of Y0[-1]. However, the injection might have an effect on Y0[-1], since it is added in in the multiplier array, depending on how the multiplier array is implemented (Wallace tree, etc.).
The following claim shows that when such mismatches occur, the rounded product equals 2. Moreover, in these cases both paths, the one working under the assumption that no overflow occurs and the one working under the assumption that an overflow occurs, yield the result 2. Therefore, correct rounding is obtained even when Y0[-1] fails to indicate correctly the binade of the exact product.
Claim 1. Let exact denote the exact product, and let sum and carry satisfy |sum| + |carry| = exact + injection. Then correct rounding of exact can be computed as follows:
r_mode(exact) = r_RZ(exact + injection) if Y0[-1] = 0, and
r_mode(exact) = r_RZ(exact + injection + inj_correct) if Y0[-1] = 1.
Proof: We consider two main cases: (a) Y0[-1] = 0; and (b) Y0[-1] = 1.
(a) Suppose Y0[-1] = 0. If exact < 2, then the claim follows from Eq. (1). If exact >= 2, then exact + injection < 2 + 2^-52. The reason for this is the possible contribution of LX * 2^-52 (which is 0 or 2^-52) and of C[52] and the low part, while |Y0| <= 2 - 2^-51. Therefore,
2 <= exact + injection < 2 + 2^-52.
The correction of the injection satisfies 0 <= inj_correct <= 2^-52, therefore:
2 <= exact + injection + inj_correct < 2 + 2^-51.
According to Eq. (2), in this case r_mode(exact) = r_RZ(exact + injection + inj_correct) = 2. However, in this case also
r_RZ(exact + injection) = 2,
because rounding to zero maps both intervals, [2, 2 + 2^-52) and [2, 2 + 2^-51), to 2.
(b) Suppose Y0[-1] = 1. If exact >= 2, then the claim follows from Eq. (2). If exact < 2, then since injection lies in [0, 2^-52), it follows that
2 <= exact + injection < 2 + 2^-52.
The proof now follows the proof in case (a).
Claim 1 proves that Y0[-1] can be used for controlling the selection of the right rounded result. The following claim proves that our implementation of the computation of r_mode(exact) is correct. Note that the claim does not deal with fixing the LSB to obtain RNE from RNU.
Claim 2.
1. If Y0[-1] = 0, then the algorithm outputs r_RZ(exact + injection).
2. If Y0[-1] = 1, then the algorithm outputs r_RZ(exact + injection + inj_correct).
Proof: Suppose Y0[-1] = 0. Then
exact + injection = |Y0| + (LX + C[52]) * 2^-52 + tail,
where tail lies in [0, 2^-52). This implies that
r_RZ(exact + injection) = |Y0| + (LX + C[52]) * 2^-52.
The inc signal in this case equals 1 iff the addition of LX and C[52] generates a carry into position 51. If inc = 0, then simple addition takes place:
r_RZ(exact + injection) = |Y0| + (LX XOR C[52]) * 2^-52.
If inc = 1, there are two cases: in the first case, the increment does not cause an overflow, and again simple addition takes place. If an overflow is caused, then since only 53 bits are output, the bit LX XOR C[52] is discarded. This completes the proof of the first part of the lemma.
Suppose Y0[-1] = 1. Therefore,
exact + injection + inj_correct = |Y0| + (LX + C[52]) * 2^-52 + inj_correct + tail.
This implies that
r_RZ(exact + injection + inj_correct) is determined by |Y0| and by whether a carry is generated into position 51, which is exactly what the inc signal computes in the overflow path, and the lemma follows.
4.5 Delay analysis
In this section we present a delay analysis of the rounding algorithm depicted in Fig. 3. Our analysis is based on the following assumptions:
1. Consider a carry look-ahead adder, and let dCLA denote the delay of the 53-bit adder measured in logic levels. We assume that the MSB of the sum has a delay of at most dCLA - 1 logic levels. This assumption is easy to satisfy if the carry look-ahead adder of Brent and Kung is used [3]. Otherwise, satisfying this assumption may require arranging the parallel-prefix network so that the MSB is ready one logic level earlier.
2. The compound adder is implemented so that the delay of the sum is dCLA and the delay of the incremented sum is dCLA + 1. This can be obtained by ORing the carry-generate and carry-propagate signals [27, Lemma 1].
3. Consider the box in which the carry, round, and sticky bits are computed. According to the first assumption, since the widths of this box and the compound adder are similar, the delay of the carry bit is dCLA - 1 logic levels and the delay of the round bit is dCLA logic levels. The delay of the sticky bit is estimated to be dCLA - 2 logic levels, based on the fast sticky bit computation presented in [31].
4. We assume that the delay associated with buffering a fan-out of 53 is one logic level.
Figure 3 depicts the block diagram of the injection based rounding algorithm annotated with timing estimates. We assigned dCLA the value of 8 logic levels. This implies that the sticky bit is valid after 6 logic levels, the carry-bit C[52] is valid after 7 logic levels, and the round-bit is valid after 8 logic levels. Similarly, the sum Y0 is valid after 9 logic levels, the MSB Y0[-1] is valid after 8 logic levels, the incremented sum Y1 is valid after 10 logic levels, and the MSB Y1[-1] is valid after 9 logic levels.
Figure 4 depicts implementations of the Fix L (novf), Fix L' (ovf), and Increment Decision boxes annotated with timing estimates. These timing estimates are used in Fig. 3 to obtain the estimated delay of 12 logic levels for the rounded product.
5 The YZ rounding algorithm
In this section we review and analyze the rounding algorithm of Yu and Zyner, which was reported
to have been implemented in the UltraSPARC RISC microprocessor [31]. We refer to this algorithm
as the YZ rounding algorithm.
5.1 Description
Figure 5 depicts a block diagram of the YZ rounding algorithm. This description differs from the description in [31] in two ways:
1. In [31] the sum output by the 3-bit adder has only three bits. We believe that this is a
mistake, and that the sum should have four bits (we denote this sum by Z[50 : 53]).
2. The sum and the incremented sum in [31] are fed to a 4:1 mux, which selects one of them
either shifted to the right or not. We propose to normalize the sum and the incremented
sum before the selection takes place. This early normalization helps reduce the delay of the
rounding circuit at the cost of two shifters rather than one.
The algorithm is described below:
1. The sum and carry-strings are divided into a high part and a low part. The high parts consist of positions [-1 : 53] and the low parts consist of positions [54 : 104].
2. The low part is input to the box that computes the carry and sticky bits, defined as follows: the low parts of sum and carry are added; C[53] is the carry that this addition generates into position 53, and S is the OR of the resulting bits in positions [54 : 104].
3. The high part is input to a line of Half Adders and produces the output X_sum and X_carry. Note that no carry is generated into position [-2], because the exact product is less than 4.
4. The high part (X_sum, X_carry) is divided into two parts. Positions [-1 : 50] are fed into the Compound Adder, which outputs the sum Y0 and the incremented sum Y1, where |Y1| = |Y0| + 2^-50. Positions [51 : 53] are added with the carry bit C[53] to produce the sum Z[50 : 53].
5. The processing of Z[50 : 53] is split into two paths; one working under the assumption that the rounded product will not overflow (i.e., will be less than 2), and the other working under the assumption that the rounded product will overflow.
The no-overflow path computes a rounding decision, rd[52], in the round dec. (novf) box. The rounding decision rd[52] is added with Z[50 : 52] in the sigma-novf box to produce the sum Z_novf[50 : 52]. In Claim 3 we prove that this 3-bit addition does not produce a carry bit in position 49. The sum Z_novf[50 : 52] has two roles: positions [51 : 52] are the result bits in positions [51 : 52] if no overflow occurs, and position [50] is used to detect if a carry is generated in position [50] if no overflow occurs. The bit Z_novf[50] decides whether the sum Y0 or the incremented sum Y1[0 : 50] should be selected in the no-overflow case.
The overflow path computes a rounding decision, rd'[51], in the round dec. (ovf) box. The rounding decision rd'[51] is added with Z[50 : 51] in the sigma-ovf box to produce the sum Z_ovf[50 : 51]. In Claim 3 we prove that this 2-bit addition does not produce a carry bit in position 49. The sum Z_ovf[50 : 51] has two roles: position [51] serves as the result bit in position [52] if overflow occurs, and position [50] is used to decide whether an increment should take place in the upper part.
6. The decision which path should be chosen is made by the select decision box. First, an overflow signal ovf is computed from the most significant bits Y0[-1] and Y1[-1] and from Z_novf[50] (Eq. 11).
The overflow signal ovf determines whether Z_ovf[50] or Z_novf[50] is chosen as the carry bit that affects position [50], and therefore, determines the increment decision inc (Eq. 12):
inc = Z_ovf[50] if ovf = 1, and
inc = Z_novf[50] if ovf = 0.
7. The two least significant bits of the rounded product are computed as follows:
If no overflow occurs, then result[51 : 52] = Z_novf[51 : 52]. Therefore the lower mux selects these bits for result[51 : 52] when ovf = 0.
If an overflow occurs, then result[52] = Z_ovf[51]. The bit result[51] depends on whether an increment takes place or not: result[51] = Y1[50] if inc = 1, and result[51] = Y0[50] if inc = 0.
Note that inc = Z_ovf[50] if ovf = 1. Since the signal Z_ovf[50] is ready earlier than inc, we use Z_ovf[50] to control the selection:
result[51] = Y1[50] if Z_ovf[50] = 1, and
result[51] = Y0[50] if Z_ovf[50] = 0.
The selection between Y1[50] and Y0[50] is done by the sel multiplexer in Fig. 5.
8. The most significant bits Y0[-1] and Y1[-1] indicate whether Y0 and Y1 are in the range [2, 4). Depending on these bits, Y0 and Y1 are normalized as follows: Z0 = shift_right(Y0) if Y0[-1] = 1 and Z0 = Y0 otherwise; Z1 = shift_right(Y1) if Y1[-1] = 1 and Z1 = Y1 otherwise.
9. The rounded result (except for the least significant bits) is selected between Z0 and Z1 according to the increment decision inc signal: Z1 is selected if inc = 1, and Z0 otherwise.
5.2 Correctness
In this section we provide a proof that adding the rounding decision does not generate a carry bit in position 49. This claim applies both to the no-overflow path and to the overflow path.
Claim 3. Let Z[50 : 53] denote the sum that is output by the 3-bit adder as depicted in Fig. 5. Let rd[52] denote the rounding decision for the no-overflow path, and let rd'[51] denote the rounding decision for the overflow path. Then, adding rd'[51] to Z[50 : 51] does not generate a carry into position 49 (Eq. 13), and adding rd[52] to Z[50 : 52] does not generate a carry into position 49 (Eq. 14).
Proof: The partial compression [8] caused by the half-adder line bounds the value of |X_sum[50 : 53]| + |X_carry[50 : 52]|. This follows from the fact that X_sum[i] and X_carry[i - 1] cannot both be equal to one. Adding C[53] increases the above range by 2^-4 (relative to the weight of position 49), which bounds |Z[50 : 53]|. The contribution of rd'[51] * 2^-2 is in the range [0, 4/16], and therefore, Eq. 13 follows. The contribution of rd[52] * 2^-3 is in the range [0, 2/16], and therefore, Eq. 14 follows.
5.3 Delay analysis
Figure 6 depicts the YZ rounding algorithm annotated with timing estimates. We use the same assumptions on the delays of signals that are used in Sec. 4.5. We argue that at least 16 logic levels are required. The path in which the sum and incremented sum are computed does not lie on the critical path. The critical path consists of the carry-bit computation, the 3-bit adder, the round dec. (novf) box, the sigma-novf box, the select decision box, a driver, and the upper mux.
We considered the following optimizations to minimize delay for a lower bound on the required
number of logic levels:
1. The 3-bit adder is implemented by a conditional sum adder; the late carry-in bit C[53] selects
between the sum and the incremented sum. This is a fast implementation because the bits
of X carry and X sum are valid after one logic level and the carry-bit C[53] is valid after 7 logic
levels.
2. The rounding decision boxes are implemented by cascading two levels of multiplexers that are controlled by Z[52 : 53] in the no-overflow path and by Z[51 : 52] in the overflow path. In the overflow path, Z[53] is combined with the sticky-bit, and hence the rounding decision requires 3 logic levels. In the no-overflow path, only 2 logic levels are required.
3. The addition of the rounding decision bit requires only one logic level using a conditional sum adder.
4. The inc signal is valid after 3 more logic levels, due to the need to compute the signal ovf in
two logic levels (see Eq. 11), and one selection according to Eq. 12.
5. The inc signal passes through a driver due to the large fanout. This driver incurs a delay of
one logic level, and controls the upper mux to output the result after 16 logic levels.
6 The QTF rounding algorithm
Quach et al. [23] presented methods for IEEE compliant rounding. Their technique is a generalization
of the rounding algorithm of Santoro et al. [24]. In this section we present a rounding
algorithm that is based on the method of Quach et al. while aiming for minimum delay.
Apart from reducing the rounding modes to RZ, RNU, and RI, the key idea used in the methods
of Quach et al. and Santoro et al. is to inject a prediction bit that is based on the rounding mode
and the values of sum[53] and carry[53]. The injection of the prediction bit reduces the number
of possibilities of the rounded result.
In this section we deviate from Quach et al. [23] in the following points:
1. The presentation in the paper of Quach et al. is separated according to the rounding mode.
Since we are investigating rounding algorithms that support all the rounding modes, we
integrated the rounding modes into one algorithm.
2. Quach et al. suggest several options for the choice of the prediction logic in RNU. Only one possibility was suggested in modes RZ and RI. Since the prediction logic lies on the critical path, we chose to simplify the prediction logic as much as possible by defining pred as a simple function of the rounding mode and of the carry-save digits sum[53] and carry[53].
3. Quach et al. separate the rounding decision and the compound adder. They use a 3-way compound adder that computes sum, sum + 1, and sum + 2. The correct sum is selected by the control logic. We are interested in a faster design, and therefore, we break the 3-way adder into a Half-Adder line, a 2-way compound adder, and a mux. The control logic uses an output of the 2-way compound adder, and the LSB (in case of no overflow) is generated by the control logic as well as the increment decision.
6.1 Description
Figure 7 depicts a block diagram of a rounding algorithm that we suggest based on Quach et al. [23]. There are many similarities between the rounding algorithm based on injection rounding and the rounding algorithm based on Quach et al., so we point out the differences and the new notations.
Before being input to the compound adder, the high parts of sum and carry pass through two lines of Half-Adders. The first line makes room for the prediction bit. The second pass enables separating the bit LX' in position [52] (this is, in fact, part of a 3-way compound adder). The increment decision has two paths: one for overflow and the other for no-overflow. The MSB Y0[-1] selects which path outputs the increment decision inc. In addition, the increment decision box computes the LSB (before fixing for RNE) in case an overflow does not occur.
6.2 Details
In this section we describe the details of the increment decision box and the LSB-fix for RNE box.
Increment decision box. The outputs of the increment decision box are the increment decision inc and the bit L, which equals the LSB of the rounded product before fixing in case no overflow occurs. The increment decision is partitioned into two paths: one for the case that an overflow occurs, which computes the signal inc_ovf, and the other for the case that no overflow occurs, which computes the signal inc_novf. The following equations define the signals inc_ovf, inc_novf, and inc.
The signal inc_novf is computed from R, C[52], LX', pred, and the rounding mode, and the signal inc_ovf is computed from S, R, C[52], LX', pred, and the rounding mode; the conditions they implement are derived case by case in the correctness proof of Section 6.3. The output inc equals inc_novf or inc_ovf according to the bit Y0[-1]:
inc = inc_novf if Y0[-1] = 0, and
inc = inc_ovf if Y0[-1] = 1.
The bit L, which equals the LSB of the rounded product (before fixing) in case no overflow occurs, is defined by:
L = LX' XOR C[52] if the mode is RZ,
L = R XOR LX' XOR C[52] if the mode is RNU, and
L is a function of LX', C[52], R, S, and pred if the mode is RI.
Note that the case of RI is complicated due to the possibility that pred differs from C[52]. If pred = C[52], then the contributions of the prediction and of C[52] cancel out; if pred differs from C[52], then the effect of the wrong prediction is reversed by taking pred XOR C[52] into account.
LSB-fix for RNE. The LSB-fix for RNE box outputs two signals: not(pd), which is used to pull down the LSB if a "tie" occurs but no overflow occurs, and not(pd'), which is used to pull down the LSB if a "tie" and an overflow occur. These signals are defined as follows.
In contrast to injection based rounding, no injection or prediction is contained in the LX', R, and S-bit computation in RNE. If no overflow occurs, a "tie" occurs iff R = 1 and S = 0, in which case the LSB should be pulled down for RNE. Therefore,
not(pd) = NOT(RNE AND R AND NOT(S)).
If overflow occurs, a "tie" occurs iff (LX' XOR C[52]) = 1, R = 0, and S = 0, in which case the LSB should be pulled down for RNE. Therefore,
not(pd') = NOT(RNE AND (LX' XOR C[52]) AND NOT(R) AND NOT(S)).
6.3 Correctness
In this section we prove the correctness of the selection signal inc. The proof is divided into two
parts. In the rst part, we assume that Y 0[ the exact product is in the range [2; 4). In
the second part, we prove that even if the Y 0[ 1] signals over
ow incorrectly, then the selection
signal inc is still correct.
correctly whether the exact product is in the range [2; 4).
Then the inc signal signals correctly whether an increment is required for rounding.
Proof: We consider separately the cases of overflow and no overflow. For each case we consider the three possible rounding modes. The question which we address is whether the rounding decision, in conjunction with the compression of the lower part of the carry-save representation, produces a carry into position 51. The inc signal should be 1 iff a carry is generated into position 51.
Suppose no overflow occurs, namely Y0[-1] = 0.
1. In rounding mode RZ, only truncation takes place, and therefore, a carry into position 51 is generated iff LX' + C[52] >= 2.
2. In rounding mode RNU, the rounding decision is to increment (in position 52) if the round-bit equals 1. This increment, together with LX' and C[52], may generate a carry out of position 52; hence, a carry is generated into position 51 iff LX' + C[52] + R >= 2.
3. In rounding mode RI, the rounding decision is to increment (in position 52) if R OR S = 1. One needs to take into account the prediction that was already added to the product. We consider two sub-cases:
(a) If C[52] = pred, then the contributions of pred and C[52] cancel out, and therefore, C[52] should be ignored. The rounding decision generates a carry into position 51 iff (R OR S) AND LX' = 1.
(b) If C[52] differs from pred, then this implies that pred = 1 and C[52] = 0. Therefore, the rounding decision without the prediction would have been to increment in position 52. Since pred = 1, this increment already took place, and an additional carry should not be generated into position 51.
Suppose that an overflow occurs, namely, Y0[-1] = 1.
1. In rounding mode RZ, since only truncation takes place, this case is identical to the case of no overflow.
2. In rounding mode RNU, the rounding decision is determined by the bit in position 52, which equals LX' XOR C[52]. Therefore, there are two cases: either a carry is generated into position 51 since LX' + C[52] >= 2, or a carry is generated into position 51 by the rounding decision. Combining these cases implies that a carry is generated into position 51 iff LX' OR C[52] = 1.
3. In rounding mode RI, we consider two cases: (i) If C[52] = pred, then we may ignore C[52] and the prediction since their contributions cancel out. In this case, the rounding decision is to increment iff R OR S = 1 or LX' = 1. (ii) If C[52] differs from pred, then pred = 1 and C[52] = 0.
We consider two sub-cases:
(a) If LX' = 1, then the effect of the prediction was restricted to changing LX' from 0 to 1. Therefore, the rounding decision is based on R OR S. Since R OR S = 1 in this sub-case, the rounding decision is to increment.
(b) If LX' = 0, then the effect of the prediction was to generate a carry into position 51 in the second half-adder line and to change LX' from 1 to 0. This means that without the prediction, LX' would have been equal to 1, which implies that the rounding decision would have been to increment. Since an increment already took place, an additional increment is not required.
The selection between inc_ovf and inc_novf is controlled by Y0[-1], although Y0[-1] might not signal correctly the case of overflow. The following claim shows that when Y0[-1] does not signal overflow correctly, both choices are equal, and hence, the inc signal is correct.
Claim 5. Assume that Y0[-1] does not signal an overflow correctly, namely, Y0[-1] = 0 and exact >= 2, or Y0[-1] = 1 and exact < 2. Then, inc_ovf = inc_novf.
Proof: The proof is divided into two cases:
1. Y0[-1] = 0 and exact >= 2. This case can only occur when pred = 1. Therefore, it is restricted to rounding mode RI. Since the unaccounted contribution of the prediction equals at most 2^-52, it follows that LX' = 1 and C[52] = 0. This implies that in this case inc_ovf = inc_novf, as required.
2. Y0[-1] = 1 and exact < 2. This discrepancy can only occur if the prediction pushes the high part up to 2. Therefore, since exact is smaller than 2 and differs from 2 by at most a multiple of 2^-52, it follows that the bits of exact in positions [0 : 52] are all ones. This implies that LX' = 1. Consider the three rounding modes: in RZ, inc_ovf = inc_novf; in RI, the rounding decision is to increment in both paths, excluding the possibility of a discrepancy in this case; in RNU, the same holds, and the claim follows.
6.4 Delay analysis
Figure 8 depicts the rounding algorithm based on Quach et al. [23] with delay annotations. The delay
assumptions that are used here are similar to those used in the two previous rounding algorithms.
The rounding algorithm depicted in Fig. 8 uses a prediction logic which lies on the critical path.
The delay of the prediction logic is two logic levels.
Following Quach et al., Fig. 8 depicts a non-optimized processing order in which the post-
normalization shift takes place after the round selection. The increment decision box is assumed
to be organized as follows: the bits S, C[52], R, and Y0[-1] are valid after 6, 7, 8, and 10 logic levels, respectively. To minimize delay we implement the rounding equations by 4 levels of multiplexers, so that the results can be selected conditionally as the signals arrive. Thus, a total delay of 15
logic levels is obtained. By performing post-normalization before the round selection takes place,
one logic level can be saved to obtain a total delay of 14 logic levels.
7 Higher Precisions
How do these rounding algorithms scale when higher precisions are used? One can see that the
parts in the presented rounding algorithms that depend on the length of the significands are:
the half-adders, the compound adder, the sticky, round, and carry-bit computation, the selection
multiplexers, and the drivers for amplifying the signals that control the wide multiplexers.
When precision is increased, the widths of the upper and lower parts of the carry-save strings grow, but they still stay almost equal to each other. This implies that our assumptions on the relative delay of the carry-bit computation and the compound adder do not need to be changed. Moreover, it is expected that as precision grows, the gap between the delay of computing the carry-bit and the sticky-bit grows, so that the sticky-bit computation will not lie on the critical path. This implies that a first order estimate (ignoring additional delay due to increased fanout and interconnection length) of the delays of the rounding algorithms for precision p can be stated as follows (a small numeric helper is sketched after this list):
1. The delay of the injection based rounding algorithm is 4 logic levels plus the delay of the sum computation of the p-bit compound adder, dCLA(p).
2. The delay of the YZ rounding algorithm is 8 logic levels plus dCLA(p).
3. The delay of the rounding algorithm based on Quach et al. [23] with the optimization (in which the post-normalization takes place before the selection) is 6 logic levels plus dCLA(p).
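For reference, a trivial helper of our own that instantiates these first-order estimates; with dCLA(53) = 8 it reproduces the 12/14/16 logic-level figures quoted for double precision.

def latency_estimates(d_cla: int) -> dict:
    # Assumes the critical path is the p-bit compound adder plus a fixed number of levels.
    return {"ES": d_cla + 4, "QTF": d_cla + 6, "YZ": d_cla + 8}

# latency_estimates(8) -> {'ES': 12, 'QTF': 14, 'YZ': 16}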
8 Summary and Conclusions
A new IEEE compliant floating-point rounding algorithm for computing the rounded product from a
carry-save representation of the product is presented. The new rounding algorithm is compared with
two previous rounding algorithms. To make the comparison as relevant as possible, we considered
optimizations of the previous algorithms which improve the delay. For each rounding algorithm,
a logical description and a block diagram is given, the correctness is proven, and the latency is
analyzed.
Our conclusion is that the new ES rounding algorithm is the fastest rounding algorithm, provided
that an injection is added in during the reduction of the partial products into a carry-save
encoded digit string. With the ES algorithm the rounded product can be computed in 12 logic
levels in double precision (i.e., when the significands are 53 bits long). In "precision independent"
terms, the critical path consists of a compound adder and 4 additional logic levels.
If the injection is not added in during the reduction of the partial products into a carry-save
encoded digit string, then an extra step of adding in the injection is required. This step amounts
to a carry-save addition, and the latency associated with it is that of a full-adder, namely, 2 logic
levels. Thus, if the injection is added in late, then the latency of the ES rounding algorithm is 14
logic levels.
The addition of the injection during the reduction of the partial products can be accomplished
without a slowdown or with a very small slowdown. The justification for this is: (a) the partial products are usually obtained by Booth recoding and by selecting (e.g., with a 5:1 multiplexer), and hence, are valid much later than the injection; and (b) the delay of adding the partial products does not increase strictly monotonically as a function of the number of partial products. The delay incurred by adding in the injection, if any, depends on the length of the significands and on the organization
of the adder tree.
The other two rounding algorithms do not require an injection, and in double precision, the
latency of the QTF rounding algorithm is 14 logic levels. The critical path consists of a compound
adder and 6 additional logic levels. The YZ rounding algorithm ranks as the slowest rounding
algorithm, with a latency of 16 logic levels, and the critical path consists of a compound adder and
8 additional logic levels.
--R
Area and Performance Optimized CMOS Multipliers.
Fast Multiplication: Algorithms and Implementation.
A Regular Layout for Parallel Adders.
Method for rounding using redundant coded multiply result.
Some schemes for parallel multipliers.
Method and apparatus for rounding in high-speed multipliers.
Recoders for partial compression and rounding.
Fast multiplier bit-product matrix reduction using bit-ordering and parity generation.
A Dual Mode IEEE multiplier.
A comparison of three rounding algorithms for IEEE floating-point multiplication.
Parallel method and apparatus for detecting and completing floating point operations involving special operands.
IEEE standard for binary floating-point arithmetic.
Multistep gradual rounding.
Design strategies for optimal multiplier circuits.
Design Issues in High Performance Floating Point Arithmetic Units.
The SNAP project: Design of floating point arithmetic units.
A method for speed optimized partial product reduction and generation of fast parallel multipliers using an algorithmic approach.
Reducing the number of counters needed for integer multiplication.
Generation of high speed CMOS multiplier-accumulators.
Floating Point Multiplier Performing IEEE Rounding and Addition in Parallel.
On fast IEEE rounding.
Rounding algorithms for IEEE multipliers.
How to half the latency of IEEE compliant floating-point multiplication.
A Reduced-Area Scheme for Carry-Select Adders.
A very fast multiplication algorithm for VLSI implementation.
A suggestion for parallel multipliers.
A new design technique for column compression multipliers.
Floating point multiplier.
Method and apparatus for partially supporting subnormal operands in floating point multiplication.
Shared rounding hardware for multiplier and division/square root unit using conditional sum adder.
Circuitry for rounding in a floating point multiplier.
--TR
--CTR
Peter-Michael Seidel , Guy Even, Delay-Optimized Implementation of IEEE Floating-Point Addition, IEEE Transactions on Computers, v.53 n.2, p.97-113, February 2004
Ahmet Akkas , Michael J. Schulte, Dual-mode floating-point multiplier architectures with parallel operations, Journal of Systems Architecture: the EUROMICRO Journal, v.52 n.10, p.549-562, October 2006
Nhon T. Quach , Naofumi Takagi , Michael J. Flynn, Systematic IEEE rounding method for high-speed floating-point multipliers, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.12 n.5, p.511-521, May 2004 | floating-point multiplication;IEEE 754 Standard;floating-point arithmetic;IEEE rounding |
350326 | Integer Multiplication with Overflow Detection or Saturation. | Abstract: High-speed multiplication is frequently used in general-purpose and application-specific computer systems. These systems often support integer multiplication, where two n-bit integers are multiplied to produce a 2n-bit product. To prevent growth in word length, processors typically return the n least significant bits of the product and a flag that indicates whether or not overflow has occurred. Alternatively, some processors saturate results that overflow to the most positive or most negative representable number. This paper presents efficient methods for performing unsigned or two's complement integer multiplication with overflow detection or saturation. These methods have significantly less area and delay than conventional methods for integer multiplication with overflow detection or saturation. | Introduction
1.1 Multiplication
Multiplication is an essential arithmetic operation for general purpose computers
and digital signal processors. High-performance systems support parallel multiplication
in hardware. Various high-speed parallel multipliers have been proposed and
realized. Most parallel multiplier designs can be divided into two classes; array
multipliers and tree multipliers. Array multipliers consist of an array of similar cells
that generate and accumulate the partial products [3]. Tree multipliers generate all
partial products in parallel, use a tree of counters to reduce the partial products
to sum and carry vectors and then sum these vectors, using a fast carry-propagate
adder. Several methods have been developed for reducing the partial products [1], [2].
The regular structure of array multipliers facilitates their implementation in
VLSI technology. The delay of array multipliers, however, is proportional to the
operand length. On the other hand, tree multipliers offer a delay proportional
to the logarithm of the operand length. The main drawback of tree multipliers is
their irregular interconnection structure, which makes them difficult to implement in
VLSI. Thus, tree multipliers are preferred for high performance systems, while array
multipliers are preferred for systems requiring less area. New implementations and
optimizations of parallel multipliers are still active research areas [5]. More detailed
descriptions of array and tree multipliers are given in the next chapter.
1.2 Overflow
To avoid growth in word length, most instruction set architectures and high-level languages require that arithmetic operations return results with the same length as their
input operands. If the result of an integer arithmetic operation on n-bit numbers
cannot be represented by n bits, overflow occurs and needs to be detected. For
integer multiplication, the method for overflow detection also depends on whether
the operands are signed or unsigned integers. For unsigned multiplication, overflow
only occurs if the result is larger than the largest unsigned n-bit number. For signed
integer multiplication, overflow also occurs if the result is smaller than the minimum representable n-bit number. For two's complement multiplication there is also a difference between fractional and integer overflow detection. Since overflow only occurs for two's complement fractional numbers when -1 is multiplied by -1, it is easy to detect overflow when multiplying two's complement fractions.
An important design issue in computer architecture is to decide what to do when overflow occurs. Typically, overflow results in an overflow flag being set. This overflow flag can then be used to signal an arithmetic exception [9].
1.3 Saturation
In most general purpose processors, overflow is handled by setting an exception flag. More recent implementations for digital signal processing and multimedia applications saturate results that overflow to the most positive or most negative representable number [11], [12]. For two's complement integers this is -2^(n-1) for results that are too negative and 2^(n-1) - 1 for results that are too positive. For unsigned integers, results that are too large saturate to 2^n - 1.
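These saturation bounds can be expressed directly. The helper below is our own illustration, not part of the thesis:

def saturate(value: int, n: int, signed: bool) -> int:
    # Clamp value to the n-bit two's complement or unsigned range.
    if signed:
        lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    else:
        lo, hi = 0, (1 << n) - 1
    return max(lo, min(hi, value))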
1.4 Thesis Overview
Previous studies have focussed on overflow detection in two's complement addition
[13], multi-operand addition [14], fractional arithmetic operations [10] and generalized
signed-digit arithmetic [15]. This thesis presents efficient techniques for integer multiplication with overflow detection or saturation. Most existing computers detect overflow in integer multiplication by computing a 2n-bit product and then testing the most significant bits to see if overflow has occurred. The methods proposed in this thesis only calculate n or n + 1 bits of the product. This leads to a significant
reduction in area and delay.
Chapter 2 presents array multipliers, tree multipliers, and conventional methods
for overflow detection. Chapter 3 introduces new methods for overflow detection and
saturation for unsigned integer multiplication. Chapter 4 focuses on overflow detection
and saturation for two's complement integer multiplication. Chapter 5 presents
the component counts and area and delay estimates for unsigned and two's complement
parallel multipliers that use either the conventional or the proposed methods
for overflow detection. Chapter 6 discusses future work and gives conclusions.
Chapter 2
Previous Research
2.1 Unsigned Parallel Multipliers
Multiplication of two n-bit unsigned numbers is shown in Figure 2.1, where A is the sum of a_i * 2^i and B is the sum of b_j * 2^j for 0 <= i, j <= n - 1. Multiplication produces a 2n-bit product. If the n least significant bits are
used for the result, then overflow occurs when the actual product uses more than n
bits. In other words, overflow occurs when the product is greater or equal to 2 n .
Figure 2.1: Multiplication of A and B.
With conventional methods, overflow is detected after the 2n-bit result is produced. This is done by simply logically ORing together the n most significant bits,
such that
V = p_{2n-1} + p_{2n-2} + ... + p_n,
where V is one if overflow occurs and + denotes logical OR. Although calculating 2n product bits and then detecting overflow leads to unnecessary area and delay, most computers that provide integer multiplication with overflow detection use this approach. If the system requires saturation, the saturated least significant product bits are computed as
p'_i = p_i + V for 0 <= i <= n - 1,
which sets the product to 2^n - 1 when overflow occurs.
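This conventional flow can be captured by a simple reference model (our own sketch, not the thesis's hardware): compute the full 2n-bit product, OR the upper half into the overflow flag, and force all ones when saturation is required.

def unsigned_mul_overflow(a: int, b: int, n: int):
    mask = (1 << n) - 1
    full = (a & mask) * (b & mask)        # exact 2n-bit product
    low = full & mask                     # n least significant product bits
    v = int((full >> n) != 0)             # OR of the n most significant product bits
    saturated = mask if v else low        # saturate to 2**n - 1 on overflow
    return low, v, saturated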
2.1.1 Unsigned Array Multipliers
In general, array multipliers are slower than tree multipliers. In spite of this speed
disadvantage, however, array multipliers are often used due to their regular layout, low
area, and simplified design.
A block diagram of an unsigned 8 by 8 array multiplier is shown in Figure 2.2. Each diagonal in Figure 2.2 corresponds to a column in the multiplication matrix in Figure 2.1. A modified half adder (MHA) is a half adder with an AND gate,
and a modified full adder (MFA) is a full adder with an AND gate. The AND gates
generate the partial products. Full adders and half adders add the generated partial
products. Sum outputs are connected diagonally and carry outputs are connected
vertically. The last row of adders, which are connected from left to right, generates
the n most significant product bits. The critical path through this multiplier is
shown with dashed lines. Since almost half of the latency is due to the bottom row
of adders, this row may be replaced by a fast carry-propagate adder. Although this
decreases the overall delay, it has a negative impact on the design's regularity. An n by n unsigned array multiplier uses n^2 AND gates together with an array of full adders and half adders.
The conventional method for overflow detection requires the n most significant
product bits to be calculated. These product bits are then ORed together to produce the overflow flag, as shown in Figure 2.2. The conventional method for saturating multiplication is accomplished by ORing V with p_0 to p_{n-1}, as shown in Figure 2.3.
Figure 2.2: Unsigned Array Multiplier with Conventional Overflow Detection.
Figure 2.3: Unsigned Array Multiplier with Conventional Saturation.
2.1.2 Unsigned Tree Multipliers
Tree multipliers have three main parts: partial product generation, partial product reduction, and final carry-propagate addition. Various reduction schemes have been developed over the years. Two of the most well-known methods for multiplier tree designs are those proposed by Wallace [2] and Dadda [1]. Wallace's strategy combines three rows of partial product bits using (3, 2) and (2, 2) counters to produce two rows. Dadda's strategy leads to a simpler counter tree but requires a larger final carry-propagate adder. There are hybrid approaches between these two methods that offer cost and speed trade-offs for VLSI implementations. These reduction schemes differ in the number and placement of counters in the tree and the size of the final carry-propagate adder. The tree multipliers presented in this thesis use
Dadda's method, since it allows component counts to easily be determined based on
n. Since our overflow detection method does not depend on the reduction strategy,
similar savings are expected for other tree multipliers.
Figure 2.4 shows a dot diagram of an 8 by 8 unsigned Dadda tree multiplier. Dot
diagrams are often used to illustrate reduction strategies in tree multipliers [2].
With this technique, a dot represents a partial product bit, a plain diagonal line
represents a full adder and a crossed diagonal line represents a half adder. The
two bottom rows of the dot diagram correspond to sum and carry vectors that are
combined using the final carry-propagate adder to produce the product.
Figure 2.4: Dadda Reduction Scheme.
Dadda multipliers require n^2 AND gates, full adders and half adders for the reduction stages, and a final carry-propagate adder (CPA). The number of stages, based on n, is shown in Table 2.1.
For example a 24 by 24 Dadda multiplier requires seven reduction stages. The
worst case delay path is equal to the delay of partial product generation, plus the
delay for the reduction stages, plus the delay for the final carry-propagate addition.
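The stage counts in Table 2.1 follow from the standard Dadda height sequence 2, 3, 4, 6, 9, 13, 19, 28, 42, 63, ...; the small helper below is our own illustration and reproduces them (for example, it returns 7 for n = 24).

def dadda_stages(n: int) -> int:
    heights = [2]
    while heights[-1] < n:
        heights.append(heights[-1] * 3 // 2)   # d_{j+1} = floor(3/2 * d_j)
    return len(heights) - 1                    # stages needed to reduce n rows to 2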
With the conventional method, overflow is detected with a tree of (n - 1) 2-input OR gates. The delay of the tree of OR gates is equivalent to ceil(log2(n)) 2-input OR gate delays, as shown in Figure 2.5.
Range of n      s
n = 3           1
n = 4           2
5 <= n <= 6     3
7 <= n <= 9     4
10 <= n <= 13   5
14 <= n <= 19   6
20 <= n <= 28   7
29 <= n <= 42   8
43 <= n <= 63   9
Table 2.1: Number of Stages s for n-Bit Dadda Tree Multipliers.
Figure 2.5: Tree of 2-Input OR Gates for Overflow Detection.
2.2 Two's Complement Multipliers
Two's complement numbers A and B and their product P have the values
A = -a_{n-1} 2^{n-1} + Σ_{i=0}^{n-2} a_i 2^i,   B = -b_{n-1} 2^{n-1} + Σ_{i=0}^{n-2} b_i 2^i,
P = A x B = a_{n-1} b_{n-1} 2^{2n-2} + Σ_{i=0}^{n-2} Σ_{j=0}^{n-2} a_i b_j 2^{i+j}
    - 2^{n-1} Σ_{i=0}^{n-2} a_i b_{n-1} 2^i - 2^{n-1} Σ_{j=0}^{n-2} a_{n-1} b_j 2^j,    (2.5)
where a_i and b_i are the bits of A and B. Overflow occurs when P > 2^{n-1} - 1 or P < -2^{n-1}.
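As a quick numerical sanity check of these value formulas, the following C sketch (an assumed helper for n = 8) evaluates an operand from its raw bits using the expression above and flags products outside [-2^{n-1}, 2^{n-1} - 1].

#include <stdio.h>
#include <stdint.h>

#define N 8

/* Evaluate A = -a_{n-1}*2^{n-1} + sum_{i<n-1} a_i*2^i from the raw bit pattern. */
int value_from_bits(uint8_t bits) {
    int v = -(int)((bits >> (N - 1)) & 1) * (1 << (N - 1));
    for (int i = 0; i < N - 1; i++)
        v += ((bits >> i) & 1) << i;
    return v;
}

int main(void) {
    uint8_t a = 0xE5, b = 0x09;                 /* bit patterns of -27 and 9 */
    int A = value_from_bits(a), B = value_from_bits(b);
    int P = A * B;
    int overflow = (P > (1 << (N - 1)) - 1) || (P < -(1 << (N - 1)));
    printf("A=%d B=%d P=%d overflow=%d\n", A, B, P, overflow);  /* -27 9 -243 1 */
    return 0;
}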
Multiplication of two's complement numbers generates signed partial products, as
shown in Figure 2.6. Since some of the partial products have negative weights, they
should be subtracted, rather than added. This makes the design difficult to implement,
because it requires both adder and subtracter cells. Consequently, several techniques
have been proposed to handle partial products with negative and positive weights,
such as the Baugh-Wooley Algorithm [6] and its variations [7], [8], and Booth's
Algorithm [16].
Figure 2.6: Two's Complement Multiplication Matrix.
The Baugh-Wooley algorithm provides a method for modifying the partial product
matrix so that all the partial product bits have positive weights. This algorithm
and its modified form are often used to perform two's complement multiplication.
Figure 2.7: Modified Two's Complement Multiplication Matrix.
Two's complement multiplication is often realized using a variation of the Baugh-Wooley
algorithm called the Complemented Partial Product Word Correction Algorithm.
With this implementation, partial product bits containing a_{n-1} or b_{n-1},
but not both, are complemented, and ones are added to columns n and 2n - 1. This
is equivalent to taking the two's complement of the two negative terms in Equation
2.5. The multiplication matrix for this implementation is shown in Figure 2.7.
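The word-correction identity can be checked exhaustively in software. The following C sketch (an assumed verification harness for n = 8, not a circuit description) sums the complemented partial products, adds the ones in columns n and 2n - 1, and confirms that the result modulo 2^{2n} equals the two's complement product.

#include <stdio.h>
#include <stdint.h>

#define N 8

/* Check the Complemented Partial Product Word Correction scheme for n = 8:
   bits containing exactly one of a_{n-1}, b_{n-1} are complemented, and ones
   are added in columns n and 2n-1; the sum mod 2^{2n} must equal a*b. */
int main(void) {
    for (int a = -128; a <= 127; a++) {
        for (int b = -128; b <= 127; b++) {
            uint8_t ua = (uint8_t)a, ub = (uint8_t)b;
            uint32_t sum = 0;
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) {
                    uint32_t bit = ((ua >> i) & 1) & ((ub >> j) & 1);
                    if ((i == N - 1) ^ (j == N - 1))   /* exactly one sign bit */
                        bit ^= 1;                      /* complement it */
                    sum += bit << (i + j);
                }
            sum += (1u << N) + (1u << (2 * N - 1));    /* correction ones */
            sum &= (1u << (2 * N)) - 1;                /* keep 2n bits */
            uint32_t ref = (uint32_t)(a * b) & ((1u << (2 * N)) - 1);
            if (sum != ref) { printf("mismatch at a=%d b=%d\n", a, b); return 1; }
        }
    }
    printf("word-correction identity verified for all 8-bit operands\n");
    return 0;
}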
2.2.1 Two's Complement Array Multipliers
The design of an array multiplier that uses the Complemented Partial Product Word
Correction Algorithm and conventional overflow detection is shown in Figure 2.8.
The design shown in this figure is similar to the unsigned array multiplier design in
Figure 2.2. AND gates in the left-most column are replaced by NAND gates, and
the last row of MFAs is replaced by Negating Modified Full Adders (NMFAs). The
specialized half adder (SHA) in the bottom right corner is a half adder that takes
the sum and carry bits of the previous row and adds them with '1'. This cell has
approximately the same area and delay as a regular half adder. The last product
bit p_{2n-1} is inverted to add the one in column 2n - 1. Inverting p_{2n-1} has the same
effect as adding one in column 2n - 1, because the carry out from this column is
ignored.
In Figure 2.8, the bottom two rows of cells, consisting of n XOR gates and (n - 1)
OR gates, are dedicated to overflow detection. The XOR gates identify whether
p_{n-1} differs from any of the more significant product bits p_n to p_{2n-1}. The outputs
from the XOR gates are combined to determine if the overflow flag V should be set.
The logic equation for the overflow detection flag is
V = (p_{n-1} XOR p_n) + (p_{n-1} XOR p_{n+1}) + ... + (p_{n-1} XOR p_{2n-1}).
Figure 2.8: Two's Complement Array Multiplication with Conventional Overflow Detection.
Saturating multiplication is implemented by adding an n-bit 2-to-1 multiplexer,
as shown in Figure 2.9. For two's complement multiplication, if the product overflows,
the saturated product is determined from the sign bits of A and B. If
a_{n-1} XOR b_{n-1} = 1, a negative overflow has occurred and the product saturates
to -2^{n-1}. On the other hand, if a_{n-1} XOR b_{n-1} = 0, a positive overflow has
occurred and the product saturates to 2^{n-1} - 1. If V = 0, then overflow has not
Figure 2.9: Conventional Two's Complement Saturation.
occurred and the saturated product is given by the n least significant product bits.
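The following C sketch (a behavioral model for n = 8, not the mux-based circuit of Figure 2.9; the function name is an assumption) combines the XOR-based detection equation with the sign-selected saturation rule described above.

#include <stdio.h>
#include <stdint.h>

#define N 8

/* Conventional two's complement overflow detection and saturation for n = 8:
   V = OR over i = n..2n-1 of (p_{n-1} XOR p_i); on overflow the result is
   -2^{n-1} when the operand signs differ and 2^{n-1} - 1 when they agree. */
int8_t saturating_mul(int8_t a, int8_t b) {
    uint16_t p = (uint16_t)(a * b);               /* 2n-bit product pattern */
    int sign = (p >> (N - 1)) & 1;                /* p_{n-1}                */
    int v = 0;
    for (int i = N; i < 2 * N; i++)
        v |= sign ^ ((p >> i) & 1);               /* p_{n-1} XOR p_i        */
    if (!v) {                                     /* keep the n low bits    */
        uint8_t low = p & 0xFF;
        return (low & 0x80) ? (int)low - 256 : (int)low;
    }
    int signs_differ = (((uint8_t)a ^ (uint8_t)b) >> (N - 1)) & 1;
    return signs_differ ? INT8_MIN : INT8_MAX;
}

int main(void) {
    printf("%d %d %d\n", saturating_mul(100, 100),   /* overflows to  127 */
           saturating_mul(-100, 100),                /* overflows to -128 */
           saturating_mul(10, -12));                 /* fits:        -120 */
    return 0;
}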
2.2.2 Two's Complement Tree Multipliers
Several techniques are available for implementing two's complement multiplier trees
[4], [8].
Figure 2.10 shows the dot diagram of an 8 by 8 two's complement multiplier
tree that uses the Complemented Partial Product Algorithm [8] and Dadda's reduction
method [1]. Similar to the two's complement array multiplier, (2n - 2) of the
partial product bits are inverted and ones are added to columns n and (2n - 1). Although
it seems that the heights of these two columns are increased by adding ones,
this does not adversely affect the design, because the most significant product bit is
simply inverted and an SHA adds the one in column n. In Figure 2.10, a dot with a
line above it indicates a complemented partial product bit and the circled half adder
in column 8 is a specialized half adder (SHA). An n by n two's complement Dadda
tree multiplier has n^2 partial product gates, HAs and FAs for the reduction stages,
and a (2n - 2)-bit CPA.
Conventional techniques for overflow detection and saturation for two's complement
tree multipliers are similar to the techniques used for two's complement array
multipliers. The only difference is that tree multipliers tend to use a tree of OR
gates, rather than a linear array of OR gates, when computing the overflow flag.
Figure
2.10: Two's Complement Dadda Tree Multiplier.
Chapter 3
Overflow Detection and
Saturation for Unsigned Integer
Multiplication
3.1 General Design Approach
Instead of computing all 2n bits of the product, the methods proposed in this thesis
only compute the n least significant product bits and have separate overflow detection
logic, as shown in Figure 3.1. Carries into column n are also used in the
overflow detection circuit.
The main idea behind the proposed unsigned overflow detection methods is that
overflow occurs if any of the partial product bits in columns n to (2n - 2) are '1' or any
Figure 3.1: Block Diagram for Unsigned Multiplication Overflow Detection.
of the carries into column n are '1'. Consequently, these ones can be detected without
adding the partial products. The logic equation for unsigned overflow detection is
V = Σ_{i+j ≥ n} a_i b_j + Σ_i c_i.    (3.1)
In this expression, V is the overflow flag, c_i is the i-th carry into column n, bit summations
correspond to logical ORs, and bit multiplications correspond to logical ANDs.
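Equation 3.1 can be validated exhaustively for small n. The following C sketch (an assumed verification harness for n = 8) ORs the high-column partial product bits, models the carries into column n collectively by summing the low columns, and compares the result against the true unsigned overflow condition.

#include <stdio.h>
#include <stdint.h>

#define N 8

/* Equation 3.1 for n = 8: V is '1' if any partial product bit in columns
   n..2n-2 is '1', or if the reduction of columns 0..n-1 carries into column n. */
int main(void) {
    for (uint32_t a = 0; a < (1u << N); a++) {
        for (uint32_t b = 0; b < (1u << N); b++) {
            int v = 0;
            uint32_t low_columns = 0;
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) {
                    uint32_t bit = ((a >> i) & 1) & ((b >> j) & 1);
                    if (i + j >= N) v |= bit;             /* high-column pp bit */
                    else            low_columns += bit << (i + j);
                }
            v |= (low_columns >> N) != 0;                 /* carries into column n */
            int ref = ((a * b) >> N) ? 1 : 0;             /* true overflow */
            if (v != ref) { printf("mismatch a=%u b=%u\n", a, b); return 1; }
        }
    }
    printf("Equation 3.1 matches true unsigned overflow for all 8-bit operands\n");
    return 0;
}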
3.2 Unsigned Array Multipliers with Overflow Detection
or Saturation
Figure 3.2 shows an 8 by 8 multiplication matrix to demonstrate how the partial
product bits are used to detect overflow with the proposed method.
Figure 3.2: 8 by 8 Unsigned Multiplication Matrix (the partial products used for overflow detection are those in columns n to 2n - 2).
Using Equation 3.1 with n = 8 gives
V = a_1 b_7 + a_2 (b_7 + b_6) + a_3 (b_7 + b_6 + b_5) + ... + a_7 (b_7 + b_6 + ... + b_1) + c_1 + c_2 + ...
Common terms in the logic equation for overflow detection are used to reduce the
hardware needed to detect overflow. An overflow detection circuit constructed using
AND and OR gates is shown in Figure 3.3 for an 8 by 8 unsigned multiplication.
For an n by n multiplier, the three gates in the dashed lines are replicated (n - 2)
times. These three gates are combined to form an overflow detection (OVD) cell.
Overflow is detected using the following iterative equations:
o_i = o_{i-1} + b_{n-i},
v_i = v_{i-1} + a_i o_i + c_i,
where o_i is a temporary OR bit with an initial value of o_0 = 0 and v_i is a temporary
overflow bit with an initial value of v_0 = 0. The o_i and v_i signals are shown in Figure 3.3.
An OVD cell takes a multiplicand bit, a multiplier bit, a carry bit, and the previous
o and v bits as inputs and generates the next o and v bits as outputs. Each OVD
cell contains one AND gate, one 2-input OR gate, and one 3-input OR gate. To form
an unsigned multiplier with the proposed overflow detection method, these cells are
combined with an unsigned array multiplier from which the cells used to compute
p_n to p_{2n-1} have been removed. This is shown in Figure 3.4
for an 8 by 8 unsigned array multiplier.
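The common-term sharing performed by the OVD chain can be checked in software. The following C sketch uses the iterative form reconstructed above (the exact indexing of the thesis's equations is an assumption); it omits the carry inputs of the cells and verifies only that the chain equals the OR of all partial product bits in columns n to 2n - 2.

#include <stdio.h>
#include <stdint.h>

#define N 8

/* Sketch of the OVD chain over the partial product bits only:
   o_i = o_{i-1} + b_{n-i}, v_i = v_{i-1} + a_i * o_i.  The loop checks that
   the chain equals the OR of all partial product bits with i + j >= n. */
int main(void) {
    for (uint32_t a = 0; a < (1u << N); a++)
        for (uint32_t b = 0; b < (1u << N); b++) {
            int o = 0, v = 0, ref = 0;
            for (int i = 1; i < N; i++) {
                o |= (b >> (N - i)) & 1;
                v |= ((a >> i) & 1) & o;
            }
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    if (i + j >= N) ref |= ((a >> i) & 1) & ((b >> j) & 1);
            if (v != ref) { printf("mismatch\n"); return 1; }
        }
    printf("OVD chain matches the high-column partial product OR\n");
    return 0;
}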
Figure 3.3: Proposed Overflow Detection Logic for n = 8.
An n-bit unsigned array multiplier that uses the proposed method for overflow
detection requires n^2 AND gates plus the HAs, FAs, and OVD cells needed for the
n least significant columns. This corresponds to substantially fewer gates than the
conventional method.
The worst case delay path is indicated by the dashed line in Figure 3.4. Since an
MFA has a longer delay than an OVD cell, the unsigned multiplier with the proposed
overflow detection logic has a delay approximately half as long as the unsigned
multiplier with conventional overflow detection, shown in Figure 2.2.
Unsigned saturating multiplication using the proposed method is performed by
ORing the overflow bit with the n least significant product bits, as shown in Figure 3.5.
If the overflow bit is '1', this produces a product with n ones, which corresponds to
the maximum representable unsigned number; otherwise the product is not changed.
This requires n more OR gates, and the worst case delay increases by one OR gate
delay.
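A minimal behavioral model of this saturation rule (function names are assumptions) is simply an OR of the overflow bit into every low product bit:

#include <stdio.h>
#include <stdint.h>

#define N 8

/* Proposed unsigned saturation: OR the overflow bit into each of the n
   least significant product bits (a behavioral sketch of Figure 3.5). */
uint8_t saturate(uint8_t low_bits, int v) {
    return low_bits | (v ? 0xFF : 0x00);           /* n ones when overflow occurs */
}

int main(void) {
    uint16_t p = 200 * 180;                        /* 36000, overflows 8 bits */
    int v = (p >> N) != 0;                         /* overflow flag           */
    printf("%u\n", saturate((uint8_t)p, v));       /* prints 255              */
    return 0;
}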
Figure 3.4: Unsigned Array Multipliers with Proposed Overflow Detection Logic.
Figure 3.5: Unsigned Array Multipliers with Proposed Saturation Logic.
3.3 Unsigned Tree Multipliers with Overflow Detection
or Saturation.
For unsigned tree multipliers, the technique of using an array of OVD cells with
linear delay does not work well, since the OVD cells would significantly increase the
multipliers' worst case delay path. Instead, all n^2 partial product bits are generated
and a tree of OR gates is used to determine if any of the partial product bits in
columns n to (2n - 2) or any carries into column n are one. This method is shown for an
8 by 8 multiplier in Figure 3.6, where the symbol 'o' denotes the output of a 2-input
OR gate. Although this method requires more hardware for overflow detection than
the unsigned array multiplier, the overflow detection logic has logarithmic delay and
no longer contributes significantly to the critical path. An n by n unsigned tree
multiplier that uses this method has n^2 AND gates and needs HAs and FAs only
for the n least significant columns. Since the delay of the OR gates for overflow detection
is less than the delay of the partial product reduction stages, the worst case delay
is equal to the delay of partial product generation, plus partial product reduction,
plus an (n - 1)-bit carry-propagate addition, plus one OR gate delay to include the
final carry out. Saturating multiplication is performed with the same method that
is used by the array multiplier: the overflow bit is ORed with the n least significant
bits of the product.
Figure 3.6: Unsigned Tree Multiplier with Proposed Overflow Detection Logic.
Chapter 4
Overflow and Saturation
Detection for Two's Complement
Integer Multiplication
4.1 General Approach
The proposed method for overflow detection in two's complement multiplication
detects the number of consecutive bits that are equal to the sign bit. Essentially,
this method counts the number of leading zeros if the operand is positive and the
number of leading ones if the operand is negative. For example, 11100101 has three
leading ones and 00001001 has four leading zeros. This method works because the
number of leading zeros or ones indicates the magnitude of the operand; operands
with more leading zeros or ones have smaller magnitudes and therefore are less likely
to cause overflow. The main issue is to determine how many leading zeros and
leading ones are needed to guarantee that overflow will occur or that overflow will
not occur. A block diagram that shows the proposed approach is shown in Figure 4.1.
The analysis for two's complement multiplication has three cases depending on
the operands' signs; both operands positive, both operands negative, or one positive
and the other negative. Overflow regions for these three cases are discussed in the
following sections.
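The basic operation used throughout this chapter is counting leading bits equal to the sign bit. The following C helper (an assumed illustration for n = 8) does exactly that, reproducing the examples in the text: 11100101 has three leading ones and 00001001 has four leading zeros.

#include <stdio.h>
#include <stdint.h>

#define N 8

/* Count the leading bits of an n-bit operand that are equal to its sign bit:
   leading zeros for non-negative values, leading ones for negative values. */
int leading_sign_bits(int8_t x) {
    uint8_t u = (uint8_t)x;
    int sign = (u >> (N - 1)) & 1;
    int count = 0;
    for (int i = N - 1; i >= 0 && ((u >> i) & 1) == sign; i--)
        count++;
    return count;
}

int main(void) {
    /* 11100101 (-27) has three leading ones; 00001001 (9) has four leading zeros */
    printf("%d %d\n", leading_sign_bits(-27), leading_sign_bits(9));
    return 0;
}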
4.1.1 Case 1: Both Operands are Positive
Let ZA denote the number of leading zeros of operand A and ZB denote the number
of leading zeros of operand B. Since both operands are positive n-bit integers, they
have at least one and at most n leading zeros, which can be expressed as
1 ≤ ZA ≤ n and 1 ≤ ZB ≤ n.    (4.1)
The ranges for operand A and operand B in terms of the number of leading zeros
are expressed as
2^{n-ZA-1} ≤ A ≤ 2^{n-ZA} - 1 and 2^{n-ZB-1} ≤ B ≤ 2^{n-ZB} - 1.    (4.2)
Figure 4.1: Block Diagram of The Proposed Method for Two's Complement Multiplication with Overflow Detection.
Overflow occurs for Case 1 when P > 2^{n-1} - 1 and overflow does not occur
when P ≤ 2^{n-1} - 1. Based on (4.2) the range of P is
2^{2n-2-ZA-ZB} ≤ P ≤ (2^{n-ZA} - 1)(2^{n-ZB} - 1).    (4.3)
Using (4.3), overflow is guaranteed to occur when
2^{2n-2-ZA-ZB} > 2^{n-1} - 1.    (4.4)
To determine the number of leading zeros in A and B that guarantee overflow, (4.4)
is rewritten as
2^{2n-2-ZA-ZB} ≥ 2^{n-1}.
Taking the base 2 logarithm of both sides gives
2n - 2 - ZA - ZB ≥ n - 1,
or equivalently,
ZA + ZB ≤ n - 1.
Thus, if A and B together have less than n leading zeros, overflow must occur.
Similarly, overflow is guaranteed not to occur when
(2^{n-ZA} - 1)(2^{n-ZB} - 1) ≤ 2^{n-1} - 1.    (4.8)
To determine the number of leading zeros in A and B that guarantee overflow does
not occur, Equation 4.8 is rewritten as
2^{2n-ZA-ZB} - 2^{n-ZA} - 2^{n-ZB} + 1 ≤ 2^{n-1} - 1.
Since -2^{n-ZA} ≤ -1 and -2^{n-ZB} ≤ -1, the left side is at most 2^{2n-ZA-ZB} - 1.
Consequently, overflow is guaranteed not to occur if
2^{2n-ZA-ZB} ≤ 2^{n-1}.
Taking the base 2 logarithm of both sides gives
2n - ZA - ZB ≤ n - 1,
or equivalently
ZA + ZB ≥ n + 1.    (4.13)
Since ZA and ZB are integers, 4.13 can be rewritten as
ZA + ZB > n.
Thus, overflow is guaranteed not to occur if A and B together have more than n
leading zeros.
When ZA + ZB = n, it cannot be directly determined whether or not overflow
has occurred only by examining the number of leading zeros, because rewriting (4.3) for
ZA + ZB = n gives a range of P that contains values both above and below 2^{n-1} - 1.
This problem, however, can be solved by further analyzing what happens when ZA +
ZB = n. The results obtained so far are summarized in Figure 4.2.
Figure 4.2: Overflow Regions for ZA + ZB (overflow / undetermined / no overflow).
To determine whether or not overflow occurs when ZA + ZB = n, it is necessary to
calculate how many product bits are needed to represent the result when ZA + ZB =
n. Then, by using the most significant product bits and the sign bits of the operands,
the overflow flag is set. If n is even, the maximum product is (2^{n/2} - 1)^2,
or equivalently 2^n - 2^{n/2+1} + 1.
If n is odd, the maximum product is (2^{(n+1)/2} - 1)(2^{(n-1)/2} - 1),
or equivalently 2^n - 2^{(n+1)/2} - 2^{(n-1)/2} + 1.
Thus, when ZA + ZB = n, the product can always be represented with n bits and overflow can
be determined simply by examining the sign bit p_{n-1}. If p_{n-1} = 1, overflow occurs.
Otherwise, overflow does not occur.
4.1.2 Case 2: Both Operands are Negative
Let IA denote the number of leading ones of operand A and IB denote the number
of leading ones of operand B. Since both operands are negative n-bit integers, they
have at least one and at most n leading ones. This can be expressed as
1 ≤ IA ≤ n and 1 ≤ IB ≤ n.
The ranges for A and B in terms of leading ones are expressed as
-2^{n-IA} ≤ A ≤ -(1 + 2^{n-IA-1}) and -2^{n-IB} ≤ B ≤ -(1 + 2^{n-IB-1}).    (4.22)
Overflow occurs for Case 2 when P > 2^{n-1} - 1 and overflow does not occur
when P ≤ 2^{n-1} - 1. Based on (4.22) the range of P is
(1 + 2^{n-IA-1})(1 + 2^{n-IB-1}) ≤ P ≤ 2^{2n-IA-IB}.    (4.23)
Using (4.23), overflow is guaranteed to occur when
(1 + 2^{n-IA-1})(1 + 2^{n-IB-1}) > 2^{n-1} - 1.    (4.24)
To determine the number of leading ones in A and B that guarantee overflow occurs,
(4.24) is rewritten as
2^{2n-2-IA-IB} + 2^{n-IA-1} + 2^{n-IB-1} + 1 > 2^{n-1} - 1.    (4.25)
Since the three low-order terms are positive, values that satisfy
2^{2n-2-IA-IB} ≥ 2^{n-1}    (4.27)
also satisfy (4.25). Taking the base 2 logarithm of both sides of (4.27) gives
IA + IB ≤ n - 1.
Thus, for negative integers overflow occurs when the operands together have less than n
leading ones.
Using (4.23), overflow is guaranteed not to occur when
2^{2n-IA-IB} ≤ 2^{n-1} - 1.    (4.29)
Taking the base 2 logarithm of both sides of Equation 4.29 gives
IA + IB > n + 1.
Thus, overflow is guaranteed not to occur when A and B together have more than
n + 1 leading ones.
When n ≤ IA + IB ≤ n + 1, it cannot be directly determined if overflow has
occurred. This is seen by using IA + IB = n and IA + IB = n + 1 in (4.23), which gives
a range of P that contains values both above and below 2^{n-1} - 1.
The results so far are shown in Figure 4.3.
Figure 4.3: Overflow Regions for IA + IB (overflow / undetermined / no overflow).
In the undetermined region the product can be represented using only n bits,
except when IA + IB = n with A = -2^{n-IA} and B = -2^{n-IB}; in that case the
product equals 2^n and its n least significant bits are all zero.
4.1.3 Case 3: When The Signs of The Operands Differ
Let IA denote the number of leading ones in operand A and ZB denote the number
of leading zeros in operand B. Negative operand A has at least one and at most n
leading ones and positive operand B has at least one and at most n leading zeros.
This can be expressed as
1 ≤ IA ≤ n and 1 ≤ ZB ≤ n.
The ranges for A and B in terms of leading ones and zeros are
-2^{n-IA} ≤ A ≤ -(1 + 2^{n-IA-1}) and 2^{n-ZB-1} ≤ B ≤ 2^{n-ZB} - 1.    (4.36)
Overflow occurs for Case 3 when P < -2^{n-1} and does not occur when P ≥ -2^{n-1}.
Based on (4.36) the range of P is
-2^{n-IA} (2^{n-ZB} - 1) ≤ P ≤ -(1 + 2^{n-IA-1}) 2^{n-ZB-1}.    (4.37)
Using Equation 4.37, overflow is guaranteed to occur when
-(1 + 2^{n-IA-1}) 2^{n-ZB-1} < -2^{n-1},
or equivalently
2^{2n-2-IA-ZB} + 2^{n-ZB-1} > 2^{n-1}.    (4.39)
For IA + ZB ≤ n - 1, (4.39) is always true, since 2^{2n-2-IA-ZB} ≥ 2^{n-1}.
Using 4.37, overflow is guaranteed not to occur when
-2^{n-IA} (2^{n-ZB} - 1) ≥ -2^{n-1},
or equivalently
2^{2n-IA-ZB} - 2^{n-IA} ≤ 2^{n-1}.
For IA + ZB ≥ n + 1 this always holds, since 2^{2n-IA-ZB} ≤ 2^{n-1}
in this range.
When IA + ZB = n, it cannot be determined whether or not overflow occurred,
since for IA + ZB = n the range in (4.37) contains values on both sides of -2^{n-1}.
Figure 4.4 shows graphically the results obtained so far for Case 3.
Figure 4.4: Overflow Regions for IA + ZB (overflow / undetermined / no overflow).
In the undetermined region the magnitude of the product is less than 2^n, so overflow
can be determined from the n least significant product bits: overflow occurs if
p_{n-1} = 0, and does not occur otherwise.
So far, the proposed method has been explained mathematically. In the next
section implementations of the overflow logic are presented.
4.1.4 Overflow Detection Logic
To allow positive and negative operands to use the same hardware for detecting
leading zeros or ones, the sign bits are XNOR'ed with the remaining bits.
This takes 2(n - 1) XNOR gates and is expressed logically as
x_i = a_i XNOR a_{n-1} and y_i = b_i XNOR b_{n-1}, for 0 ≤ i ≤ n - 2.
A logic design that detects (n - 1) or fewer leading zeros or leading ones is built
from chains of AND gates. These AND gates are used to compute
x(k) = x_{n-2} x_{n-3} ... x_{n-k-1},    (4.47)
y(k) = y_{n-2} y_{n-3} ... y_{n-k-1},    (4.48)
for 1 ≤ k ≤ n - 1, so that x(k) (respectively y(k)) is '1' exactly when A (respectively B)
has more than k leading bits equal to its sign bit. A preliminary overflow flag is
generated, using x(i) and y(i), as
V1 = Σ_{k=1}^{n-2} x(k)' y(n-1-k)',
where bit products correspond to logical ANDs, and bit summations correspond to
logical ORs. This equation is implemented by using (n - 2) 2-input NOR gates and
a tree of OR gates. V1 is one when the total number of leading zeros and
leading ones is less than n. In this case, overflow is guaranteed to occur. Additional
logic is used to detect overflow for the undetermined regions for Cases 1-3.
For Case 1 (when a_{n-1} = b_{n-1} = 0), overflow in the undetermined region is
detected as p_{n-1} = 1. For Case 2 (when a_{n-1} = b_{n-1} = 1), overflow is detected
when p_{n-1} = 1 and neither A nor B is zero, or the n least significant product bits are zero.
For Case 3 (when a_{n-1} ≠ b_{n-1}), overflow is detected when p_{n-1} = 0 and
neither A nor B is zero.
Logic Equations 4.50 through 4.54 can be realized by 9 2-input AND gates, 4
inverters, and a small number of OR gates. The final overflow flag V is generated by ORing all
of the previous flags. The complete overflow detection circuit is built from XNOR, AND,
NOR, and OR gates and four inverters. An overflow detection circuit for an 8-bit two's
complement multiplier is shown in Figures 4.5 and 4.6.
Figure 4.5: The Logic for V1 for 8-bit Multiplication.
Figure 4.6: Overflow Detection Logic for 8-bit Two's Complement Multiplication.
4.1.5 An Alternative Method
An alternative method for detecting overflow in the undetermined case is, instead
of generating n product bits, to have the multiplier generate n + 1
product bits and detect the undetermined cases by checking if p_n XOR p_{n-1} = 1. This
approach is shown in Figure 4.7. It works since for the undetermined
cases we have the following situations:
(1) Case 1: p_n XOR p_{n-1} = 1 only when overflow occurs.
(2) Case 2: p_n XOR p_{n-1} = 1 when overflow occurs; the one exception is the corner
case in which the upper product bits are lost to truncation, and that operand
combination is already flagged by V1.
(3) Case 3: p_n XOR p_{n-1} = 1 only when overflow occurs. For all three
cases, overflow can be detected as V = V1 + (p_n XOR p_{n-1}).
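The combined flag V = V1 + (p_n XOR p_{n-1}) can be checked exhaustively for n = 8. In the following C sketch, V1 is modeled behaviorally as "the two operands together have fewer than n leading sign bits" (this is an assumed stand-in for the gate-level x(k)/y(k) logic), and the result is compared against the true overflow condition.

#include <stdio.h>
#include <stdint.h>

#define N 8

static int leading_sign_bits(int8_t x) {
    uint8_t u = (uint8_t)x;
    int sign = (u >> (N - 1)) & 1, count = 0;
    for (int i = N - 1; i >= 0 && ((u >> i) & 1) == sign; i--)
        count++;
    return count;
}

/* Alternative method: V = V1 OR (p_n XOR p_{n-1}), where V1 flags fewer than
   n leading sign bits in total and p is the (n+1)-bit truncated product. */
int main(void) {
    for (int a = -128; a <= 127; a++)
        for (int b = -128; b <= 127; b++) {
            int v1 = (leading_sign_bits((int8_t)a) + leading_sign_bits((int8_t)b)) < N;
            uint32_t p = (uint32_t)(a * b) & ((1u << (N + 1)) - 1);   /* n+1 bits */
            int v = v1 | (((p >> N) ^ (p >> (N - 1))) & 1);
            int ref = (a * b) > 127 || (a * b) < -128;
            if (v != ref) { printf("mismatch a=%d b=%d\n", a, b); return 1; }
        }
    printf("alternative overflow detection matches for all 8-bit operands\n");
    return 0;
}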
4.2 Two's Complement Array Multipliers with Overflow
Detection or Saturation
The proposed method for overflow detection for array multipliers requires half as
much hardware as the conventional method. An 8-bit two's complement multiplier
Figure 4.7: Block Diagram of the Proposed Alternative Method for Two's Complement Multiplication with Overflow Detection.
with the proposed method for overflow detection is shown in Figure 4.8. The X2A
cell contains one 2-input XOR gate and one 2-input AND gate, each X3A cell
contains one 3-input XOR gate and one 2-input AND gate, and an X3NA cell contains
two 2-input XOR gates and one 2-input NAND gate.
A two's complement array multiplier with the proposed overflow detection logic
adds XNOR gates, AND gates, 3-input XOR gates, one 2-input XOR gate, and four inverters
to the basic array. The delay for this multiplier is approximately equal to the delay of the array rows,
plus four 2-input OR gates, plus three 3-input AND gates. The actual delay may
differ according to various design decisions and the technology used.
Figure 4.8: Overflow Detection Logic for 8-bit Two's Complement Multiplication.
Two's complement saturating multiplication is performed by using the V and
t flags. These flags are used as inputs to an n-bit 2-to-1 multiplexer, as shown
in Figure 4.9. When negative overflow occurs, the result is saturated to -2^{n-1};
when positive overflow occurs, the result is saturated to 2^{n-1} - 1.
Adding saturation logic to the array multiplier with overflow detection only requires
the addition of an inverter and an n-bit 2-to-1 multiplexer. Since V and t are
already generated by the overflow detection logic, they do not require any additional
hardware. The delay increases just by the delay of a 2-to-1 multiplexer plus the delay
of an inverter.
Figure 4.9: Saturating Two's Complement Array Multiplication.
4.3 Two's Complement Tree Multipliers with Overflow
Detection or Saturation
The alternative method is used for tree multipliers. For this method, (n + 1) bits of the
product are computed. V1 is computed using the same approach as in the first method, and
then the overflow flag is generated by using the logic equation V = V1 + (p_n XOR p_{n-1}),
as explained in Section 4.1.5.
Since the detection circuit is independent of the multiplication process, only the n + 1
least significant product bits need to be generated. Consequently, the AND gates and
counters that generate and reduce the partial product bits after column n of the
multiplication matrix are no longer needed. The size of the carry-propagate adder is
reduced to n bits, since only the least significant product bits are used. These
reductions are independent of the strategy used to design the tree multiplier. The
reduction scheme of a Dadda multiplier that uses the alternative method is shown
in Figure 4.10. A diagonal line with an x at the bottom is a 3-input XOR gate, and
a diagonal line with a tilde on it and an x at the bottom represents a 2-input XNOR
gate. The X's are used to denote that a carry output is not required.
If the worst case delay is the main constraint of the custom design, the alternative
design method should be considered to implement the overflow detection logic. A
two's complement Dadda multiplier with the proposed overflow detection has fewer
gates than the conventional design and one n-bit CPA. The worst case delay of the multiplier is also less than that of the
conventional technique. With the alternative method, the worst case delay equals
the delay of the reduction stages, plus the delay of the n-bit carry-propagate
adder, plus one 2-input OR gate, plus one 2-input XOR gate, plus one 2-input AND
gate.
Two's complement saturating multiplication logic for the Dadda tree multiplier
is similar to the logic for the array multiplier. An n-bit 2-to-1 multiplexer and an
inverter are added as shown in Figure 4.9, except that the product bits p_{n-2} to p_0
are not connected to the overflow detection logic. The control signals t and V for
the 2-to-1 multiplexer are generated by the detection logic. The delay increases by
just the delay of the inverter plus the delay of the 2-to-1 multiplexer.
Figure
4.10: Dadda Dot Product Scheme after Proposed Overflow Detection.
Chapter 5
Results
5.1 Area and Delay Estimates
Theoretical component counts and worst case delays are given for various multipliers
in Tables 5.1, 5.2, and 5.3. In these tables, U and S denote unsigned and
signed, A and T denote array and tree multipliers, and P and C denote proposed
and conventional. Table 5.1 and Table 5.2 give the number of each component and
the size of the CPA based on operand length n. Table 5.3 gives the number of each
type of component on the worst case delay path. The proposed methods reduce the
number of AND gates and FAs for array multipliers, and reduce the number of AND
gates and FAs and the size of the CPAs for tree multipliers. The proposed methods
also reduce the delays of the array and tree multipliers, since the most significant
product bits are no longer calculated.
Table 5.1: Component Counts for n-bit Multipliers with Overflow Detection I.
(columns: Multiplier Type, INV, AND, NAND, OR2, NOR, OR3)
Table 5.2: Component Counts for n-bit Multipliers with Overflow Detection II.
(columns: Multiplier Type, XOR, XNOR, HA, FA, CPA)
Table 5.3: Worst Case Delay for n-bit Multipliers with Overflow Detection.
(columns: Multiplier Type, INV, AND, OR2, XOR, HA, FA, CPA)
Table
5.4: Unsigned Array Multipliers with Overflow Detection.
It is possible to reduce the amount of logic required to implement the detection
circuit even further. The proposed method uses a straightforward implementation
of the logic equations and structures presented in the previous chapters. Synthesis
tools are used to further optimize the design. Consequently, the values shown in
Tables 5.1, 5.2, and 5.3 should be considered to be worst case values, before
further optimization is performed.
Gate level VHDL code for various sizes of array and Dadda tree multipliers was
generated for the conventional and proposed methods for overflow detection. The
VHDL code was synthesized and optimized for area using LSI Logic's 0.6 micron
gate array library and the Leonardo synthesis tool from Exemplar Logic.
The synthesis tool was set to a nominal operating voltage of 5.0 volts and a temperature
of 25 °C. Area estimates are reported in equivalent gates and delay estimates
are reported in nanoseconds.
Table 5.4 gives area and delay estimates for unsigned array multipliers. Compared
to the multipliers that use conventional overflow detection, the proposed multipliers
have between 50% and 53% less area and between 41% and 42% less delay. These
Table 5.5: Unsigned Dadda Tree Multipliers with Overflow Detection.
(columns: n; Conventional Area, Delay; Proposed Area, Delay; Reduction Area, Delay)
Table 5.6: Signed Array Multipliers with Overflow Detection.
gains are mainly due to the reductions in the area and the delay of FAs used to
generate the n most significant product bits.
Table 5.5 gives area and delay estimates for unsigned Dadda tree multipliers.
Compared to multipliers that use the conventional overflow detection method, multipliers
that use the proposed method have approximately 47% less area and between
23% and 28% less delay. These improvements are due to reducing the number
of FAs and reducing the size of the final carry-propagate adder from (2n - 2) bits to
(n - 1) bits.
Table 5.6 gives area and delay estimates for two's complement array multipliers.
Compared to the multipliers that use the conventional method, multipliers that use
the proposed method require between 38% and 47% less area and at least 41% less
delay.
Table
5.7: Signed Dadda Tree Multipliers with Overflow Detection.
Table 5.7 gives area and delay estimates for two's complement Dadda tree multipliers.
Compared to the multipliers that use the conventional method, multipliers
that use the proposed method have between 35% and 44% less area and between
24% and 32% less delay.
Chapter 6
Conclusions and Future Research
6.1 Conclusions
The overflow detection and saturation methods presented in this thesis significantly
reduce the area and delay of array and tree multipliers. For the multiplier sizes
examined, the area is reduced by about 50% for unsigned multipliers when compared
with conventional methods. The proposed methods also do not change the regularity
of the multiplier structure. For the two's complement multipliers, the proposed
methods are completely independent of the multipliers' internal structure. This
feature provides designers increased flexibility, since they can add overflow detection
logic without effecting their original design. Reduction in the multiplier hardware
will also lead to reduced power dissipation. The proposed methods reduce the delay
of array multipliers by about 40% to 50%.
6.2 Future Research
This thesis separately presented overflow detection and saturation methods for unsigned
and two's complement parallel multipliers. An important next step is to
develop a single multiplier structure that can perform both unsigned and two's
complement integer multiplication with overflow detection or saturation based on
an input control signal. Another area for future research is to investigate techniques
for further reducing the area for overflow detection in multiplier trees, without significantly
impacting the delay. This research may be able to take advantage of a hybrid
structure that has less delay than linear overflow detection structures and less area
than overflow detection trees. Another research area is to investigate reductions in
power dissipation due to the proposed techniques. It is anticipated that a significant
reduction in power dissipation can be achieved due to the reduction in multiplier
hardware. Methods similar to the proposed methods can be also used for other
arithmetic operations that needs overflow detection, such as multiply-accumulate
and squaring.
--R
" Some Schemes for Parallel Multipliers,"
"Suggestion for a Fast Multiplier,"
"A 40 ns 17-bit Array Multiplier,"
"Parallel Reduced Area Multipliers,"
"A reduction Scheme to Optimize The Wallace Multiplier,"
"A Two's Complement Parallel Array Multiplication Algorithm,"
"Comments on A Two's Complement Parallel Array Multiplication Algorithm,"
"Synthesis and Comparision of Two's Complement Parallel Multipliers,"
"Computer Architecture a Quantative Approach, Second Edi- tion,"
"Parallel Saturating Fractional Arithmetic Units,"
"Fixed-point Overflow Exception Detection,"
"Programmable High-performance IIR Filter Chip,"
"Overflow Indication In Two's Complement Arith- metic,"
"Overflow Detection in Multioperand Addition,"
"Zero, Sign, and Overflow Detection Schemes For Generalized Signed Arithmetic"
"A Signed Binary Multiplication Technique,"
--TR
--CTR
Eyas El-Qawasmeh , Ahmed Dalalah, Revisiting integer multiplication overflow, Proceedings of the 4th WSEAS International Conference on Software Engineering, Parallel & Distributed Systems, p.1-14, February 13-15, 2005, Salzburg, Austria | integer;computer arithmetic;saturation;two's complement;tree multipliers;array multipliers;unsigned;overflow |
350566 | Computing Functions of a Shared Secret. | In this work we introduce and study threshold (t-out-of-n) secret sharing schemes for families of functions ${\cal F}$. Such schemes allow any set of at least t parties to compute privately the value f(s) of a (previously distributed) secret s, for any $f\in {\cal F}$. Smaller sets of players get no more information about the secret than what follows from the value f(s). The goal is to make the shares as short as possible. Results are obtained for two different settings: we study the case when the evaluation is done on a broadcast channel without interaction, and we examine what can be gained by allowing evaluations to be done interactively via private channels. | Introduction
. Suppose, for example, that we are interested in sharing a
secret file among n parties in a way that will later allow any t of the parties to test
whether a particular string (not known in the sharing stage) appears in this file.
This test should be done without revealing the content of the whole file, or giving
any other information about the file. This problem can be viewed as an extension
of the traditional definition of threshold (t-out-of-n) secret sharing schemes. In the
traditional definition, sets of at least t parties are allowed to reconstruct a (previously
distributed) secret s, while any smaller set gets no information about the secret. We
introduce a more general definition of t-out-of-n secret sharing schemes for a family
of functions F . These schemes allow authorized sets of parties to compute some
information about the secret and not necessarily the secret itself. More precisely, sets
B of size at least t can evaluate f(s) for any function f 2 F ; any set C of size less
than t knows nothing (in the information theoretic sense) about the secret prior to
the evaluation of the function; and, in addition, after f(s) is computed by a set B no
set C of size less than t knows anything more about the secret s than what follows
from f(s). In other words, the parties in C might know the value f(s), but they know
nothing more than that, although they might have heard part of the communication
during the evaluation of f(s).
Clearly, if we consider a family F that includes only the identity function
then we get the traditional notion of secret sharing schemes. These schemes, which
were introduced by Blakley [8] and Shamir [34], were the subject of a considerable
amount of work (e.g., [30, 26, 28, 6, 35, 20]). They were used in many applications
(e.g., [31, 5, 15, 19]) and were generalized in various ways [22, 7, 36]. Surveys are given
in [35, 37]. The question of sharing many secrets simultaneously was considered (with
some dierences in the denitions) by several researchers [30, 26, 21, 11, 23, 10, 24].
Simultaneous sharing of many secrets is also a special case of our setting. 1 Other
Dept. of Mathematics and Computer Science, Ben-Gurion University, Beer Sheva 84105, Israel.
E-mail: beimel@cs.bgu.ac.il. http://www.cs.bgu.ac.il/beimel . This work was done when the
author was a Ph.D. student in the Dept. of Computer Science, Technion.
y Dept. of Mathematics, Royal Holloway { University of London, Egham, Surrey TW20 OEX,
U.K. E-mail: m.burmester@rhbnc.ac.uk.
z Dept. of Computer Science, PO Box 4530, 206 Love Building, Florida State University, Talla-
hassee, FL 32306-4530, USA. E-mail: desmedt@cs.fsu.edu.
x Dept. of Computer Science, Technion, Haifa 32000, Israel. E-mail: eyalk@cs.technion.ac.il.
http://www.cs.technion.ac.il/eyalk .
be the secrets we want to share simultaneously. Construct the concatenated
and the functions which can be evaluated are the functions f
similar scenarios in which sharing is viewed as a form of encryption and the security
is computationally bounded have been considered in [32, 3, 1].
Threshold cryptography [19, 18, 17] is also a special case of secret sharing for a
family of functions. 2 A typical scenario of threshold cryptography is the following:
Every set B of t parties should be able to sign any document such that any coalition
C of less than t parties cannot sign any other document (even if the coalition C knows
signatures of some documents). To achieve this goal the key is shared in such a way
that every t parties can generate a signature from their shares without revealing any
information on the key except the signature. Specically, assume we have a signature
is the domain of messages, K is the domain
of keys, and O is the domain of signatures. For every
k). The previous scenario is simply sharing the key for the family
Mg. These examples show that secret sharing for a family of functions is
a natural primitive.
Obviously one possible solution to the problem of sharing a secret for a family F
is by sharing separately each of the values f(s) (for any f 2 F) using known threshold
schemes. While this solution is valid, it is very inefficient, in particular when the size
of F is large. Therefore, an important goal is to realize such schemes while using
"small" shares. For example, to share a single bit among n parties, the average (over
the parties) length of shares is at least log(n - t + 2), while
log n bits are sufficient [34]. The obvious solution for sharing ' bits simultaneously
will require ' log n bits. By [26], it can be shown that shares of at least ' bits are
necessary. We shall show that '-bit shares are also sufficient if interactive evaluation
on private channels is allowed, and O(')-bit shares are sufficient if non-interactive
evaluation (on a broadcast channel) is used. 3
We present an interactive scheme in which '-bit secrets are distributed using '-bit
shares; this scheme allows the computation of every linear combination of the bits
(and not only computation of the bits themselves). We use this scheme to construct
schemes for other families of functions. The length of the shares in these schemes can
be much longer than the length of the secret. An interesting family of functions that
we shall consider is the family ALL of all functions of the secret. For this family, we
construct a scheme in which the length of the shares is 2 ' log n (where ' is the length
of the secret and 2 ' is the length of the description of a function which is evaluated).
In this scheme the computation requires no interaction and can be held on a broadcast
channel. If we allow interaction on private channels during the computation, then the
length of the shares can be reduced to 2 ' . Note that the obvious solution of sharing
each bit separately requires in this case 2 2 '
bits.
Our work deals mainly with two models: the private channels model in which the
computation might require a few rounds of communication; and the broadcast channel
model in which the computation is non-interactive. On one hand, the broadcast model
does not require secure private channels and synchronization; hence, the computation
is more e-cient. On the other hand, in the private channels model, a coalition C that
does not intersect the evaluating set B will know nothing about s; not even the value
f(s). Interaction seems to be useful in the computation. It enables us to reduce the
The secrets s i may be dependent: our model allows some information to leak provided it is no more
than what follows from the evaluations of f(s).
2 The functions considered in [19, 18, 17] are, however, very limited and the scenario in [17] is
restricted to computational security.
3 In fact, both results require ' to be \su-ciently large": ' log n in the interactive case and
' log n log log n in the broadcast model.
length of the shares by a factor of log n in our basic scheme for linear functions.
We also study ideal threshold schemes. These are schemes in which the size of
the shares equals the size of the secrets 4 (see, e.g., [12, 13, 4, 33, 25]). We deal with
the characterization of the families of functions F which can be evaluated by an ideal
threshold scheme. For the interactive private channel model, we prove that every
boolean function that can be evaluated is a linear function. For the broadcast model,
we prove that F cannot contain any boolean function (for every family that contains
the identity function).
An example. To motivate our approach and to clarify the notion of secret sharing
for a family of functions we consider a simple example, which we discuss informally.
Suppose that a dealer shares a secret s among n parties and that at some time later
a set of at least t parties would like to answer the question: \Is the shared secret
equal to the string a?" That is, they want to evaluate the boolean function f a
which is 1 if and only if a. The string a is not known when the dealer shares
the secret. We describe a simple scheme which makes it possible for the parties to
answer such questions. Let be the secret bit-string. The dealer chooses
random strings from f0; 1g 2' , denoted by b shares each of these
strings using Shamir's t-out-of-n threshold scheme [34]. Let b i;j be the share of the
secret b i given to party P j . The dealer computes the sum (i.e., bitwise exclusive-or)
(that is s i , the value of the i-th bit of the secret, selects whether we
take b 2i or b 2i+1 ) and gives to each party. When a subset B of (at least) t parties
wants to check if s = a for some a = a 1 a computes a \new"
share
Then, the parties in B apply Shamir's secret reconstruction
procedure to compute a \new" secret from these \new" shares. Since this procedure
involves only computing a linear combination of the shares, the \new" secret is simply
. If the \new" secret diers from
then the parties learn that
s 6= a. However, they get no more information on the secret s. If the \new" secret is
equal to
then it is easy to see that with high probability (over the choice
of must have a. In this case, the parties learn the secret and there
is no additional information which should be hidden from them. Observe that in both
cases the parties learn no more than what is revealed by answering the question.
Connection to private computations. In our schemes we require that authorized
sets of parties can compute a function of the secret without leaking any other information
on the secret. This resembles the requirement of (n; k)-private protocols [5, 15]
in which the set of all n parties can evaluate a function of their inputs in a way that
no set of less than k parties will gain any additional information about the inputs of
the other parties. Indeed, in some of our schemes a set B of size t uses a (t; t)-private
protocol. However, it is not necessary to use private protocols in the computation,
since the parties are allowed to leak information about their inputs (the shares) as
long as they do not leak any additional information on the secret. Moreover, using
private protocols does not solve the problem of e-ciently sharing a secret for all func-
tions, since not all functions can be computed (t; t) privately [5, 16]. Furthermore, the
parties cannot use the (t; b(t 1)=2c)-private protocols of [5] or [15] since this means
that coalitions of size greater than t=2 (but still smaller than t) gain information.
Organization. The rest of this paper is organized as follows. In Section 2 we discuss
our model for secret sharing for families of functions and provide the denitions;
denitions which are more delicate than in the case of traditional threshold schemes.
4 The size of each share is always at least the size of the secret [26].
In Section 3 we present interactive and broadcast secret sharing schemes for the family
LIN of linear functions. In Section 4 we use these schemes to construct schemes for
other families. In Section 5 we describe the broadcast scheme for the bit functions
BIT . In Section 6 we characterize ideal schemes in the various models. Finally, in
Section 7 we discuss possible extensions of our work. For completeness we give some
background results in Appendices.
2. Denitions. This section contains a formal denition of secret sharing for a
family of functions. We start by dening the model. We consider a system with n
parties g. In addition to the parties, there is a dealer who has a
secret input s. A distribution scheme is a probabilistic mapping, which the dealer
applies to the secret to generate n pieces of information s are referred
to as the shares . The dealer gives the share s i to party P i . Formally,
Definition 2.1 (Distribution Schemes). Let S be a set of secrets,
sets of shares, and R a set of random inputs. Let be a probability
distribution on R. A distribution scheme over S is a mapping
. The i-th coordinate s i of (s; r) is the share of P i . The pair
(; ) can be thought of as a dealer who shares a secret s 2 S by choosing at random
r 2 R (according to the distribution ) and then gives each party P i as a share the
corresponding component of (s; r).
In the scenario we consider, the dealer is only active during the initialization
of the system. After this stage the parties can communicate. We next dene our
communication models.
Definition 2.2 (Communication Models). We consider two models of communication
1. The private channels model in which the parties communicate via a complete
synchronous network of secure and reliable point-to-point communication
channels. In this model a set of parties has access only to the messages
sent to the parties in the set.
2. The broadcast channel model in which the parties communicate via a (public)
broadcast channel. In this model a set of parties can obtain all the messages
exchanged by the communicating parties.
A subset of the parties may communicate in order to compute a function of their
shares. We now dene how they evaluate this function.
Definition 2.3 (Function Evaluation). A set of parties B U executes a
protocol FB;f to evaluate a function f .
At the beginning of the execution, each party P i in B has an input s i and chooses
a random input r i . To evaluate f , the parties in B exchange messages as prescribed
by the protocol FB;f . In each round every party in B sends messages to every other
party in B. A message sent by P i is determined by s i , r i , the messages received by
and the identity of the receiver. We say that a protocol FB;f evaluates f
(or computes f(s)) if each party in B can always evaluate f(s) from its input and the
communication it obtained.
We denote by MC (hs the messages that an eavesdropping coalition C
can obtain during the communication between the parties in B (this depends on the
communication model). Here hs is the vector of shares of B and hr i i B the vector
of random inputs of B.
A protocol is non-interactive if the messages sent by each party P i depend only on
its input (and not on the messages received during the execution of the protocol). Non-interactive
protocols have only one-round of communication. An interactive protocol
might have more than one round. In this case we require that the protocol terminates
after a nite number of rounds (that is, we do not allow innite runs).
The parties in the system are honest, that is, they send messages according to the
protocol.
A coalition C which eavesdrops on the execution of a protocol FB;f by the parties
in B gains no additional information on the secret s if any information that C may
get is independent of the old information. In our model we shall allow C to gain some
information, but no more than what follows from the evaluation of f . The information
that C gains is determined by the view of C.
Definition 2.4 (The View of a Coalition). Let C U be a coalition. The view
of C, denoted VIEWC , after the execution of a protocol consists of the information
that C gains. In a distribution scheme the shares of C. In the
evaluation of f(s) by a set of parties B U the view of C consists of: the inputs
the local random inputs hr i i C\B (only the parties in C \ B are involved in
the computation), and the messages that C obtains from the communication channel
during the evaluation by B, i.e., MB\C (hs That is,
We now dene t-out-of-n secret sharing schemes for a family of functions F .
Such schemes allow any set B of at least t parties to evaluate f(s), for any f 2 F ,
where s is a previously distributed secret, while any set C of less than t parties gains
no more information about the secret than the information inferred from f(s). We
distinguish between three types of schemes depending on the way that the value f(s)
is computed by B and the channels used: (1) interactive private channels schemes
{ where the parties in B engage in a protocol via private channels which computes
non-interactive private channels schemes { where the parties in B engage
in a non-interactive protocol via private channels to compute f(s); and (3) broadcast
non-interactive schemes { where each party in B broadcasts a single message (which
depends only on its share) on a broadcast channel (we do not consider interactive
broadcast schemes in this paper).
The computation of f(s) must be secure. There are two requirements to consider:
(1) the usual requirement for threshold schemes, that is, before the evaluation any set
C of size less than t knows nothing about the secret by viewing their shares; (2) after
the evaluation of f(s) by a set B, any set C of size less than t gains no information
about s that is not implied by f(s). Formally,
Definition 2.5 (Secret Sharing Schemes for a Family). A t-out-of-n secret
sharing scheme for a family of functions F is a distribution scheme (; ) which
satises the following two conditions:
Evaluation. For any set B U of size at least t and any function f 2 F the parties
in B can evaluate f(s). That is, there is a protocol FB;f which given
the shares of B as inputs will always output the correct value of f(s). The
scheme is called non-interactive if FB;f is non-interactive, and interactive
otherwise. Depending on the communication model, a broadcast channel or
private channels are used.
Security. Let X be a random variable on the set of secrets S and C U be any
coalition of size less than t.
Prior to evaluation (after the distribution of shares).
For any two secrets s; s 0 2 S and any shares hs i i C (from (s; r)):
After evaluation. For any f 2 F , any B U of size at least t, any secrets
s; shares hs i i C , any inputs hr i i B\C , and
any messages MB\C (hs from the computation of f(s) by B:
We say that C gains no information that is not implied by the function f
(or simply: gains no additional information) if in the computation of f(s) we
have security after evaluation.
A (traditional) t-out-of-n secret sharing scheme is a secret sharing scheme for the
family that includes only the identity function explained in Observation
2.1 below, the family also contains all renamings of the identity function.)
Remark 2.1. We only require that one function f 2 F can be evaluated securely.
A more desirable property is that the scheme is reusable: any number of functions
can be evaluated securely (possibly by dierent sets). All our schemes, except for the
scheme in Section 5, are reusable.
Remark 2.2. In Denition 2.5 we required that all coalitions C of size less than t
should gain no additional information on the secret. Clearly, it is su-cient to require
this only for coalitions C of size t 1, since any smaller coalition has less information
on the secret, and, hence, will gain no additional information. 5
Remark 2.3. The security in Denition 2.5 is based on the requirement that
the view of a coalition is the same for secrets which have the same evaluation. Alternatively
we could require:
That is, the probability that the secret is s, given the view of the coalition C, equals
the probability of s, given the value In Appendix A we show that the two
denitions are equivalent (see [14] for the analogue claim with respect to traditional
secret sharing schemes).
Obviously any broadcast scheme can be transformed into a scheme in which each
message is sent on private channels. On the other hand, if we take a non-interactive
private channels scheme and broadcast the messages then the security of the computation
may be violated.
A function f 0 is a renaming of f if
Suppose that f 0 is a renaming of f and that f can be evaluated securely, then f 0 can
also be evaluated securely. Indeed, to evaluate f 0 securely we rst evaluate f(s)
securely; after this evaluation s might not be known. Nevertheless, we can nd an
input y such that
Observation 2.1. A secret sharing scheme enables the secure evaluation of a
renaming of f if and only if it enables the secure evaluation of f .
Therefore, we ignore renamings of functions.
We next dene certain classes of functions over which are of particular
interest. Recall the structure of GF(2 ' ). Each element a 2 GF(2 ' ) is an '-bit string
a which may be represented by the polynomial a ' 1 x
Addition and multiplication are as for polynomials, using the structure of GF(2) for
5 This is dierent than what happens in private multi-party computation; there a smaller set has
less inputs and hence expected to learn even less.
the coe-cients and reducing products modulo a polynomial p(x) of degree ', which
is irreducible over GF(2). A function
for every linear function f . Thus,
Observation 2.2. For every linear function f , for every x 2 GF(2 ' ), and for
every a 2 GF(2) it holds that
Notice that we do not require that the
family LIN ' to be the family of all linear functions of
is the family of additive functions). Let e be the function that
returns the i-th bit of x. Then, e i linear. We call
the family fe the bit functions and denote it by BIT ' . We also consider the
family ALL ' of all possible functions of the secret (by Observation 2.1, without loss
of generality, the range of the functions in ALL ' is in GF(2 ' )). We stress that BIT '
contains only boolean functions while both ALL ' and LIN ' contain non-boolean
functions as well.
We will need the following claim concerning linear functions.
2.1. For every linear functions, there are constants c ' such that
string
Proof. Notice that b 1
. Thus, since f is linear, and by
Observation 2.2,
and the claim follows.
3. Schemes for the Linear Functions.
3.1. An Interactive Scheme. In this section we show that Shamir's scheme
over (described in Appendix B) is also a secret sharing scheme for the
family LIN ' { the family of linear functions.
Theorem 3.1 (Basic Interactive Scheme). For every ' 1 there exists an
interactive private channels t-out-of-n secret sharing scheme for the family LIN ' in
which the secrets have length ' and the shares have length ' 0 1)eg.
Proof. The dealer uses Shamir's t-out-of-n scheme over GF(2 ' 0
to distribute the
shares. We show how every function f 2 LIN ' can be evaluated securely. Consider a
set B of t 0 t parties with shares hs
that wishes to compute
f(s). By the reconstruction procedure of Shamir's scheme, for each P
exist coe-cients - j such that
simply the
sum of the x j 's. Computing this sum is done using the following interactive protocol
of Benaloh [6] (for a more detailed description of this protocol see Appendix C).
For convenience of notation, we assume that g. In the rst step
of the protocol each party P
sends r i;j to P j 2 B. Then, each party
sends y j to P 1 . Party P 1 now computes
and sends it to all the other parties. By the choice of r i;t 0 , the sum
thus, each party in B computes f(s) correctly.
Let us now prove the security of this scheme. The security prior to evaluation
follows from the security of Shamir's threshold scheme. For the security after the
evaluation it is su-cient to consider a coalition C of size t 1 (by Remark 2.2). Since
each x j is computed locally, and since f(s) is computed using a private protocol [6] (for
a formal denition of privacy see Appendix C), the parties in C gain no information
about the shares of the other parties that is not implied by f(s). That is, for any shares
we have:
(1)
Now for a secret s and shares hs i i C with there are unique shares hs i i BnC
of Shamir's scheme that are consistent with s and hs i i C . Therefore,
(2)
The last equation holds because the shares hs determine the secret s. By Equation
(1), the probability Pr[ hr is independent of
therefore of s, and by the properties of Shamir's scheme the probability
is independent of hs Therefore, by (2), for every s 0 2 S such
that
Pr[view
and the coalition C gains no information about s that is not implied by f(s).
Remark 3.1. In the special cases when
interaction by simply sending the messages x j . For the messages
must be sent via private channels. In this case, only coalitions C B can get
messages of the evaluating set B. Let g. Then, party P i gains
no additional information from the message x j of P j , because P i knows
and its input x i , and therefore can compute x j . For the messages may be sent
via a public broadcast channel. In this case, any coalition C is a subset of B, and
if C has size n 1 then gains no additional
information from x j , since C can compute x j from f(s) and
Remark 3.2. When 3 t n 1, the parties evaluate f(s) using an interactive
private channels scheme. It can be shown that for this particular scheme interactive
evaluation is essential. Otherwise, some coalitions C that intersect the evaluating set
B, but are not contained in B, will gain additional information on s when they obtain
the values of x j .
This scheme has the advantage that it is reusable: sets B_1, B_2, ... can evaluate
functions f_1, f_2, ..., and a coalition C does not gain any information on the
secret beside the values f_i(s) for every i with C ∩ B_i ≠ ∅. This is because the
evaluation is done using a private protocol. The private computation assures that C
cannot distinguish between two vectors of shares with the same value f i (s). Hence,
even if C knows some evaluations f i (s), an evaluation of another function will not
reveal extra information.
3.2. A Non-interactive Broadcast Scheme. Next, we present a broadcast
scheme for the family LIN ' of linear functions, in which the length of the shares is
' ⌈log(n + 1)⌉.
Theorem 3.2 (Basic Broadcast Scheme). For every ' ≥ 1 there exists a non-interactive
broadcast t-out-of-n secret sharing scheme for the family LIN ' in which
the secrets have length ' and the shares have length ' ⌈log(n + 1)⌉.
Proof. Let q = ⌈log(n + 1)⌉. The dealer shares each bit of the secret independently
using Shamir's scheme over GF(2^q) (this is the smallest possible field of characteristic
two for Shamir's scheme). Let s = b_1 b_2 · · · b_' be the secret bit-string and let s_{i,j} ∈ GF(2^q)
be the share of b_j given to party P_i. Then, the total length of the share of
each party is ' q = ' ⌈log(n + 1)⌉.
Clearly, every bit of the secret can be securely reconstructed. To evaluate other
linear functions of the secret, we use the homomorphic property of Shamir's scheme,
as observed by Benaloh [6], which enables the evaluation of linear combinations of
a shared secret without revealing other information on the secret. To explain how
this property is used we first observe that it is sufficient to show the result for a
boolean linear function, since each coordinate of f(s), where f ∈ LIN ' , is also a
linear function (we use the property that this scheme is reusable).
Let f be a boolean linear function and let B be a set of (at least) t parties
with shares ⟨s_{i,j}⟩_{P_i ∈ B, 1 ≤ j ≤ '} that wishes to compute the bit f(s). By Claim 2.1,
there are constants c_1, ..., c_' such that f(s) = Σ_{j=1}^{'} c_j b_j. Recall that for
Shamir's reconstruction procedure the secret bit b_j is computed as b_j = Σ_{P_i ∈ B} λ_i s_{i,j},
where the λ_i's are appropriate coefficients. Let x_i = Σ_{j=1}^{'} c_j s_{i,j} be a "new" share
of P_i. Then,
f(s) = Σ_j c_j b_j = Σ_j c_j Σ_{P_i ∈ B} λ_i s_{i,j} = Σ_{P_i ∈ B} λ_i x_i.
To evaluate f(s) each party P_i ∈ B computes locally the "new" share x_i and then
broadcasts it. The "new" secret f(s) is just the linear combination of these "new"
shares. We claim that a coalition C of size t − 1 gains no information that is not
implied by f(s) during this process. Indeed, the probability of the view of C is
Pr[⟨s_{i,j}⟩_{P_i ∈ C}, ⟨x_i⟩_{P_i ∈ B} | s] = Pr[⟨s_{i,j}⟩_{P_i ∈ C}, f(s) | s] = Pr[⟨s_{i,j}⟩_{P_i ∈ C} | s].
The rst equality holds because the shares of C and f(s) determine the \new" shares
of B and vice-versa. The second equality holds because s determines f(s). Now
Pr[⟨s_{i,j}⟩_{P_i ∈ C} | s] is independent of s by the security requirement prior to eval-
uation. So, we get the security requirement after evaluation. Furthermore, if this
process is repeated more than once and different linear combinations are evaluated
then C will gain no information that is not implied by these linear combinations, and
the parties in B can evaluate each bit of a non-boolean linear function independently.
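As an illustration of the broadcast scheme, the following sketch shares each bit of the secret with Shamir's scheme over a field of characteristic two and evaluates a boolean linear function without interaction, by having each party broadcast a local linear combination of its shares. It is a simplified toy: the field is fixed to GF(2^8) rather than the smallest field GF(2^q) with q = ⌈log(n + 1)⌉, the irreducible polynomial and all identifiers are choices made for the example, and no attempt is made at constant-time arithmetic.

```python
import secrets

Q, IRRED = 8, 0x11B          # GF(2^8) with x^8+x^4+x^3+x+1 (example field)

def gf_mul(a, b):
    """Multiplication in GF(2^Q): shift-and-add with modular reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << Q):
            a ^= IRRED
        b >>= 1
    return r

def gf_inv(a):
    """Inverse via a^(2^Q - 2)."""
    r, e = 1, (1 << Q) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def share_bit(bit, t, n):
    """Shamir shares of one secret bit: random degree-(t-1) polynomial with
    constant term `bit`, evaluated at the party ids 1..n."""
    coeffs = [bit] + [secrets.randbelow(1 << Q) for _ in range(t - 1)]
    shares = {}
    for i in range(1, n + 1):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation at x = i
            acc = gf_mul(acc, i) ^ c
        shares[i] = acc
    return shares

def lagrange_at_zero(ids):
    """Coefficients lambda_i with sum_i lambda_i * p(i) = p(0)."""
    lam = {}
    for i in ids:
        num, den = 1, 1
        for j in ids:
            if j != i:
                num = gf_mul(num, j)
                den = gf_mul(den, i ^ j)  # subtraction is XOR in char. two
        lam[i] = gf_mul(num, gf_inv(den))
    return lam

# Toy parameters: 3-out-of-5 sharing of a 4-bit secret.
t, n = 3, 5
secret_bits = [1, 0, 1, 1]
shares = [share_bit(b, t, n) for b in secret_bits]   # s_{i,j} = shares[j][i]

c = [1, 1, 0, 1]            # a boolean linear function: XOR of selected bits
B = [1, 3, 4]               # any t available parties
# Each P_i locally combines its shares and broadcasts x_i; no interaction.
x = {i: 0 for i in B}
for i in B:
    for j, cj in enumerate(c):
        if cj:
            x[i] ^= shares[j][i]
lam = lagrange_at_zero(B)
f_of_s = 0
for i in B:
    f_of_s ^= gf_mul(lam[i], x[i])
expected = 0
for b, cj in zip(secret_bits, c):
    expected ^= b & cj
assert f_of_s == expected
```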
Remark 3.3. Observe that in the broadcast scheme there is no need for the
set B to be given as input for the evaluation: each party P i which is available just
outputs an appropriate linear combination of its shares, and its identity P i . Then, a
function can be evaluated so long as there are at least t parties that agree to evaluate
the function. These parties do not need to be available at the same time, or know in
advance which parties would be available.
4. Schemes for Other Families of Functions. We now show how to use
the basic scheme (for linear functions) to construct schemes for other families of
functions. Given a family of functions, we shall construct a longer secret such that
every function in the family can be evaluated as a linear function of the longer secret.
Observe that any boolean function f : {0, 1}^' → {0, 1} can be represented as a
binary vector over GF(2) of length 2^' whose i-th coordinate is f(i). Similarly, any
function with ' input bits and several output bits can be represented as an array of binary vectors
of length 2^' in which the j-th vector corresponds to the j-th bit function of f(x).
The rank of a family of functions F is the smallest k for which there exist boolean
functions f_1, ..., f_k such that for every function f ∈ F there exists a renaming f' of it for
which each of the vectors representing f' is a linear combination of f_1, ..., f_k.
The rank of a family does not change if we add renamings of functions. So, we can
assume that F contains all renamings of its functions.
Theorem 4.1. Let F be any family of functions.
(1) There exists a t-out-of-n interactive private channels secret sharing scheme
for F with shares of length max{rank(F), ⌈log(n + 1)⌉}.
(2) There exists a t-out-of-n non-interactive broadcast secret sharing scheme for
F with shares of length rank(F) ⌈log(n + 1)⌉.
Proof. Let f_1, ..., f_k be a basis for the vector space spanned by the functions
in F . To share a secret s the dealer generates a new secret E(s) = f_1(s) ∘ f_2(s) ∘ · · · ∘ f_k(s)
of length k (where ∘ denotes concatenation of strings). The dealer now shares the
secret E(s) using the basic scheme (either the interactive or broadcast version, depending
on the communication model). Then, f_i(s) = e_i(E(s)), where e_i is the i-th bit function. Let
f be the function in F to be evaluated. Without loss of generality, we may assume
that f (rather than a renaming of it) is such that the vectors representing it are spanned by
f_1, ..., f_k. Every bit of f(s) is a linear combination of f_1(s), ..., f_k(s) and therefore of
the bits of E(s) (i.e., the e_i(E(s))). Since addition in GF(2^k) is bitwise, a concatenation of
linear functions is a linear function too. Thus, f(s) can be computed by evaluating a
linear function of E(s). Thus, Theorem 4.1 follows by Theorems 3.1 and 3.2.
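A small sketch of this encoding step may help. The basis functions, the coefficients, and the toy secrets below are all invented for the example; the point is only that once E(s) = f_1(s) ∘ ... ∘ f_k(s) is shared with the basic scheme, evaluating any f spanned by the basis reduces to a linear function of the bits of E(s).

```python
def encode(secret, basis):
    """E(s) = f_1(s) . f_2(s) ... f_k(s): the longer secret the dealer shares
    with the basic (linear-function) scheme.  `basis` is a list of boolean
    functions f_i; everything here is an illustrative toy."""
    return [f(secret) for f in basis]

def evaluate_via_basis(f_coeffs, encoded_bits):
    """If the vector representing f is the sum of the basis vectors with
    f_coeffs[i] = 1, then f(s) is the corresponding XOR of the bits of E(s),
    i.e. a linear function of E(s)."""
    out = 0
    for c, e in zip(f_coeffs, encoded_bits):
        out ^= c & e
    return out

# Toy example with 2-bit secrets: the two bit functions and their product.
basis = [lambda s: s & 1, lambda s: (s >> 1) & 1,
         lambda s: (s & 1) & ((s >> 1) & 1)]
s = 0b11
E = encode(s, basis)
assert evaluate_via_basis([0, 0, 1], E) == 1   # AND of the two bits
assert evaluate_via_basis([1, 1, 0], E) == 0   # XOR of the two bits
```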
We demonstrate the applicability of the above construction by considering the
family ALL of all possible functions of the secret (boolean and non-boolean).
Corollary 4.2.
(1) There exists a t-out-of-n interactive private channels secret sharing scheme
for ALL ' with shares of length max{2^' − 1, ⌈log(n + 1)⌉}.
(2) There exists a t-out-of-n non-interactive broadcast secret sharing scheme for
ALL ' with shares of length (2^' − 1) ⌈log(n + 1)⌉.
Proof. By Theorem 4.1 we have to show that the rank of ALL ' is 2^' − 1. Let
d = 2^' − 1. Consider the d boolean functions f_1, ..., f_d, where f_i(x) = 1 if and only if
x = i. To evaluate a boolean function f we notice that the functions f(x) and f(x) ⊕ 1
are renamings of each other, so we can assume that f(0) = 0; hence f_1, ..., f_d are
a basis for ALL ' . In this case, the
secret s is encoded as the vector E(s) of length d in which the s-th coordinate is 1
and all the other coordinates are zero (with the exception of s = 0, which is encoded
as the all-zero vector).
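The following toy sketch shows the indicator-vector encoding behind this proof. For clarity it keeps all 2^' coordinates instead of the 2^' − 1 obtained via the renaming argument, so the constants below are illustrative only.

```python
def encode_all(secret, ell):
    """Indicator-vector encoding used for the family ALL: E(s) has a 1 exactly
    in coordinate s.  (The proof of Corollary 4.2 saves one coordinate via a
    renaming argument; this sketch keeps all 2^ell coordinates for clarity.)"""
    return [1 if x == secret else 0 for x in range(1 << ell)]

def evaluate_any(f, secret_encoding, ell):
    """Any boolean f becomes a linear function of E(s):
    f(s) = XOR over x with f(x) = 1 of E(s)[x]."""
    out = 0
    for x in range(1 << ell):
        if f(x):
            out ^= secret_encoding[x]
    return out

ell, s = 3, 5
E = encode_all(s, ell)
f = lambda x: 1 if x >= 4 else 0      # an arbitrary (non-linear) predicate
assert evaluate_any(f, E, ell) == f(s)
```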
Notice that the length of the secret is ', while the length of the shares is about 2^' − 1.
However, there are 2^{2^' − 1} different boolean functions with domain of cardinality 2^' such
that no function is a renaming of another function (leaving aside non-boolean
functions). Therefore, our scheme is significantly better than the naive scheme in
which we share every function (up to renaming) separately. Also, the length of E(s)
for ALL ' must be at least log(2^{2^' − 1}) = 2^' − 1, so the representation for E(s) that we
use is the best possible for the family ALL ' in this particular scheme. It remains
an interesting open question whether there exists a better scheme for this family, or
whether one can prove that this scheme is optimal.
5. Non-interactive Broadcast Scheme for the Bit Family. In this section
we present a non-interactive broadcast scheme for the family of bit functions, whose
shares are of length O(') (compared to O(' log n) of the scheme from Theorem 3.2).
Theorem 5.1. Let '
195 n log log n). There exists a non-interactive broadcast
t-out-of-n secret sharing scheme for the family BIT ' in which the length of the
secrets is ' and the length of the shares is O(').
We rst present a meta scheme (Section 5.1). Then we show a possible implementation
of the meta scheme that satises the conditions of the Theorem.
5.1. A Meta Scheme. Let k and h be integers (to be fixed later) such that
h ≥ ⌈log(n + 1)⌉ and the secret s has length ' = kh. We view the secret as a binary
matrix with k rows and h columns and denote its (i, j) entry by s[i, j]. We construct,
in a way to be specified below, a new binary matrix H with 3k rows and h columns.
Then we share every row of H using Shamir's t-out-of-n scheme over GF(2^h). Since
the length of every row is h ≥ ⌈log(n + 1)⌉, Shamir's scheme is ideal and every party
gets 3k shares of length h. The distribution stage of this scheme is illustrated in
Fig. 1. When a set B of cardinality (at least) t wants to reconstruct the bit s[i, j]
of the secret, the parties in B reconstruct a subset T_{i,j} of rows (which depends only
on (i, j) and not on B). We will guarantee that these rows do not give any additional
information on the secret.
Fig. 1. An illustration of the meta scheme of the non-interactive broadcast scheme for the bit
family.
More specifically, for every 1 ≤ i ≤ k and 1 ≤ j ≤ h, we fix a set T_{i,j} ⊆ {1, ..., 3k}
(independently of the secret). The j-th column of H is constructed independently,
and depends only on the j-th column of the secret. It is chosen uniformly at random
among the column vectors such that:
(3)   Σ_{i' ∈ T_{i,j}} H[i', j] = s[i, j] (mod 2)   for every 1 ≤ i ≤ k.
To reconstruct s[i, j] it is enough to reconstruct the T_{i,j}-rows of H, and compute the
sum (modulo 2) of the j-th bit of the reconstructed rows. (Actually, every party can
sum the shares of the T_{i,j}-rows of H, and then the parties reconstruct the secret from
these shares. The j-th bit of the reconstructed secret equals s[i, j].) That is, the
message of a party in a reconstructing set consists of the shares corresponding to the
T i;j -rows of H . The existence of a matrix H satisfying Equation (3), and the security
requirement after reconstruction depend on the choice of the sets T i;j . On the other
hand, independently of the choice of the sets T i;j , any coalition of size less than t
prior to any reconstruction does not have any information on the rows of H (by the
properties of Shamir's scheme), and, hence, does not have any information on the
secret. That is, the meta scheme is secure prior to the evaluation.
5.2. Implementing the Meta Scheme. We show how to construct sets T i;j
such that the security requirement after reconstruction will hold. Let R_1, ..., R_h ⊆ {k + 1, ...,
3k} be a collection of different sets of size k. In particular, no set is
contained in another set (R_1, ..., R_h is a Sperner family [2]). For every 1 ≤ i ≤ k and
1 ≤ j ≤ h let T_{i,j} = {i} ∪ R_j.
The number of sets of cardinality k which are contained in {k + 1, ..., 3k} is (2k choose k), so
fixing k = Θ(log h)
suffices. We require that h ≥ ⌈log(n + 1)⌉ and
hk ≥ ', and choose the smallest possible h. The length of the shares (i.e., 3h ⌈log h⌉)
is Θ('). The following claims are useful for proving the security of a reconstruction:
Claim 5.1. For every secret s, the number of matrices H that satisfy Equation (3)
for all j (1 ≤ j ≤ h) is at least 1 and is independent of s.
Proof. Set H[i', j] = 0 for every k < i' ≤ 3k and H[i, j] = s[i, j] for every 1 ≤ i ≤ k;
this matrix satisfies Equation (3). To show that the number of matrices satisfying Equation (3)
is independent of s, notice that H is a solution of a non-homogeneous system of linear
equations. Since the system has at least one solution, the number of such matrices is
the number of solutions to the homogeneous linear system of equations, where every
bit s[i, j] is replaced by 0.
To show that no coalition gains any additional information, we rst consider the
case in which the coalition knows only the T i;j -rows of H . In this case, the coalition
can reconstruct s[i, j]. We prove that the coalition does not gain any information on
the other bits of the secret (i.e., the bits s[i', j'] with (i', j') ≠ (i, j)).
Claim 5.2. Let s be a secret and fix the T_{i,j}-rows of H such that Σ_{i' ∈ T_{i,j}} H[i', j] = s[i, j].
The number of matrices H that agree with these rows and satisfy Equation (3) for all j (1 ≤ j ≤ h) is at least
1 and is independent of s.
Proof. We first prove that there exists a matrix H satisfying the requirements.
Since the columns of H are constructed independently, it is enough to prove the
existence of every column j'. Only the j'-th column of the T_{i,j}-rows of H, and the
j'-th column of s, influence the j'-th
column of H; hence, while considering the j'-th column we can ignore the rest of the columns.
We first consider the j-th column of H (i.e., j' = j). In this case
we can assign H[i', j] a value as follows: for every i' such that k < i' ≤ 3k and i' ∉ T_{i,j}, set H[i', j] = 0
(this is arbitrary); for every i' such that 1 ≤ i' ≤ k with i' ≠ i, set
H[i', j] so that Equation (3) holds for the pair (i', j).
The case j' ≠ j is similar except that we have to be careful about the pair (i, j'), since row i is fixed:
as R_{j'} is not contained in R_j, there is an element i'' ∈ R_{j'} \ R_j. We can assign H[i', j'] a value as
follows: for every i' such that k < i' ≤ 3k with i' ∉ R_j and i' ≠ i'', set H[i', j'] = 0 (this is arbitrary);
set H[i'', j'] so that Equation (3) holds for the pair (i, j'); finally, for every i' such that 1 ≤ i' ≤ k
with i' ≠ i, set H[i', j'] so that Equation (3) holds for the pair (i', j').
We have shown that there exists a matrix as required. By the same arguments
as in the proof of Claim 5.1, the number of possible matrices given a secret s is
independent of s.
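The column-by-column construction used in the proofs of Claims 5.1 and 5.2 can be sketched as follows. The sketch assumes the sets have the form T_{i,j} = {i} ∪ R_j, as in the construction above; indices are 0-based and the tiny parameters are chosen only for illustration.

```python
import itertools, secrets

def parity(H, rows, j):
    p = 0
    for r in rows:
        p ^= H[r][j]
    return p

def build_H(S, R):
    """Column-by-column construction of the 3k x h matrix H.  S is the k x h
    secret matrix; R[j] is a size-k subset of rows k..3k-1 (0-based), and the
    sets of the meta scheme are assumed to be T_{i,j} = {i} union R[j]."""
    k, h = len(S), len(S[0])
    H = [[0] * h for _ in range(3 * k)]
    for j in range(h):
        for r in range(k, 3 * k):            # bottom 2k rows: uniformly random
            H[r][j] = secrets.randbelow(2)
        for i in range(k):                   # top k rows: fix the parity
            H[i][j] = S[i][j] ^ parity(H, R[j], j)
    return H

def reconstruct(H, R, i, j):
    """s[i][j] is the XOR of column j over the rows in T_{i,j} = {i} union R[j]."""
    return H[i][j] ^ parity(H, R[j], j)

k, h = 2, 3
S = [[1, 0, 1], [0, 1, 1]]
# h distinct k-subsets of the bottom 2k rows (all of equal size, hence Sperner):
R = list(itertools.combinations(range(k, 3 * k), k))[:h]
H = build_H(S, R)
assert all(reconstruct(H, R, i, j) == S[i][j] for i in range(k) for j in range(h))
```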
We are now ready to prove the security requirement (note that this scheme is not
reusable).
Claim 5.3. The above implementation of the meta scheme is secure.
Proof. Let B be any set that evaluates a bit s[i; j] of the secret, and C be any
coalition. The security requirement before reconstruction is easy (this was discussed at
the end of Section 5.1). Suppose that C has obtained the messages sent by the parties
in B. That is, the parties in C know the shares of the parties of B corresponding to
the T i;j -rows of H . Therefore, the only information that they gain is the T i;j -rows
of H . Then, given the shares of the coalition and all the messages that were sent,
every matrix H that agrees with these rows is possible. We need to prove that, given
two secrets in which the (i, j)-th bit is the same, and a matrix H in which only the
T_{i,j}-rows are fixed, the probability that H was constructed is the same for both
secrets. By Claim 5.1 and Claim 5.2 this probability depends only on s[i, j].
6. Characterization of Families with Ideal Schemes. In this section we
consider ideal secret sharing schemes for a family F . We say that two secrets s, s'
are distinguishable by F if there exists a function f ∈ F for which f(s) ≠ f(s'). A
secret sharing scheme for F is ideal if the cardinality of the domain of shares equals
the cardinality of the domain of distinguishable secrets. These are the shortest shares
possible by [26]. The following theorem gives several impossibility results for ideal
schemes for a family F which contains the bit functions.
Theorem 6.1 (Characterization Theorem).
(1) Let 2 ≤ t ≤ n. Suppose that there is an ideal interactive t-out-of-n secret
sharing scheme for a family of functions F such that BIT ' ⊆ F . Then, any
boolean function f ∈ F is linear.
(2) Let 2 ≤ t ≤ n. Suppose that there is an ideal non-interactive t-out-of-n secret
sharing scheme for a family of functions F such that BIT ' ⊆ F . Then,
F ⊆ LIN ' .
(3) Let 3 ≤ t ≤ n − 1. In an ideal t-out-of-n secret sharing the evaluation of
every non-constant boolean function requires interaction on private channels.
(4) In an ideal non-interactive 2-out-of-n secret sharing every non-constant
boolean function cannot be evaluated via a broadcast channel.
It follows that for families F , with BIT ' ⊆ F , if interaction is allowed then every
boolean function in F is linear. If interaction is not allowed, then again all
the functions in F are linear. In particular, when 3 ≤ t ≤ n − 1, interaction over private channels
must be used.
In Section 6.1 we discuss some basic properties of ideal schemes that will
be needed for the proof of this theorem. In Section 6.2 we prove (1) and (2), and in
Section 6.3 we prove (3) and (4).
6.1. Properties of Ideal Schemes. In this section we discuss some simple
properties of ideal schemes which are useful in the sequel.
Proposition 6.2 ([26]). Fix the shares of any t − 1 parties in an ideal t-out-of-n
secret sharing scheme. Then, the share of any other party P is a permutation of the
secret. In particular all shares are possible for P .
In our proof, we use the following proposition regarding private functions, that is,
functions that can be computed without revealing any other information on the inputs.
(For a formal definition of privacy the reader is referred to Appendix C.) Bivariate
boolean private functions were characterized by Chor and Kushilevitz [16]:
Proposition 6.3 ([16]). Let A_1, A_2 be nonempty sets and f : A_1 × A_2 → {0, 1}
be an arbitrary boolean function. Then, f can be computed privately if and only if
there exist boolean functions f_1 : A_1 → {0, 1} and f_2 : A_2 → {0, 1} such that for every
x ∈ A_1 and y ∈ A_2 it holds that f(x, y) = f_1(x) ⊕ f_2(y).
Claim 6.1. Let f be a boolean function that can be evaluated securely in an ideal
2-out-of-2 secret sharing scheme. Then, there exist boolean functions f_1 and f_2 such
that f(s) = f_1(x) ⊕ f_2(y), where x and y are the shares of P_1 and P_2, respectively.
Proof. The function f is evaluated from x and y, i.e., there is some function f'
such that f'(x, y) = f(s). By Proposition 6.2 any information that a party gets on
the share of the other party is translated to information on the secret. That is, the
parties must compute f'(x, y) in such a way that each party receives only information
implied by f'(x, y). In other words, they compute the boolean function f' privately.
By Proposition 6.3 there exist f_1 and f_2 such that f'(x, y) = f_1(x) ⊕ f_2(y).
Consider an ideal t-out-of-n secret sharing scheme for F . Without loss of gener-
ality, we may assume that all the secrets are distinguishable by F . So, the parties
can reconstruct the secret from their shares. Denote the reconstructing
function by h(s_1, ..., s_t). We next prove an important property of the messages
sent in a non-interactive protocol that evaluates f.
Claim 6.2. Let f be a function that can be securely evaluated in an ideal t-out-
of-n secret sharing scheme without interaction. Then, without loss of generality, all
the messages sent by P_i, while holding the share s_i, to the other parties
are identical, and we denote this common message by m_i(s_i).
Proof. It is sufficient to prove the claim for P_1. Suppose that the shares of
the parties are s_1, ..., s_t. Consider a possible message of P_1 while the set
B = {P_1, ..., P_t} computes the value f(s). Party P_1 might toss coins, so if there are
several messages choose the first one lexicographically. When P_1 holds two shares s_1
and s_1' such that f(h(s_1, s_2, ..., s_t)) ≠ f(h(s_1', s_2, ..., s_t)), the messages of P_1 to P_i have
to be different, otherwise P_i will compute an incorrect value in one of the cases. On the
other hand, for two shares s_1 and s_1' such that f(h(s_1, s_2, ..., s_t)) = f(h(s_1', s_2, ..., s_t)),
the messages sent by P_1 to the parties in {P_2, ..., P_t} have to be the same in both
cases, otherwise the coalition {P_2, ..., P_t} can distinguish between the secrets
h(s_1, s_2, ..., s_t) and h(s_1', s_2, ..., s_t), which have the same value of f. Hence, without loss of
generality, the messages sent by P_1 while holding the share s_1 are identical and equal
to m_1(s_1).
6.2. Proofs of (1) and (2) of Theorem 6.1. The proofs of Items (1) and (2)
are similar, and have two stages. We consider the following two-party distribution
scheme over GF(2^'), denoted XOR. Given a secret s, the dealer chooses at random
x ∈ GF(2^'). The share of the first party is x, and the share of the second party is
s ⊕ x. In the first stage (Section 6.2.1) we characterize the functions that can be
evaluated from the shares of XOR. In the second stage (Section 6.2.2) we show that
if there exists an ideal t-out-of-n secret sharing scheme for F , then XOR is a secret
sharing scheme for F . The combination of the two stages implies Items (1) and (2).
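A small sketch of the scheme XOR and of the non-interactive evaluation of a linear function (as in Remark 3.1) may be useful here; the parity function and the parameters are arbitrary examples.

```python
import secrets

def xor_deal(secret, nbits):
    """The two-party scheme XOR: P1 gets a uniform x, P2 gets x XOR s."""
    x = secrets.randbelow(1 << nbits)
    return x, x ^ secret

def eval_linear(f, share1, share2):
    """For a linear f over GF(2)^nbits, f(s) = f(x) XOR f(y): each party sends
    f of its own share, so one message each and no further interaction.
    Claims 6.4 and 6.5 show this is essentially all that XOR can evaluate."""
    return f(share1) ^ f(share2)

nbits, s = 8, 0b10110011
x, y = xor_deal(s, nbits)
parity = lambda v: bin(v).count("1") & 1      # a boolean linear function
assert eval_linear(parity, x, y) == parity(s)
```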
6.2.1. Characterizing the Functions which can be Evaluated with XOR.
We rst characterize the functions that can be evaluated with XOR without interac-
tion. It can be seen that every linear function of the secret can be evaluated with XOR
(the details are the same as in Remark 3.1). We prove that no other functions can be
evaluated securely without interaction. Then we characterize the boolean functions
that can be evaluated with XOR with interaction and show that these functions are
exactly the boolean linear functions.
To prove the characterization without interaction, we will prove that f(s) can be
computed from f(x) and f(y) (where x and y are the shares). In the next claim we
prove that every function with this property is a renaming of a linear function.
Claim 6.3. Let f be a function. If there exists a function
g such that f(x + y) = g(f(x), f(y)) for every x and y, then f is a renaming of
a linear function.
Proof. Let X_0 = {x : f(x) = f(0)}. We first prove that f(x) = f(y) if and only
if x + y ∈ X_0. The "only if" direction follows from
f(x + y) = g(f(x), f(y)) = g(f(y), f(y)) = f(y + y) = f(0).
Similarly the "if" direction follows from the following simple
equations: f(x) = f((x + y) + y) = g(f(x + y), f(y)) = g(f(0), f(y)) = f(0 + y) = f(y).
That is, if a, b ∈ X_0 then f(a) = f(b) = f(0) and hence a + b ∈ X_0. In other words the set X_0 is a linear space
over the field GF(2). Now consider a linear transformation f'
whose null space is X_0 (since X_0 is a linear
space, such linear transformations exist). We claim that f is a renaming of f', i.e.,
f(x) = f(y) if and only if f'(x) = f'(y). By the equivalence above, f(x) = f(y) if and only
if x + y ∈ X_0 (by the definition
of X_0 and f'). Since f' is linear, f'(x) = f'(y) if and only if x + y ∈ X_0.
Claim 6.4. Let f be any function that can be evaluated without interaction with
the scheme XOR. Then, f is a renaming of a linear function.
Proof. Assume, without loss of generality, that f :
consider the function of x dened as min fy which is a renaming of f ).
By Claim 6.3 it su-ces to prove that f(x + y) can be computed from f(x) and f(y).
Denote the share of P_1 by x and the share of P_2 by y. Recall that s = x + y. By
Claim 6.2 the message sent by P_2 while holding a share y to P_1 is f(y), and the message
sent by P_1 while holding a share x to P_2 is f(x). Hence, P_1 can compute f(x +
y) from x and f(y). Moreover, for every two shares x_1 and x_2 held by P_1 such that
f(x_1) = f(x_2), and every share y held by P_2, party P_1 must compute the same value
of f(s), since P_2 receives the same message from P_1 in both cases, and therefore
computes the same value of f(s). Hence, P 1 can compute f(x + y) from f(x) and
f(y). The computed function is the desired function g with f(x + y) = g(f(x), f(y)).
By Claim 6.3 this implies that f is a renaming of a linear function.
We now prove a similar claim for interactive evaluations. However, this claim
applies only to boolean functions.
Claim 6.5. Let f be a boolean function that can be evaluated interactively with
the scheme XOR. Then, f is a renaming of a linear function.
Proof. Since XOR is an ideal scheme, Claim 6.1 implies that there exist boolean
functions f_1 and f_2 such that f(s) = f_1(x) ⊕ f_2(y) whenever x ⊕ y = s. In particular,
there exists a bit b such that f(s) ⊕ b is a linear function, which is a
renaming of f.
The exact family of functions which can be evaluated interactively with XOR
consists of the functions f for which the two-argument function f(x + y) can be computed
privately from x and y (as characterized in [29]).
6.2.2. Reduction to XOR. In this section we prove that if there exists an ideal
t-out-of-n secret sharing scheme for a family F , then there exists an ideal 2-out-of-2
secret sharing scheme for F . We then prove that this implies that XOR is a secret
sharing scheme for F .
Claim 6.6. If there exists an interactive (respectively non-interactive) ideal t-
out-of-n secret sharing scheme for F, then there exists an interactive (respectively
non-interactive) ideal 2-out-of-2 secret sharing scheme for F.
Proof. Let B = {P_1, ..., P_t} be a set of t parties. We ignore the shares distributed
to parties not in B. Therefore, we have an ideal t-out-of-t secret sharing
scheme for F. Let ⟨s°_1, ..., s°_t⟩ be a (fixed) vector of shares that is dealt to
B with positive probability. To share a secret s the dealer now generates a random
vector of shares of s (according to the scheme) that agrees with s°_3, ..., s°_t, respectively,
and gives the first two components of this vector to P_1 and P_2. Parties P_1 and P_2
know the shares of the other parties (as they are fixed) and therefore P_1, say, can
simulate the parties P_3, ..., P_t in the protocol which evaluates the function f ∈ F.
Hence, P_1 and P_2 can evaluate f(s) from their shares and the messages they exchange.
On the other hand, P_1 has no more information than the information known to the
coalition {P_1, P_3, ..., P_t} of cardinality t − 1 in the t-out-of-t scheme. So, P_1 does not
gain additional information on s. Similar arguments hold for P_2. This implies that
this scheme is a 2-out-of-2 secret sharing scheme for F.
Claim 6.7. Let BIT ' ⊆ F. If there exists an interactive (respectively non-
interactive) ideal t-out-of-n secret sharing scheme for F, then XOR is an interactive
(respectively non-interactive) secret sharing scheme for F.
Proof. By Claim 6.6 we can assume that there exists an ideal 2-out-of-2 secret
sharing scheme for F, say Π. We transform Π into XOR in a way that every function
that can be evaluated in Π can also be evaluated in XOR.
Claim 6.1 implies that for every j, where 1 ≤ j ≤ ', there exist boolean functions
f_j^1 and f_j^2 such that e_j(s) = f_j^1(x) ⊕ f_j^2(y), where e_j is the j-th bit function. For i = 1, 2
define m_i as the concatenation
of the values of these functions, that is m_i(x) = f_1^i(x) ∘ f_2^i(x) ∘ · · · ∘ f_'^i(x). Then, for
every secret s and random input r of the dealer, m_1(x) ⊕ m_2(y) = s, where x and y are
the shares produced by Π. Therefore, the scheme Π' defined by Π'(s, r) = ⟨m_1(x), m_2(y)⟩ is a distribution
scheme equivalent to XOR. We still have to show that the two parties can
evaluate securely every function in F with Π'.
We first prove that m_1 is invertible. Clearly, the two parties can reconstruct the
secret in Π', while every single party knows nothing about the secret. Therefore, Π'
is a 2-out-of-2 secret sharing scheme and, by [26], the cardinality of the domain of
shares is at least the cardinality of the domain of secrets. Thus, the cardinality of the
range of m_1 is at least as large as 2^'. The domain of m_1, which is the set of shares
of P_1, has cardinality 2^'. Therefore, m_1 is a bijection and, hence, invertible. So, P_1
can reconstruct the share x from m_1(x). Similarly, P_2 can reconstruct the share y
from m_2(y). This implies that the parties P_1 and P_2, while holding m_1(x) and m_2(y),
respectively, can evaluate every function f ∈ F. We have proved that every function
in F can be evaluated with Π', which is equivalent to XOR. Furthermore, if the
evaluation in the original scheme required no interaction, then the evaluation
with the XOR scheme also requires no interaction.
6.3. Proofs of (3) and (4) of Theorem 6.1. Next we prove Item (4). Namely,
in any ideal non-interactive t-out-of-n secret sharing scheme (2 ≤ t ≤ n − 1) evaluation
of a boolean function requires private channels. The proof is by contradiction. Suppose
that there is an ideal non-interactive t-out-of-n scheme in which the evaluation
can be held on a broadcast channel. Then, by the same arguments as in Claim 6.6,
since t ≤ n − 1, there is an ideal 2-out-of-3 scheme with the same property. Our first
claim is that, without loss of generality, in the evaluation of f(s) every party sends a
one-bit message such that the sum of the messages is f(s).
Claim 6.8. Assume there exists an ideal 2-out-of-3 secret sharing scheme in
which a non-constant boolean function f can be securely evaluated without interaction
via a broadcast channel. Then, P_1 and P_2 can securely compute f(s) by sending one
bit each, such that the sum (modulo 2) of the two bits equals f(s).
Proof. By Claim 6.2 we can assume that the message of P_1 while holding the
share s_1 is the bit m_1(s_1). Similarly, we assume that the message of P_2
while holding the share s_2 is m_2(s_2). Furthermore, the value of f(s) is
determined by these two messages. There are two values for f(s) and two values for
each message. Since each party knows nothing about f(s), a change of any message
will result in a change of f(s). The only boolean functions of two binary variables
satisfying this requirement are m_1 ⊕ m_2 and m_1 ⊕ m_2 ⊕ 1. Without
loss of generality (by letting P_1 complement its message in the second case), assume
that f(s) = m_1 ⊕ m_2, and the claim follows.
Claim 6.9. Consider an ideal non-interactive 2-out-of-3 secret sharing scheme
for a family F which contains a non-constant boolean function f . Then, f cannot be
evaluated via a broadcast channel.
Proof. Fix the share of P 3 to be z = 0. By Proposition 6.2, the share x of P 1
is a permutation of the secret. Assume, without loss of generality, that this share is
equal to the secret. (Of course P 1 does not know that z = 0 and does not know the
secret.) Since P_3 does not know f(s) although z = 0, the value of f(s) is not fixed,
and the message that P_1 sends to P_2 while P_1 and P_2 evaluate f(s) cannot be constant
(otherwise P_2 will not be able to evaluate f(s)). Furthermore, for any two shares x and x'
held by P_1 such that f(x) = f(x'), the messages of P_1 to P_2 should be the same
(otherwise P_3 will distinguish between the two secrets). Without loss of generality,
the message m_1 which P_1 sends to P_2 is f(x). By Claim 6.8, m_1 ⊕ m_2 = f(s) = f(x), so
the message that P_2 sends to P_1 is constant and P_1 will not be able to evaluate
f(s), which leads to a contradiction. Hence, f(s) cannot be evaluated on a broadcast
channel.
We conclude this section by proving Item (3). That is, for 3 t n 1,
the evaluation requires interaction via private channels. Again, we assume towards
contradiction that there exists an ideal non-interactive t-out-of-n scheme for the family
F , and construct an ideal non-interactive 3-out-of-4 scheme for the family F . We show
that this contradicts Claim 6.9.
Claim 6.10. Assume there is an ideal 3-out-of-4 secret sharing scheme for a
family F in which a function f ∈ F can be evaluated via private channels without
interaction. Then, there is an ideal 2-out-of-3 secret sharing scheme in which f can
be evaluated via a broadcast channel.
Proof. By Claim 6.2 we may assume that the messages sent during the evaluation
by one party to the other two parties are identical. In particular, if {P_i, P_j, P_4}
evaluates a function, then P_i knows the messages that P_4 sends to P_j. To construct
a 2-out-of-3 secret sharing scheme, we fix a share u for P_4, and share the secret s
with a random vector of shares such that the share of P_4 is u. When P_i and P_j want
to evaluate f(s), where 1 ≤ i < j ≤ 3, then P_i simulates P_4, and all messages are
broadcast. After this evaluation, every party P_k, where k ∈ {1, 2, 3}, will have the
same information as the coalition {P_k, P_4} had in the original scheme. Hence, P_k does
not gain any information that is not implied by f(s). That is, we get a 2-out-of-3
secret sharing scheme for F.
7. Discussion. In this section we consider possible extensions of our work. So
far, we considered only threshold secret sharing schemes. In [22] a general notion of
secret sharing schemes for arbitrary collections of reconstructing sets is dened. That
is, we are given a collection A of sets of parties called an access structure and require
that any set in A can reconstruct the secret, while any set not in A does not know
anything about the secret. Clearly, secret sharing schemes exist only for monotone
collections. Conversely, it is known that for every monotone collection there exists
a secret sharing scheme [22]. More efficient schemes for general access structures
were presented (e.g., in [7, 36]). However, the length of the shares in these schemes
can be exponential in the number of parties (i.e., of length ' · 2^{Θ(n)}, where n is the
number of parties in the system and ' is the length of the secret). Our definition of
secret sharing for a family F of functions can be naturally generalized for an arbitrary
access structure. To construct such schemes, observe that the schemes of [22, 7, 36]
are \linear": the share of each party is a vector of elements over some eld, and every
set in A reconstructs the secret using a linear combination of elements in their shares.
Thus, if we share every bit of the secret independently, we can evaluate every linear
function of the secret without any interaction (the details are as in Theorem 3.2). This
implies that for every access structure, there exists a scheme for the family ALL ' in
which the length of the shares is O(2^' 2^n). However, if the access structure has a more
efficient linear scheme for sharing a single bit then the length of the shares can be
shorter (but at least 2^').
Another possible extension of the denition is by allowing a weaker \non-perfect"
denition of security. Such extensions in the context of traditional secret sharing
schemes were considered, e.g., in [9]. As remarked, there are several related works
on simultaneous secret sharing. For example, in [23] each set of cardinality at least
t should be able to reconstruct some of the secrets (but not all). In [11] each set B
of cardinality at least t is able to reconstruct any of the distributed secrets. 6 The
security requirement considered in [11, 24, 10] is somewhat weak: after revealing one
of the secrets, limited information about other secrets may be leaked.
Our scheme for the family of functions ALL ' uses shares of size 2 ' . We can
construct an efficient scheme if we relax the security requirement as follows: every
set of size at least t can evaluate any function f(s) in a way that every coalition of
at most ⌊(t − 1)/2⌋ parties gains no information on the secret (but larger coalitions
may get some information). In this case, we can use Shamir's t-out-of-n scheme to
distribute the secret and the interactive protocol of [5] or [15] to securely compute
f(s). In this scenario, the length of the share is the same as the length of the secret.
However, for most functions f , the length of the communication in the evaluation of
f is exponential in '.
6 In fact [11] deals with more general access structures and not only threshold schemes.
Acknowledgment
. The authors would like to thank the referees for several
helpful comments.
--R
On hiding information from an oracle
Combinatorial Theory
Hiding instances in multioracle queries
Universally ideal secret sharing schemes
Completeness theorems for noncrypto- graphic fault-tolerant distributed computations
Keeping shares of a secret secret
in Advances in Cryptology - CRYPTO '88
The security of ramp schemes
Some ideal secret sharing schemes
On the classi
Some improved bounds on the information rate of perfect secret sharing schemes
How to share a function securely
in Advances in Cryptology - AUSCRYPT '92
Shared generation of authenticators and signatures
Communication complexity of secure computation
sharing schemes realizing general access structure
in Advances
On secret sharing systems
Generalized linear threshold scheme
Privacy and communication complexity
Communications of the ACM
On data banks and privacy homomor- phisms
How to share a secret
An introduction to shared secret and/or shared control and their application
The geometry of shared secret schemes
An explication of secret sharing schemes
--TR
--CTR
Christian S. Collberg , Clark Thomborson, Watermarking, tamper-proffing, and obfuscation: tools for software protection, IEEE Transactions on Software Engineering, v.28 n.8, p.735-746, August 2002 | secret sharing;private computations;private channels;interaction;broadcast channel |
350571 | The Online Transportation Problem. | We study the online transportation problem under the assumption that the adversary has only half as many servers at each site as the online algorithm. We show that the GREEDY algorithm is $\Theta( {\rm min}(m, \lg C))$-competitive under this assumption, where m is the number of server sites and C is the total number of servers. We then present an algorithm BALANCE, which is a simple modification of the GREEDY algorithm, that is, O(1)-competitive under this assumption. | Introduction
We consider the natural online version of the well-known transportation problem
[2, 5]. The initial setting consists of a collection of server
sites in a metric space M. Each server site s j has a positive integral capacity c j .
The online algorithm A sees over time a sequence r_1, r_2, ..., r_n of requests for
service, with each request being a point in M. In response to the request r i , A
must select a site s oe(i) to service r i . The cost for this assignment is the distance
in the metric space between s oe(i) and r i . Each site s j can service at
most c j requests. The dilemma faced by the online algorithm A is that, at the
time of the request r i , A is not aware of the location of the future requests. The
goal for the online algorithm is to minimize (1/n) Σ_{i=1}^{n} d(r_i, s_{σ(i)}), the average
cost to service the requests. Note that this is equivalent to minimizing the total cost
Σ_{i=1}^{n} d(r_i, s_{σ(i)}).
For concreteness, consider the following two examples of online transportation
problems. In the fire station problem, the site s j is a fire station that
contains c j fire crews. Each request is a fire that must be handled by a fire crew.
The problem is to assign the crews to the fire so as to minimize the average
distance traveled to get to a fire. In the school assignment problem, the site s j
is a school that can has a capacity of c j students. Each request is a new student
who moves into the school district. The problem is to assign the children to a
kalyan@cs.pitt.edu, Computer Science Dept., University of Pittsburgh, Pittsburgh, PA
15260, Supported in part by NSF under grant CCR-9202158.
y kirk@cs.pitt.edu, Computer Science Dept., University of Pittsburgh, Pittsburgh, PA 15260,
Supported in part by NSF under grant CCR-9209283.
school so as to minimize the average distance traveled by the children to reach
their schools.
The standard measure of "goodness" of an online algorithm is the competitive
ratio. For the online transportation problem, the competitive ratio for an online
algorithm A is the supremum over all possible instances I, of A(I)=OPT (I),
where A(I) is the total cost of the assignment made by A, and OPT (I) is the
total cost of the minimum cost assignment for instance I. The standard way
to interpret the competitive ratio is as a payoff of a game played by the online
algorithm A against an all powerful adversary that specifies the requests, and
services them in the optimal way. Note that the instance I specifies the metric
space as well as the values of each s j , c j , and r i .
In [1, 3] the online assignment problem, a special case of online transportation
in which each capacity c In [1], it was shown that the
competitive ratio of the intuitively appealing greedy algorithm, which assigns
the nearest available server site to the new request, has a competitive ratio of
In [1, 3], it was shown that the optimal deterministic competitive ratio
is 1. The algorithm that achieves this competitive ratio requires a shortest
augmenting path computation for each request. These results illustrate some
shortfalls of using competitive analysis, namely:
- The achievable competitive ratios often grow quickly with input size, and
would seem to be overly pessimistic for "normal" inputs.
- The algorithm that achieves the optimal competitive ratio is often unnecessarily
complicated for "normal" inputs.
- The poor competitive ratio of an intuitive greedy algorithm may not reflect
the fact that it may perform reasonably well on "normal" inputs.
In situations where competitive analysis suffers such shortcomings, it is important
to find alternate ways to identify online algorithms that would work
well in practice. In this paper, we adopt a modified version of competitive analysis
that we call the weak adversary model. Generally speaking, in this model
the adversary is given slightly less resources than the online algorithm. The
intuition is that for "normal" inputs, one might expect that the performance of
an offline algorithm would not degrade significantly if its resources were slightly
reduced. Hence, if we can prove that an online algorithm is competitive against
an adversary with slightly less resources, then one might argue that the online
algorithm will be competitive against an equivalently equipped offline algorithm
on "normal" inputs. One can also view this weakening of adversary as measuring
the additional resources required by the online algorithm to offset the decrease
in performance due to the online nature of the problem.
In the case of the transportation problem we compare the online algorithm
with c_i servers at s_i to an offline algorithm with a_i = c_i/2 servers at s_i
(we assume that each c_i is even). Given an instance I of the online transportation
problem with n requests, where n ≤ Σ_i a_i, let I' be the same instance with each
capacity c_i replaced by a_i. We then say the halfopt-competitive ratio of an online
algorithm A is the supremum over all such instances I
of the ratio A(I)/OPT(I'). We assume that the online algorithm has twice as
many servers as the adversary because this is the least advantage that we can
give to the online algorithm without annulling our analysis techniques.
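For very small instances, the halfopt-competitive ratio can be computed directly by expanding each site into unit-capacity servers and solving the offline assignment by brute force. The helper below is only an illustration of the definition; its names and the toy one-dimensional instance are invented, and the exhaustive search is practical only for tiny examples.

```python
import itertools

def min_cost_assignment(requests, sites, caps, dist):
    """Exact offline transportation cost by brute force (tiny instances only).
    Each site s_j is expanded into caps[j] unit-capacity servers."""
    units = [j for j, c in enumerate(caps) for _ in range(c)]
    best = float("inf")
    for perm in itertools.permutations(units, len(requests)):
        cost = sum(dist(requests[i], sites[j]) for i, j in enumerate(perm))
        best = min(best, cost)
    return best

def halfopt_ratio(online_cost, requests, sites, caps, dist):
    """Online cost divided by the optimal cost of an adversary that has only
    caps[j] // 2 servers at each site (the weakened adversary of this paper)."""
    half_caps = [c // 2 for c in caps]
    return online_cost / min_cost_assignment(requests, sites, half_caps, dist)

# A toy 1-D instance: two sites, two online servers each, one adversary server each.
sites, caps = [-1.0, 2.0], [2, 2]
requests = [0.0, 0.5]
d = lambda a, b: abs(a - b)
print(min_cost_assignment(requests, sites, [1, 1], d))   # 2.5: one request per site
```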
In this paper we present the following results. In section 3, we show that
the halfopt-competitive ratio of the greedy algorithm is Θ(min(m, log C)), where
C = Σ_{i=1}^{m} c_i is the sum of the capacities. If the server capacity of each site is
constant, then the halfopt-competitive ratio is logarithmic in m, a significant
improvement over the exponential bound on the traditional competitive ratio.
In section 4, we describe the algorithm Balance, which is a simple modification
of the greedy algorithm, and has a halfopt-competitive ratio that is O(1). Recall
that the traditional competitive ratio of every deterministic online algorithm is
Ω(m).
We now summarize related results. The weakened adversary model was introduced
in [6] in the context of studying paging. This model has also been used
to study variants of the k-server problem, a generalization of the paging problem
(see for example [8]). References to other other suggested variants of competitive
analysis can be found in [4]. Further ancillary results on online assignment,
which are not directly related to the results in this paper, can be found in [1].
In [7], the average competitive ratio for the greedy algorithm in the online assignment
problem is studied under the assumption that the metric space is the
Euclidean plane and the points are uniformly distributed in a unit square. The
offline transportation problem can be solved in polynomial-time[2, 5].
Preliminaries
In this section we introduce some definitions, facts, and concepts that are common
to the remaining sections. We generally begin by assuming the simplifying
condition that the online capacity c_i of each server site is two. We will think
of s_i as containing two online servers s_i^1 and s_i^2 that move to service requests.
We also think of s_i as containing one adversary server s_i^a. We assume, without
loss of generality, that the adversary services request r_i with s_i^a. We use s_{σ(i)} to
denote the site that the online algorithm uses to service request r_i. We define
a weighted bipartite graph G = (V, E), which we call the response graph,
by including an online edge (r_i, s_{σ(i)}) and an adversary edge (r_i, s_i) for each
request r_i. The weight of each edge (r_i, s_j) in G is the distance d(r_i, s_j) between
r_i and s_j in the underlying metric space. In figure 1, the online edges are
the solid edges, the adversary edges are the dashed edges, the server sites are
the filled circles, and the requests are the question marks. The notation shown
in figure 1 will be used throughout this paper.
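The response graph is easy to build from the two assignments. The helper below is purely illustrative (its names and the small example are not from the paper); it just groups the request and server vertices into connected components, the objects that Lemma 1 and the later tree constructions work with.

```python
from collections import defaultdict

def response_components(online_site, adversary_site):
    """Connected components of the response graph: request i is joined to
    site online_site[i] (online edge) and to site adversary_site[i]
    (adversary edge).  Each component has 2*(#requests) edges, and Lemma 1
    implies it contains at most one cycle."""
    adj = defaultdict(set)
    for i, (so, sa) in enumerate(zip(online_site, adversary_site)):
        r = ("r", i)
        for s in (("s", so), ("s", sa)):
            adj[r].add(s)
            adj[s].add(r)
    comps, seen = [], set()
    for v in list(adj):
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Requests 0..2; online uses sites [1, 1, 2], the adversary uses sites [0, 2, 2].
for comp in response_components([1, 1, 2], [0, 2, 2]):
    print(sorted(comp))
```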
Lemma 1 Let r_i be a request vertex that is in a cycle in G. Let T be the connected
component of G − (r_i, s_{σ(i)}) that contains r_i. Then T is a tree.
Proof: Consider a breadth first search of T starting from r i . Such a search
divides T into levels L_0, L_1, L_2, ..., where the vertices in level L_k are k hops from
Figure 1: An Example Connected Component of the Response Graph
r i . The vertices in the odd levels are server vertices, and the vertices in even
levels are request vertices. Edges that go from an even level to an odd level are
adversary edges, and edges that go from an odd level to an even level are online
edges. Assume to reach a contradiction that C is a cycle in T . Let y be the
vertex in C that is in the highest level L c (i.e., largest c), and let x and z be the
two vertices adjacent to y in C. Note that it may be the case that x = z. Now
it must be the case that x and z are in L c\Gamma1 or we would get a contradiction to
one of the bipartitieness of G, the definition of x, or the definitions of the levels.
If c is odd, then we get a contradiction to the fact that the adversary has only
one server per site. If c is even, we get a contradiction to the fact that the online
algorithm only uses one server to service each request.
Let T be a tree as described in lemma 1. If we root the tree T at r i then T
has the following structure. For each request r j 2 T , the one child of r j is s j . If
r j is not the root, then the server site s oe(j) is the parent of r j in T . The leaves of
are server sites with no incident online edges. We denote the total cost of the
adversary edges in a tree T by OPT (T ), that is, OPT
Analogously, we define ON
Note that ON (T ) includes
the cost of the online edge incident to the root of T , even though this edge is not
in T . For a vertex x 2 T , we define the leaf distance ld(x) to be the minimum
over all leaves s j in T of the distance between x and s j . If a server at site s j
serviced the root r i of T , then ld(s x is a node in T , we
define T (x) to the the subtree of T rooted at x.
In this paper, log means the logarithm base 2.
3 Analysis of the Greedy Algorithm
We begin with the upper bound on the competitive ratio for the algorithm
Greedy, which uses the nearest available server to service each request. We
first assume that the online capacity of each server site is two, and then show
how to extend the proof to the general case.
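For reference, a minimal sketch of Greedy on a finite metric is given below; the tie-breaking rule, the names, and the one-dimensional example are arbitrary choices, and the code assumes the total capacity is at least the number of requests.

```python
def greedy(requests, sites, caps, dist):
    """The Greedy online algorithm: serve each request with the nearest site
    that still has an unused server (ties broken arbitrarily).  Returns the
    assignment and the total online cost."""
    remaining = list(caps)
    assignment, total = [], 0.0
    for r in requests:
        j = min((j for j in range(len(sites)) if remaining[j] > 0),
                key=lambda j: dist(r, sites[j]))
        remaining[j] -= 1
        assignment.append(j)
        total += dist(r, sites[j])
    return assignment, total

# 1-D example: sites at -1 and 1 with two online servers each.
sites, caps = [-1.0, 1.0], [2, 2]
d = lambda a, b: abs(a - b)
print(greedy([0.2, 0.3, 0.9, -2.0], sites, caps, d))
```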
Theorem 2 The halfopt-competitive ratio of Greedy for online transportation
with two online servers per site is at most 2 log m.
In order to prove this theorem, we will divide the response graph G into
edge disjoint rooted trees, T l . For each such tree, we will establish the
competitive bound independently.
Our construction yields trees (T j 's) that satisfy the following tree invariants:
1. Each nonleaf server site s i in T j has two incident online edges in T j .
2. Each leaf of T j is a server site that had an unused server at the time of
each request in T j .
Using the following iterative procedure to construct the trees.
Tree Construction Procedure: Assume that trees T
constructed. We explain how to construct T j in our next iteration. During this
construction we will modify G. The root of T j is the most latest request r ff(j)
not included in a previous tree T . The online edge incident to r ff(j)
is removed from G. Let L be the collection of server vertices s i such that s i is
reachable from r ff(j) , and s i currently has at most one incident online edge in G.
Note that an s i 2 L might have originally had two incident online edges if one
or both of these edge lead to the root of one of the trees T
T j be the edges on paths in G from r ff(j) to the server vertices in L. Note that
by lemma 1 there is a unique path from r ff(j) to each vertex in L. It is not hard
to see that T j satisfies the tree invariants. The edges and request vertices in
are then removed from G, and we proceed to our next iteration to construct
contains edges.
We now fix a particular tree, say T^j, and for simplicity drop the j
superscript.
Lemma 3 For each request r
Proof: The proof is by induction on the number k of request nodes in the induced
tree T (r i ). If then the child s i of r i is a leaf, and T (r i ) consists of one
adversary edge (r
by the definition of Greedy.
Now suppose k ? 1. If s i is not a leaf then it has two children in T (r i ),
say r a and r b (see figure 1). By induction, it must be the case that ld(r a
OPT (T (r a )), d(s
OPT (T (r b )). Therefore by induction and the triangle inequality,
The fact that d(s since Greedy would
only assign s oe(i) to r i if d(s
Lemma 4 Let r i be a request in T , and let k be the number of request vertices in
Proof: The proof is by induction on k. Observe that according to our tree
construction, each server node has either two children or none. Therefore, k
must be odd.
For
by the definition of Greedy.
Now consider the case 3. Let the two children of s i in T (r i ) be r a and
r b . Note that
ON (T (r
and that
By the definition of Greedy, d(s
lemma 3, d(s Hence, by substitution,
ON (T (r i
Now consider the case k ? 3. Once again let the two children of s i in T (r i )
be r a and r b . Notice that ON (T (r
By equation 1,
Hence,
ON (T (r i
loss of generality, that OPT (T (r a
y be the number of request nodes in T (r a ). We now break the proof into cases.
In the first case, we assume that both T (r a ) and T (r b ) consist of more than
one request vertex. Hence, 3 - y 4. By induction ON (T (r a
and ON (T (r b Hence, by substituting into equation
2 we get
ON (T (r i
in order to show ON (T (r i
OPT (T (r i )) it is sufficient to show
2x log y
In turn it is sufficient to show that
x log y
Let f(x; x be the left hand side of this
inequality. Notice that f(x; y) is linear in x. Hence the maximum of f(x; y)
must occur at the boundary, that is, at the point or the point
Inequality 3 follows immediately for we have to find the
value of y that maximizes f(w=2; 1). The
derivative of f(w=2; y) with respect to y is w
y
k\Gamma1\Gammay )). Hence, one can
see that the maximum f(w=2; y) occurs at 1)=2. We now must show
that
which by algebraic simplification is equivalent to log(k \Gamma 1) - log k.
We now consider the case that T (r a ) contains only one request. So
By induction ON (T (r a
by substituting into equation 2 we get
ON (T (r i
In order to show that ON (T (r i is sufficient to show
Since the left hand side is linear in x, we need only consider
The case immediately. If are left with showing
k. This is equivalent to log(2
log k, or
which one can verify holds for k - 3.
We now consider the case that T (r b ) contains only one request. So 2.
By induction ON (T (r a
by substituting into equation 2 we get
ON (T (r i
In order to show that ON (T (r i is sufficient to show
that
Since the left hand side is linear in x, one need only verify that the inequality
holds at the boundaries
Proof: (of Theorem 2.) Applying lemma 4 to each tree T i , we get the desired
result.
We now extend the result to the case that the online capacities are larger
than two. Recall that C = Σ_{i=1}^{m} c_i
is the total online capacity.
Theorem 5 The halfopt-competitive ratio of Greedy for online transportation is
O(min(m; log C)).
Proof: The upper bound of O(log C) is immediate by theorem 2 if we conceptually
split a server site with c i online servers into c i =2 sites with 2 arbitrary
online servers and 1 arbitrary adversary server.
To see the O(m) bound we need to be more careful about how we split the
server sites up. Assume that the tree construction procedure just constructed a
tree T k . We perform some pruning of sT k , if necessary, before we proceed to
construct T k+1 . If no root-to-leaf path in T k passes through two server sites that
are at the same location, then the number of vertices in T k is O(2 m ). Hence,
the O(m) bound follows from lemma 4.
If T k contains root-to-leaf paths that pass through two server sites that are
at the same location, we show how to modify T k to remove such paths. Assume
that T k contains a root-to-leaf path that first passes through s i and then passes
through s j , where s i and s j are at the same location. We modify T k by making
server vertex s j the child of r i in T k . See figure 2. Note that may remove edges
and vertices originally below s i from T k . We repeat this process until T k has no
root to leaf path passing through two server sites at the same location. Notice
that the resulting tree T k still satisfies the tree invariants. Now we start the
construction of T k+1 .
Figure
2: The original T k on the left, and the new T k on the right
We now prove an asymptotically matching lower bound for the halfopt-
competitive ratio for Greedy.
Theorem 6 The halfopt-competitive ratio of Greedy for the transportation problem
is Ω(min(m, log C)).
\Omega\Gamma/15 (m; log C)).
Proof: Assume without loss of generality that C). We embed m
server sites on the real line. The server site s_1 is located at the point −1. The
server site s_i (2 ≤ i ≤ m) is located at the point 2^{i−1} − 1. The online algorithm has
servers at s i , while the adversary has a servers at s i . Thus
the online algorithm has a total of servers, while the adversary has
a total of servers. The requests occur in m batches. The first batch
consists of 2^{m−1} requests at the point 0. The ith batch (2 ≤ i ≤ m) contains
requests that occur at 2^{i−1} − 1, the location of s_i. Greedy responds to
batch i (1 ≤ i ≤ m − 1) by answering each request in batch i with a server at site s_{i+1},
thus depleting site s i+1 . Greedy responds to the mth batch by moving one
server from s 1 . Thus the total online cost is m2 . By using the servers in s i
to handle batch i (1 - i - m) it is possible to obtain a total cost of 2
4 The Algorithm Balance
In this section we present an algorithm, Balance, with a halfopt-competitive
ratio of O(1).
Algorithm Balance: At each site s_h we classify half of the servers as primary
and half of the servers as secondary. Let c > 5/2 be some constant. Define
the pseudo-distance from a request r_i to a primary server at site s_j to be d(r_i, s_j),
and the pseudo-distance from r_i to a secondary server at site s_j to be c · d(r_i, s_j).
Balance services each request r_i with a server of minimal pseudo-distance
from r_i, breaking ties arbitrarily.
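A minimal sketch of Balance follows. The split into primary and secondary servers and the pseudo-distances are as described above; the particular constant c = 3 and all identifiers are example choices, and the sketch assumes enough total capacity for the requests.

```python
def balance(requests, sites, caps, dist, c=3.0):
    """The Balance algorithm: half the servers at each site are primary and
    half secondary; a secondary server at distance d has pseudo-distance c*d
    (c > 5/2), and each request goes to a server of minimal pseudo-distance."""
    primary = [cap // 2 for cap in caps]
    secondary = [cap - cap // 2 for cap in caps]
    assignment, total = [], 0.0
    for r in requests:
        best, best_j, best_kind = float("inf"), None, None
        for j in range(len(sites)):
            d = dist(r, sites[j])
            if primary[j] > 0 and d < best:
                best, best_j, best_kind = d, j, "primary"
            if secondary[j] > 0 and c * d < best:
                best, best_j, best_kind = c * d, j, "secondary"
        if best_kind == "primary":
            primary[best_j] -= 1
        else:
            secondary[best_j] -= 1
        assignment.append(best_j)
        total += dist(r, sites[best_j])
    return assignment, total

# Same toy instance as for Greedy above.
sites, caps = [-1.0, 1.0], [2, 2]
print(balance([0.2, 0.3, 0.9, -2.0], sites, caps, lambda a, b: abs(a - b)))
```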
Our goal is now to show that the halfopt-competitive ratio of Balance
for online transportation, with two online servers per site, is O(1). We first
break the response graph G into disjoint trees. Let C^1, ..., C^l be the connected
components of the response graph G. By Lemma 1 each connected component of
the response graph contains a unique cycle. Let r_{α(j)} be the most recent request
in the cycle in C^j. Let T^j be the tree that is C^j minus the online edge incident
to r_{α(j)}, and we set the root of T^j to be r_{α(j)}. Each such tree T^j satisfies the
following two tree invariants:
1. Each nonleaf server site s_i with an incident online edge in T^j had its secondary
server available just before the time of each request in T^j.
2. Each leaf of T^j is a server site s_i that had both of its servers available just
before the time of each request in T^j.
We now fix a particular tree, say T^j, and for simplicity drop the superscript
j. In order to show that the halfopt-competitive ratio of Balance is
O(1) it is sufficient to show that ON(T) = O(OPT(T)).
Definition 7 Let s i be a generic server site in T . We say that the primary server
child s_a of s_i is the server site that the adversary used to service the request serviced
by s_i^1, and the secondary server child s_b of s_i is the server site that the adversary
used to service the request serviced by s_i^2. The server parent s_p of s_i is the server
site used by Balance to service r i . The site s i is a double if it has two server
children, and otherwise s i is a single.
Lemma 8 If Balance uses a server at site s p to handle a request r
and
Proof: Observe that there is a path in T from r i to some leaf s k with total
length at most d(r
Balance didn't use s k to service r i . If s
by the triangle inequality. Now
Lemma 9 Assume that Balance uses a secondary server s 2
p to handle a request
that is not the root of T . Then
and
Proof: Observe that there is a path in T from r i to some leaf s k with total
length at most d(r
Balance didn't use the primary server at s k to service r i . Now
Fact 11 For all nonnegative reals x and y, and for all c ? 1,
Proof: Suppose 2x - (1 + 1=c)y. It suffices to show that min(2x; (1
shows that (1
On the other hand, suppose 1=c)y. It suffices to show that
c+1 x). Simple algebra shows that
Lemma 12 Assume that s i is a double server site with server children s a and s b .
Then
Proof: Note that ld(s i by lemma 8, and that ld(s i
9. The lemma then follows by fact 11.
local if either s i is a single, or one of s i 's server
children is a single. Otherwise, s i is global. For convenience, we call a request r i
local if s i is local. Otherwise r i is global. For a server site s i we use z(s p ) to denote
We now break the accounting of the online edges in T into cases. We first
show that for every local server s i , the cost for Balance to serve r i is O(z(s p )).
local r i
Further for each parent server s p of a local server s i
Proof: We break the proof into two cases. In the first case assume that s i is a
single. It must be the case that d(s Balance didn't use
a server at s i to handle r i . Hence,
single local s i
single local s i
Further
In the second case assume that s i is double, and one of s i 's server children
is a single. For simplicity assume that s a is a single; one can verify that the
following argument also holds if s b is a single. Since s a is a single we can apply
the analysis from the previous case to get that d(s
Balance serviced r i with a server at s p instead of the unused server at s a we
get that
Hence, by substitution,
Thus X
double local s i
double local s i
double local s i
Furthermore,
The result then follows.
Now we consider the online cost of servicing the global r i 's. Observe that by
lemma 8 it is the case that
global r i 2T
global r i 2T
global s i 2T
We now will show that
global s i 2T
Definition 15 Let s i be a global server site in T . We define S(s i ) to be the set of
global server sites in the subtree of T rooted at s i with the property that, for any
server site s j 2 S(s i ), there are no global server sites in the unique path from s i to
s_j in T. We define LC(s_i) to be the sum of the costs of the offline edges (r_j, s_j) in
the subtree rooted at s_i with the property that the unique path from s_i to r_j in T
does not pass through a global server site.
To give an alternative explanation of LC(s i ) consider pruning T at the global
server vertices, which results in a collection of trees rooted at global server
vertices. Then LC(s i ) is the total cost of the offline edges in the tree rooted at
Lemma 16 For any global server site s i ,
Proof: If the server children s a and s b of s i are both global then
Hence, the result follows in this case since fl -
Now assume that at least one server child, say s a , of s i is local. Note that s a
and s b must be a double, since s i is global. Further assume for the moment that
s b of s i is global. Let s c and s d be the two server children of s a . Then using
equation 5 we get that
In this case we continue to expand ld(s c ) and ld(s d ) using the general expansion
method described below. Now consider the case that both s a and s b are local.
Let s e and s f be the server children of s b . Then using equation 5 we get that
In this case we continue to expand ld(s c ), ld(s d ), ld(s e ) and ld(s f ) using the
general expansion method described below.
We now describe the general expansion method. We expand each ld(s j )
using equations 4 and 6. A term of the form ld(s j ) is expanded according to the
following rules:
1. If s j is a leaf in T then ld(s j ) is set to 0.
2. If s j has a local server child s k , then ld(s j ) is set to
which is valid by equation 6. Observe that since s k is local it will be
subsequently expanded.
3. If none of s j 's server children are local then ld(s j ) is set to 2(d(r
is an arbitrary server child of s k . This is valid by equation
4. Note that in this case the term ld(s k ) is not expanded again since
The that appear in this general expansion process are all included
in LC(s i ). Hence, we get
for some ff and fi. Since each offline edge appears in this general expansion at
most twice, and in each case the coefficient is at most 1), we can conclude
that ff - 1). Note that in rule 2, the coefficient in front of the ld(s j ) term
before the expansion is the same as the coefficient in front of the ld(s k ) term
after expansion. Since the coefficient in front of each ld term is fl 2 before the
application of any of the general expansion rules, and the only way that it can
change is by application of rule 3, which is a terminal expansion, each coefficient
on a ld(s j ) term when general expansion terminates is at most 2fl 2 . The result
then follows.
Lemma 17
global s i 2T
Proof: First observe that if s i and s j are two different global server sites, then
there is no common offline cost in the sums LC(s i ) and LC(s j ). As a consequence
we get X
global s i 2T
Therefore, it suffices to show that
global s i 2T
global s i 2T
Applying lemma 16 we have that
global s i 2T
global s i 2T
global s i 2T
we get that
global s i 2T
global s i 2T
Now substituting back, we get
global s i 2T
global s i 2T
The result then follows.
Theorem 18 The halfopt-competitive ratio of Balance, with two online servers
per site, in the online transportation problem is
Proof: The total online cost is (c 2 +c+1)OPT (T ) from lemma 14, plus OPT (T ),
plus
We now assume an arbitrary number of online servers per site.
Theorem 19 The halfopt-competitive ratio of Greedy for the online transportation
problem is
Proof: This follows immediately by conceptually splitting each server site s i into
sites.
5 Conclusion
The most obvious avenue for further investigation is to determine the competitive
ratio in the weakened adversary model when the adversary's capacity is more
than half of the online capacity. It seems that some new techniques will be
needed in this case since the response graph no longer has the treelike property
from lemma 1 that was so critical in our proofs.
--R
"Online weighted matching"
Algorithms for Network Programming
"On-line algorithms for weighted matchings and stable marriages"
"Beyond competitive analysis"
Networks and Matroids
"Amortized efficiency of list update and paging rules"
"Average performance of a greedy algorithm for the on-line minimum matching problem on Euclidean space"
"The k-server dual and loose competitiveness for paging,"
--TR
--CTR
Adam Meyerson , Akash Nanavati , Laura Poplawski, Randomized online algorithms for minimum metric bipartite matching, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.954-959, January 22-26, 2006, Miami, Florida
Wun-Tat Chan , Tak-Wah Lam , Hing-Fung Ting , Wai-Ha Wong, A unified analysis of hot video schedulers, Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada | matching;online algorithms;competitive analysis |
350710 | Superlinear Convergence of an Interior-Point Method Despite Dependent Constraints. | We show that an interior-point method for monotone variational inequalities exhibits superlinear convergence provided that all the standard assumptions hold except for the well-known assumption that the Jacobian of the active constraints has full rank at the solution. We show that superlinear convergence occurs even when the constant-rank condition on the Jacobian assumed in an earlier work does not hold. | Introduction
We consider the following monotone variational inequality
over a closed convex set C ⊂ R^n:
Find z* ∈ C such that ⟨Φ(z*), z - z*⟩ ≥ 0 for all z ∈ C,
(1)
and the set C is defined by the following algebraic inequality:
C = {z ∈ R^n : g(z) ≤ 0}, where g : R^n → R^m. The mapping Φ is assumed to be C^1 (continuously differentiable)
and monotone; that is, ⟨Φ(z_1) - Φ(z_2), z_1 - z_2⟩ ≥ 0 for all z_1, z_2 ∈ R^n,
while each component function g_i(·) of g(·) is convex and twice continuously differentiable.
By introducing g(\Delta) explicitly into the problem (1), we obtain the following mixed
nonlinear complementarity (NCP) problem: Find the vector triple (z; -; y) 2 IR n+2m
such that
y
\Gammag(z)
(2)
is the C 1 function defined by
It is well known [3] that, under suitable conditions on g such as the Slater constraint
qualification, z solves (1) if and only if there exists a multiplier - such that (z; -)
solves (2).
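For orientation, one standard way of writing the system (2) and the map in (3), which we assume here rather than quote (the arrangement of the original displays may differ), is

\begin{aligned}
 f(z,\lambda) &:= \Phi(z) + Dg(z)^{T}\lambda = 0, \\
 y &= -g(z), \\
 (\lambda, y) &\ge 0, \qquad \lambda^{T} y = 0,
\end{aligned}

so that a triple (z, λ, y) solving this mixed complementarity system corresponds to a KKT pair for (1).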
To show superlinear (local) convergence in methods for nonlinear programs, one
usually makes several assumptions with regard to the solution point. Until recently,
these assumptions included (local) uniqueness of the solution (z; -; y). This uniqueness
condition was relaxed somewhat in [6] to allow for several multipliers - corresponding
Department of Mathematics, The University of Melbourne, Parkville, Victoria 3052, Australia.
The work of this author was supported by the Australian Research Council.
y Mathematics and Computer Science Division, Argonne National Laboratory, 9700 South Cass
Avenue, Argonne, Illinois 60439, U.S.A. This work was supported by the Mathematical, Information,
and Computational Sciences Division subprogram of the Office of Computational and Technology
Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.
to a locally unique solution z of (1), by introducing a constant rank condition on the
gradients of the constraints g i that are active at z . The point of this article is to show
that superlinear convergence holds in the previous setting [6] even when the constant
rank condition does not hold. This result lends theoretical support to our numerical
observations [6, Section 7]. Moreover, we believe that the superlinear convergence
result can be shown for other interior-point methods whose search directions are
asymptotically the same as the pure Newton (affine-scaling) direction defined below
(6).
Briefly stated, the assumptions we make to obtain the superlinear results are as
follows: monotonicity and differentiability of the mapping from (z; -) to (f(z; -); \Gammag(z)),
such that the partial derivative with respect to z is Lipschitz near z ; a positive definiteness
condition to ensure invertibility of the linear system that is solved at each iteration
of the interior-point method; the Slater constraint qualification on g; existence
of a strictly complementary solution; and a second-order condition that guarantees
local uniqueness of the solution z of (1). A formal statement of these assumptions
and further details are given in Section 2.2. Superlinear convergence has been proved
for other methods for nonlinear programming without the strict complementarity as-
sumption, but these results typically require the Jacobian of active constraints to have
full rank (see Pang [5], Bonnans [1], and Facchinei, Fischer, and Kanzow [2]).
Possibly the best known application of (1) is the convex programming problem
defined by
min_z φ(z) subject to z ∈ C, (4)
where Φ = ∇φ. It is easy to show that the
formulation (2),(3) is equivalent to the standard Karush-Kuhn-Tucker (KKT)
conditions for (4). If a constraint qualification holds, then solutions of (4) correspond,
via Lagrange multipliers, to solutions of (2)-(3) and, in addition, solutions of (1) and
coincide.
We consider the solution of (1) by the interior-point algorithm of Ralph and
Wright [6], which is in turn a natural extension of the safe-step/fast-step algorithm
of Wright [7] for monotone linear complementarity problems. The algorithm is based
on a restatement of the problem (2) as a set of constrained nonlinear equations, as
\Gamma\LambdaY e5 =4 r f (z; -)
r g (z; y)
where the residuals r f and r g are defined in an obvious way. All iterates (z
satisfy the positivity conditions strictly; that is, (-
The interior-point algorithm can be viewed as a modified Newton's method applied to
the equality conditions in (5), in which search directions and step lengths are chosen
to maintain the positivity condition on (-; y). Near a solution, the algorithm takes
steps along the pure Newton direction defined by4 D z f Dg T 0
\Delta-
r g (z; y)
The solution (\Deltaz; \Delta-; \Deltay) of this system is also known as the affine-scaling direction.
The duality measure μ, defined by μ = λ^T y / m,
is used frequently in our analysis as a measure of nonoptimality and infeasibility.
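As an illustration of the main linear-algebra step, the numpy sketch below assembles and solves a system with the block structure commonly used for (6); the ordering, the sign conventions, and all identifiers are our assumptions rather than the paper's (r_f has length n, r_g has length m).

import numpy as np

def affine_scaling_step(Dzf, Dg, r_f, r_g, lam, y):
    # Assumed block layout of a Newton (affine-scaling) system like (6):
    #   [ D_z f   Dg^T   0   ] [dz  ]     [ r_f         ]
    #   [ Dg      0      I   ] [dlam] = - [ r_g         ]
    #   [ 0       Y      Lam ] [dy  ]     [ Lam @ Y @ e ]
    # where Y = diag(y) and Lam = diag(lam).
    n, m = Dzf.shape[0], Dg.shape[0]
    K = np.zeros((n + 2 * m, n + 2 * m))
    K[:n, :n] = Dzf
    K[:n, n:n + m] = Dg.T
    K[n:n + m, :n] = Dg
    K[n:n + m, n + m:] = np.eye(m)
    K[n + m:, n:n + m] = np.diag(y)      # Y
    K[n + m:, n + m:] = np.diag(lam)     # Lambda
    rhs = -np.concatenate([r_f, r_g, lam * y])
    d = np.linalg.solve(K, rhs)
    mu = lam @ y / m                     # duality measure used as the progress indicator
    return d[:n], d[n:n + m], d[n + m:], mu

In the centered system (11), the last block of the right-hand side would additionally carry the σ̃μ term.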
To extend the superlinear convergence result of [6] without a constant rank condition
on the active constraint Jacobian, we show that the affine-scaling step defined
by (6) has size O(-). Hence, the superlinearity result can be extended to most algorithms
that take near-unit steps along directions that are asymptotically the same as
the affine-scaling direction.
Since we are extending our work in [6], much of the analysis in that earlier paper
carries over without modification to the present
case, and we omit many of the details here. We focus instead on the main technical
result needed to prove fast local convergence, namely the estimate ‖(Δz, Δλ, Δy)‖ = O(μ) for
the affine-scaling step, and restate just enough of the earlier material to make the
current note self-contained.
2. The Algorithm. In this section, we review the notation, assumptions, and
the statement of the algorithm from Ralph and Wright [6]. We also state the main
global and superlinear convergence results, which differ from the corresponding theorems
in [6] only in the absence of the constant rank assumption.
2.1. Notation and Terminology. We use S to denote the solution set for (2),
and S z;- to denote its projection onto its first
For a particular z to be defined in Assumption 4, we define
We can partition f1; into basic and nonbasic index sets B and N such
that for all solutions (z
The solution (z strictly complementary if -
for all
We use -N and -B to denote the subvectors of - that correspond to the index sets
N and B, respectively. Similarly, we use DgB (z) to denote the jBj \Theta n row submatrix
of Dg(z) corresponding to B.
Finally, if we do not specify the arguments for functions g, Dg, f , and so on, they
are understood to be the appropriate components of the current point (z; -; y). The
notation Dg refers to Dg(z ).
2.2. Assumptions. Here we give a formal statement of the assumptions needed
for global and superlinear convergence. Some motivation is given here, but we refer
the reader to the earlier paper [6] for further details.
The first assumption ensures that the mapping f defined by (3) is monotone with
respect to z and therefore that the mapping (z; -) ! (f(z; -); \Gammag(z)) is monotone.
Assumption 1. \Phi : and each component function
The second assumption requires positive definiteness of a certain matrix projec-
tion, to ensure that the coefficient matrix of the Newton-like system to be solved for
each step in the interior-point algorithm is nonsingular (see (11)).
Assumption 2. The two-sided projection of the matrix
D z f(z;
onto ker Dg(z) is positive definite for all z 2 IR n and -
that is, for any basis
Z of ker Dg(z), the matrix Z T D z f(z; -)Z is invertible.
Note that this assumption is trivially satisfied when the nonnegativity condition
z - 0 is incorporated in the constraint function g(\Delta).
We assume, too, that the Slater condition holds for the constraint function g.
Assumption 3. There is a vector -
z 2 C such that g(-z) ! 0.
Next, we assume the existence (but not uniqueness) of a strictly complementary
solution.
Assumption 4. There is a strictly complementary solution (z
The strict complementarity condition is essential for superlinear convergence in a
number of contexts besides NCP and nonlinear programming. See, for example Wright
[8, Chapter 7] for an analysis of linear programming and Monteiro and Wright [4] for
asymptotic properties of interior-point methods for monotone linear complementarity
problems.
Next, we make a smoothness assumption on \Phi and g in the neighborhood of the
first component z of the strictly complementary solution from Assumption 4. (We
show in [6, Lemma 4.2] that, under this assumption, z is the first component of all
solutions.)
Assumption 5. The matrix-valued functions D\Phi and D 2 g i , are
Lipschitz continuous in a neighborhood of z .
Finally, we make an invertibility assumption on the projection of the Hessian
onto the kernel of the active constraint Jacobian. This assumption is essentially a
second-order sufficient condition for optimality.
Assumption 6. Let z be defined as in Assumption 4, and let B, S z;- and S
- be
defined as in Section 2. Then for each - 2 S
- , the two-sided projection of D z f(z ; -)
onto ker(Dg
In the statements of our results, we refer to a set of "standing assumptions,"
which we define as follows:
Standing Assumptions: Assumptions 1-6, together with an assumption
that the algorithm of Ralph and Wright [6] applied to the problem
(2) generates an infinite sequence f(z with a limit
point.
Along with Assumptions 1-6, the superlinear convergence result in Ralph and Wright
[6] requires a constant rank constraint qualification to hold. To be specific, the analysis
of that paper requires the existence of an open neighborhood U of z such that for
all matrix sequences fH k g ae fDgB (z) T
index sets J ae we have that
However, in the analysis of [6], this assumption is not invoked until Section 5.4, so we
are justified in reusing many results from earlier sections of that paper here. Indeed,
we also reuse results from later sections of [6] by applying them to constant matrices
(which certainly satisfy the constant rank condition).
The algorithm makes use of a family of neighborhoods Ω(γ, β), defined for positive parameters γ
and β as follows:
In particular, the kth iterate (z^k, λ^k, y^k) belongs to Ω(γ_k, β_k); the algorithm
chooses the sequences {γ_k} and {β_k} to satisfy
Given the notation
it is easy to see that
Since all iterates (z^k, λ^k, y^k) belong to Ω(γ_k, β_k), and since the residual norms ‖r_f‖ and ‖r_g‖
are bounded in terms of μ for vectors in this set, we are justified in using μ alone as
an indicator of progress, rather than a merit function that also takes account of the
residual norms.
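For concreteness, the neighborhoods above can be taken to have the usual infeasible-interior-point form; the specific inequalities below are an assumption of ours, chosen only to be consistent with the remark that the residual norms are bounded in terms of μ on these sets (the norm and constants used in [6] may differ):

\Omega(\gamma,\beta) \;=\; \bigl\{ (z,\lambda,y) : (\lambda,y) > 0,\;\; \lambda_i y_i \ge \gamma\,\mu \ \text{for all } i,\;\; \|(r_f(z,\lambda),\, r_g(z,y))\| \le \beta\,\mu \bigr\}.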
We assume that the sequence of iterates has a limit point, which we denote by (z*, λ*, y*).
By [6, Theorem 3.2], we have that (z*, λ*, y*) ∈ S. We are particularly interested in
points in Ω that lie close to this limit point, so we define the near-solution neighborhood
S(δ) by
2.3. The Algorithm. The major computational operation in the algorithm is
the repeated solution of 2m-dimensional linear systems of the form4 D z f Dg T 0
\Delta-
r g (z; y)
where the centering parameter σ̃ lies in the range [0, 1/2]. These equations are simply
the Newton equations for the nonlinear system of equality conditions from (2), except
for the ~ oe term. The algorithm searches along the direction (\Deltaz; \Delta-; \Deltay) obtained
from (11).
At each iteration, the algorithm performs a fast step along a direction obtained by
solving (6) (or, equivalently, (11) with σ̃ = 0). We choose the neighborhood Ω_{k+1} to
be strictly larger than Ω_k (by appropriate choice of γ_{k+1} and β_{k+1}), thereby allowing
a nontrivial step α_k to be taken along this direction without leaving Ω_{k+1}. If the fast
step achieves at least a certain fixed decrease in μ, it is accepted as the new iterate.
Otherwise, we reset Ω_{k+1} ← Ω_k and define a safe step by solving (11) with σ̃ chosen in
the range [σ̄, 1) for some constant σ̄ ∈ (0, 1). We perform a backtracking line search
along this direction, stopping when we identify a value of α_k that achieves a "sufficient
decrease" in μ without leaving the set Ω_{k+1}.
The algorithm is parametrized by the following quantities whose roles are explained
more fully in [6].
where exp(·) is the exponential function. The constants β_min and γ_max are related to
the starting point (z^0, λ^0, y^0).
The main algorithm is as follows.
terminate with solution (z
else
Although we may calculate both a fast step and a safe step in the same iteration,
the coefficient matrix in (11) is the same for both steps, so the coefficient matrix is
factored only once.
The safe-step procedure is defined as follows.
choose ~ oe 2 [-oe; 1], ff
solve (11) to find (\Deltaz; \Delta-; \Deltay);
choose ff to be the first element in the sequence ff
such that the following conditions are satisfied:
return (z(ff); -(ff); y(ff)).
The fast step routine is described next.
solve (11) with ~
to find (\Deltaz; \Delta-; \Deltay);
set ~
define
choose ff to be the first element in the sequence ff
such that the following conditions are satisfied:
return (z(ff); -(ff); y(ff)).
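A compact way to express the control flow described above (attempt a fast step in the enlarged neighborhood, otherwise take a centered safe step with a backtracking line search) is the following Python sketch; every callable, constant, and attribute here is a placeholder of ours, not an interface defined by the paper.

def safe_fast_iteration(state, solve_system, move, in_neighborhood, mu,
                        sigma_safe=0.5, kappa=0.5, rho=0.5, max_backtracks=60):
    # Structural sketch only; mirrors the verbal description given in the text.
    # Fast step: affine-scaling direction (centering parameter 0) in the enlarged neighborhood.
    d = solve_system(state, sigma=0.0)
    alpha, trial = 1.0, None
    for _ in range(max_backtracks):
        cand = move(state, d, alpha)
        if in_neighborhood(cand, enlarged=True):
            trial = cand
            break
        alpha *= rho
    if trial is not None and mu(trial) <= kappa * mu(state):
        return trial                                   # fast step gives the required decrease
    # Safe step: centered direction with a backtracking line search in the current neighborhood.
    d = solve_system(state, sigma=sigma_safe)          # sigma in [sigma_bar, 1)
    alpha = 1.0
    for _ in range(max_backtracks):
        cand = move(state, d, alpha)
        if in_neighborhood(cand, enlarged=False) and mu(cand) <= (1.0 - 0.01 * alpha) * mu(state):
            return cand
        alpha *= rho
    return state                                       # not reached under the paper's assumptions

As noted in the text, both directions come from linear systems with the same coefficient matrix, so a single factorization can serve both calls to solve_system.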
2.4. Convergence of the Algorithm. The algorithm converges globally according
to the following theorem.
Theorem 2.1. (Ralph and Wright [6, Theorem 3.2]) Suppose that Assumptions 1
and 2 hold. Then either
limit points of f(z belong to S.
Here, however, our focus is on the following local superlinear convergence theorem.
It is simply a restatement of [6, Theorem 3.3] without the constant rank condition on
the active constraint Jacobian matrix [6, Assumption 7].
Theorem 2.2. Suppose that Assumptions 1, 2, 3, 4, 5, and 6 are satisfied and
that the sequence f(z is infinite, with a limit point (z ; -; y S. Then the
algorithm eventually always takes fast steps, and
(i) the sequence f- k g converges superlinearly to zero with Q-order at least 1
and
(ii) the sequence f(z converges superlinearly to (z ; -; y ) with R-order
at least 1
- .
The proof of this result follows that of the earlier paper in all respects except for
the estimate
for the affine-scaling step calculated from (6). The remainder of this section is devoted
to proving that this estimate holds under the given assumptions.
3. An O(μ) Estimate for the Affine-Scaling Step. Our strategy for proving
the estimate (12) for the step (6) is based on a partitioning of the right-hand side in
(6). The following vectors are useful in defining the partition.
(13a)
(13c)
(13d)
where z is defined in Assumption 4 and (z ; -) is the projection of the current point
(z; -) onto the set S z;- of (z; -) solution components. The right-hand side of (6) can
be partitioned as4 r f
r g
\Gamma\LambdaY e5 =4 j f
\Gamma\LambdaY e5 +4 ffl f
We define a corresponding splitting of the affine-scaling step:
the following linear systems:4 D z f (Dg) T 0
We define a third variant on (6) as follows:4 D z f (Dg
\Gamma(Dg
c
\Deltaz
c
\Delta-
\Deltay7 5 =4 -
and split the step ( c
\Deltaz; c
\Delta-; c
\Deltay) as
\Deltaz; c
\Delta-; c
t; ~ u; ~ v)
v) and ( ~ t
\Gamma(Dg
~ u
\Gamma(Dg
~
Because of Assumption 2, the matrices in (15), (16), (17), (19), and (20) are all
invertible, so all these systems have unique solutions.
Our basic strategy for proving the estimate (12) is as follows. From [6, Section 5.3],
we have without assuming the constant rank condition that
positive constant. The constant rank assumption
is, however, needed in [6] to prove that the other step component (t; u; v) is also O(-).
In this article, we obtain the same estimate without the constant rank assumption, by
proving that
\Deltaz; c
\Delta-; c
\Deltay
for all (z; -; y) 2 S(ffi).
Our first result, proved in the earlier paper [6], collects some bounds that are
useful throughout this section.
Lemma 3.1. [6, Lemma 5.1] Suppose that the standing assumptions hold. Then
there is a constant C 1 such that the following bounds hold for all (z; -; y) 2 S(1):
(22a)
(22c)
Lemma 3.1 implies that the limit point (z ; -
defined in (9) has
The second result is as follows.
Lemma 3.2. (cf. [6, Lemma 5.2]) Suppose that the standing assumptions are
satisfied. Then there are constants - such that for all (z; -; y) 2
\Deltaz; c
\Delta-; c
\Deltay) of (17) and ( ~ t; ~ u; ~ v) of (19) satisfy
and
respectively.
Proof. We claim first that the right-hand-side components of (17) and (19) are
O(-). From [6, Equation (78)], we have that
for some positive constants C 2;1 and 1). By Lipschitz continuity (Assump-
tion 5), the definitions (8) and (10), and the fact that f(z ; -
-) is
defined in (13), there are constants
where L denotes the Lipschitz constant of Assumption 5. (The radius ffi 3 is chosen so
that lies inside the neighborhood of Assumption 5.) For the second right-hand
side component, we have simply that
after a possible adjustment of C 2;2 . For the remaining right-hand-side component in
(17), we have trivially that
Consider now the system (17). As in the proof of [6, Lemma 5.2], we have that
eliminating the c
\Deltay component we obtain
\Deltaz
c
\Delta-
We reduce the system further by eliminating the vector c
\Delta- N to obtain
(D z f)
(Dg
\Gamma(Dg
\Deltaz
c
One can easily see that the right-hand-side vector in this expression is O(-), because
of (27), (28) and the bounds (22), which imply that y B , -N , and
N are all O(-)
for (z; -; y) 2
By using the estimate \Gamma1
3.1 again) and recalling the
notation for the limit point of the sequence, we have for (z; -; y) 2 S(ffi 3 ) that
D z f(z ; -) (Dg
\Gamma(Dg
\Deltaz
c
f
O(-k c
O(-k c
denotes the right-hand-side vector in (29). By partitioning c
\Deltaz into
its components in ker Dg
B and ran (Dg
we have from Assumption 6 that c
\Deltaz is
bounded in norm by the size of the right-hand side in (30). Hence, there is a constant
C 2;3 such that
for all (z; -; y) 2 S(ffi 3 ). By choosing -
enough that C 2;3 - :5 for
the result (24) follows from some simple manipulation of the inequality
above.
The proof of (25) is similar.
The next result and others following make use of the positive diagonal matrix D
defined by
From Lemma 3.1, there is a constant C 3 such that
for all (z; -; y) 2 S(1).
Lemma 3.3. (cf. [6, Lemma 5.3]) Suppose that the standing assumptions are
satisfied. Then for -
defined in Lemma 3.2, there is C 4 ? 0 such that
for all (z; -; y) 2 S( -
Proof. First, let -
ffi be defined in Lemma 3.2, but adjusted if necessary to ensure
that
The proof closely follows that of [6, Lemma 5.3], but we spell out the details here
because the analytical techniques are also needed in a later result (Theorem 3.8).
Recall the splitting (18) of the step ( c
\Deltaz; c
\Delta-; c
\Deltay) into components ( ~ t; ~
defined by (19) and (20), respectively. By multiplying the last block row in
(20) by \Gamma1=2 Y \Gamma1=2 and using (31), we find that
e:
Using (20) again, we obtain
(D z f) ~
since D z f is positive semidefinite by Assumption 1. Hence, by taking inner products
of both sides in (35), we obtain
and therefore
For ( ~ t; ~ u; ~ v), the third block row in (19) implies that Therefore,
we have
\Gamma(Dg ) ~
(D z f) ~
where again we have used monotonicity of D z f . Define the constant C 4;1 as
From (25), (27), (32), and (34), we have for (z; -; y) 2 S( - ffi ) that
From (28) and (32), we have
By substituting the last two bounds into (37), we obtain
It follows from this inequality by a standard argument that
for some constant C 4;2 depending only on C 4;1 , and -
. By combining this bound with
(18) and (36), we obtain
\Delta-k
and the first part of (33) follows if we define C
the second part of (33) follows likewise.
Bounds on some of the components of c
\Delta- and c
\Deltay follow easily from Lemma 3.3.
Theorem 3.4. (cf. [6, Theorem 5.4]) Suppose that the standing assumptions are
satisfied. Then there are positive constants - ffi and C 5 such that
\Delta-
\Deltay
Proof. Let - ffi be as defined in Lemma 3.3. From the definition (31) and the bounds
(33), we have
c
for any i 2 N . Hence, by using (22), we obtain
min
which proves that k c
\Delta- for an obvious choice of C 5 . The bound on c
\Deltay B is
derived similarly.
Lemma 3.5. (cf. [6, Lemma 5.10]) Let ; 6= J ae B and ; 6= K ae N . If the
two-sided projection of D z f(z; -) onto ker Dg
B is positive definite, then for t 2 IR n
and -J 2 IR jJ j , we have that
(D z f) (Dg
\GammaDg
if and only if
In addition, we have that
dimker
(D z f) (Dg
\GammaDg 0 \GammaI \DeltaK
Proof. This result differs from [6, Lemma 5.10] only in that z replaces z as the
argument of Dg(\Delta). The proof is essentially unchanged.
By Assumptions 5 and 6, the two-sided projection of D z f(z; -) onto the kernel of
(Dg
positive definite for all (z; -; y) sufficiently close to the limit point (z ; -; y )
defined in (9). It follows from Lemma 3.5 and (23) that the set
ae- D z f(z; -) (Dg
\Gamma(Dg
oe
has constant column rank for some -
Theorem 3.6. Suppose that the standing assumptions hold. Then there is a
positive constant ~
ffi such that for all (z; -; y) 2 S( ~ ffi ), we have that ( c
\Deltaz; c
\Deltay N ) is
the solution of the following convex quadratic program:
min
subject to
\Gamma(Dg
uB
\Delta- N
I \DeltaB
c
\Deltay B
Moreover, there is a constant C 6 such that
\Deltay
\Deltay B )k:
Proof. The value ~
ffi from (39) and -
ffi from Theorem 3.4, suffices
to prove this result. The technique of proof is by now familiar (it follows the proof of
[6, Theorem 5.12] closely), and we omit the details.
At this point, we have proved the first estimate in (21), as we summarize in the
following theorem.
Theorem 3.7. Suppose that the standing assumptions hold. Then there are
constants ~
7 such that for any (z; -; y) 2 S( ~ ffi ) we have
\Deltaz; c
\Delta-; c
Proof. Let ~
ffi be as defined in Theorem 3.6. From Theorem 3.4, (27), and (28), we
have for (z; -; y) 2 S( ~
\Deltay
Hence, from (41) we have also that
\Deltay
and it follows from (24) that k c
Our last result is concerned with the second estimate in (21) involving the relationship
between (t; u; v) and ( c
\Deltaz; c
\Delta-; c
\Deltay).
Theorem 3.8. Suppose that the standing assumptions hold. Then there are
positive constants ffi ? 0 and C 8 such that
\Deltay
for all (z; -; y) 2 S(ffi).
Proof. By taking differences of (15) and (17), we obtain4 D z f (Dg) T 0
c
c
c
\Deltay
\Delta-
\Deltaz
We have from (26), Lipschitz continuity of Dg(\Delta) (Assumption 5), and Theorem 3.7
that there is a radius ffi 4 2 (0; ~
\Delta-
\Deltaz3
The remainder of the proof follows that of [6, Lemma 5.7]. By applying the
technique used in Lemma 3.2 to the system (42), and using the estimate (43), we
have that there are constants
Next, we note that the technique used in the second half of the proof of Lemma 3.3
can be used to prove that there is
\Deltay
where D is the diagonal scaling matrix defined in (31). Modifications are needed only
to account for the different right-hand side estimate (43) and the different estimate
(44) of k c
we omit the details. From (32) and (45), it follows immediately that
\Deltay
The final estimate for ( c
obtained by substituting these expressions into (44).
Corollary 3.9. Suppose that the standing assumptions hold. Then there are
constants 9 such that the affine-scaling step defined by (6) satisfies
Proof. We have from Theorems 3.7 and 3.8 that (t; u; defined as in
Theorem 3.8. Moreover, it follows directly from [6, Section 5.3] that
possibly after some adjustment of ffi . Hence, the result follows from (14).
4. Conclusions. The result proved here explains the numerical experience reported
in Section 7 of Ralph and Wright [6], in which the convergence behavior of
our test problems seemed to be the same regardless of whether the active constraint
Jacobian satisfied the constant rank condition. We speculated in [6] about possible
relaxation of the constant rank condition and have verified in this article that, in fact,
this condition can be dispensed with altogether.
Our results are possibly the first proofs of superlinear convergence in nonlinear
programming without multiplier nondegeneracy or uniqueness.
--R
Local study of Newton type algorithms for constrained problems
On the accurate identification of active constraints
A survey of theory
Local convergence of interior-point algorithms for degenerate monotone LCP
Convergence of splitting and Newton methods for complementarity problems: An application of some sensitivity results
Superlinear convergence of an interior-point method for monotone variational inequalities
--TR
--CTR
Hiroshi Yamashita , Hiroshi Yabe, Quadratic Convergence of a Primal-Dual Interior Point Method for Degenerate Nonlinear Optimization Problems, Computational Optimization and Applications, v.31 n.2, p.123-143, June 2005
Luis N. Vicente , Stephen J. Wright, Local Convergence of a Primal-Dual Method for Degenerate Nonlinear Programming, Computational Optimization and Applications, v.22 n.3, p.311-328, September 2002
Huang , Defeng Sun , Gongyun Zhao, A Smoothing Newton-Type Algorithm of Stronger Convergence for the Quadratically Constrained Convex Quadratic Programming, Computational Optimization and Applications, v.35 n.2, p.199-237, October 2006 | interior-point method;superlinear convergence;monotone variational inequalities |
350715 | An Inexact Hybrid Generalized Proximal Point Algorithm and Some New Results on the Theory of Bregman Functions. | We present a new Bregman-function-based algorithm which is a modification of the generalized proximal point method for solving the variational inequality problem with a maximal monotone operator. The principal advantage of the presented algorithm is that it allows a more constructive error tolerance criterion in solving the proximal point subproblems. Furthermore, we eliminate the assumption of pseudomonotonicity which was, until now, standard in proving convergence for paramonotone operators. Thus we obtain a convergence result which is new even for exact generalized proximal point methods. Finally, we present some new results on the theory of Bregman functions. For example, we show that the standard assumption of convergence consistency is a consequence of the other properties of Bregman functions, and is therefore superfluous. | Introduction
In this paper, we are concerned with proximal point algorithms for solving
the variational inequality problem. Specifically, we consider the methods
which are based on Bregman distance regularization. Our objective is two-
fold. First of all, we develop a hybrid algorithm based on inexact solution of
proximal subproblems. The important new feature of the proposed method is
that the error tolerance criterion imposed on inexact subproblem solution is
constructive and easily implementable for a wide range of applications. Sec-
ond, we obtain a number of new results on the theory of Bregman functions
and on the convergence of related proximal point methods. In particular, we
show that one of the standard assumptions on the Bregman function (con-
vergence consistency), as well as one of the standard assumptions on the
operator defining the problem (pseudomonotonicity, in the paramonotone
operator case), are extraneous.
Given an operator T on R n (point-to-set, in general) and a closed convex
subset C of R n , the associated variational inequality problem [12], from now
on VIP(T, C), is to find a pair x* and v* such that
x* ∈ C, v* ∈ T(x*), and ⟨v*, y - x*⟩ ≥ 0 for all y ∈ C,
where ⟨·, ·⟩ stands for the usual inner product in R^n. The operator T : R^n → P(R^n), where P(R^n)
stands for the family of subsets of R^n, is monotone if
⟨u - v, x - y⟩ ≥ 0
for any x, y ∈ R^n and any u ∈ T(x), v ∈ T(y). T is maximal monotone if
it is monotone and its graph G(T) = {(x, v) ∈ R^n × R^n : v ∈ T(x)} is not properly
contained in the graph of any other monotone operator. Throughout this
paper we assume that T is maximal monotone.
It is well known that VIP(T; C) is closely related to the problem of finding
a zero of a maximal monotone operator -
Recall that we assume that T is maximal monotone. Therefore, (2) is a
particular case of VIP(T; C) for On the other hand, define NC as
the normal cone operator, that is NC
The operator T +NC is monotone and x solves VIP(T; C) (with some v 2
only if
Additionally, if the relative interiors of C and of the domain of T intersect,
then T +NC is maximal monotone [31], and the above inclusion is a particular
case of (2), i.e., the problem of finding a zero of a maximal monotone operator.
Hence, in this case, VIP(T; C) can be solved using the classical proximal
point method for finding a zero of the operator -
. The proximal
point method was introduced by Martinet [26] and further developed by
Rockafellar [34]. Some other relevant papers on this method, its applications
and modifications, are [27, 33, 3, 29, 25, 17, 18, 15]; see [24] for a survey.
The classical proximal point algorithm generates a sequence fx k g by solving
a sequence of proximal subproblems. The iterate x k+1 is the solution of
regularization parameter. For the method to be imple-
mentable, it is important to handle approximate solutions of subproblems.
This consideration gives rise to the inexact version of the method [34], which
can be written as
where e k+1 is the associated error term. To guarantee convergence, it is
typically assumed that (see, for example, [34, 8]) Σ_k ‖e^k‖ < ∞.
Note that even though the proximal subproblems are better conditioned than
the original problem, structurally they are as difficult to solve. This observation
motivates the development of the "nonlinear" or "generalized" proximal
point method [16, 13, 11, 19, 23, 22, 20, 6].
In the generalized proximal point method, x^{k+1} is obtained by solving the
generalized proximal point subproblem
0 ∈ c_k T(x) + ∇f(x) - ∇f(x^k).
The function f is the Bregman function [2], namely it is strictly convex, differentiable
in the interior of C and its gradient is divergent on the boundary
of C (f also has to satisfy some additional technical conditions, which we
shall discuss in Section 2). All information about the feasible set C is embedded
in the function f , which is both a regularization and a penalization term.
Properties of f (discussed in Section 2) ensure that solutions of subproblems
belong to the interior of C without any explicit consideration of constraints.
The advantage of the generalized proximal point method is that the subproblems
are essentially unconstrained. For example, if VIP(T; C) is the classical
nonlinear complementarity problem [28], then a reasonable choice of f gives
proximal subproblems which are (unconstrained!) systems of nonlinear equa-
tions. By contrast, subproblems given by the classical proximal algorithm
are themselves nonlinear complementarity problems, which are structurally
considerably more difficult to solve than systems of equations. We refer the
reader to [6] for a detailed example.
As in the case of the classical method, implementable versions of the
generalized proximal point algorithm must take into consideration inexact
solution of subproblems:
e^{k+1} ∈ c_k T(x^{k+1}) + ∇f(x^{k+1}) - ∇f(x^k),
In [14], it was established that if
Σ_k ‖e^k‖ < ∞ and Σ_k ⟨e^k, x^k⟩ exists and is finite, (3)
then the generated sequence converges to a solution (provided it exists) under
basically the same assumptions that are needed for the convergence of the
exact method. Other inexact generalized proximal algorithms are [7, 23,
41]. However, the approach of [14] is the simplest and the easiest to use in
practical computation (see the discussion in [14]). Still, the error criterion
given by (3) is not totally satisfactory. Obviously, there exist many error
sequences that satisfy the first relation in (3), and it is not very clear
which e k should be considered acceptable for each specific iteration k. In this
sense, criterion (3) is not quite constructive. The second relation in (3) is
even somewhat more problematic.
In this paper, we present a hybrid generalized proximal-based algorithm
which employs a more constructive error criterion than (3). Our method is
completely implementable when the gradient of f is easily invertible, which
is a common case for many important applications. The inexact solution is
used to obtain the new iterate in a way very similar to Bregman generalized
projections. When the error is zero, our algorithm coincides with the generalized
proximal point method. However, for nonzero error, it is different from
the inexact method of [14] described above. Our new method is motivated
by [40], where a constructive error tolerance was introduced for the classical
proximal point method. This approach has already proved to be very useful
in a number of applications [38, 37, 35, 39, 36].
Besides the algorithm, we also present a theoretical result which is new
even for exact methods. In particular, we prove convergence of the method
for paramonotone operators, without the previously used assumption of pseu-
domonotonicity (paramonotone operators were introduced in [4, 5], see also
[9, 21]; we shall state this definition in Section 3, together with the definition
of pseudomonotonicity). It is important to note that the subgradient of a
proper closed convex function is paramonotone, but need not be pseudomono-
tone. Hence, among other things, our result unifies the proof of convergence
for paramonotone operators and for minimization.
We also remove the condition of convergence consistency which has been
used to characterize Bregman functions, proving it to be a consequence of
the other properties.
This work is organized as follows. In Section 2, we discuss Bregman
functions and derive some new results on their properties. In Section 3, the
error tolerance to be used is formally defined, the new algorithm is described
and the convergence result is stated. Section 4 contains convergence analysis.
A few words about our notation are in order. Given a (convex) set A,
ri(A) will denote the relative interior, -
A will denote the closure, int(A) will
denote the interior, and bdry(A) will denote the boundary of A. For an
operator T, Dom(T) stands for its domain, i.e., all points x ∈ R^n such that T(x) ≠ ∅.
2 Bregman Function and Bregman Distance
Given a convex function f on R^n, finite at x, y ∈ R^n and differentiable at y,
the Bregman distance [2] between x and y, determined by f, is
D_f(x, y) = f(x) - f(y) - ⟨∇f(y), x - y⟩.
Note that, by the convexity of f, the Bregman distance is always nonnegative.
We mention here the recent article [1] as one good reference on Bregman
functions and their properties.
Definition 2.1 Given S, a convex open subset of R^n, we say that f
is a Bregman function with zone S if
1. f is strictly convex and continuous in S̄,
2. f is continuously differentiable in S,
3. for any x ∈ S̄ and α ∈ R, the right partial level set
{y ∈ S : D_f(x, y) ≤ α}
is bounded,
4. If {y^k} is a sequence in S converging to y, then
lim_{k→∞} D_f(y, y^k) = 0.
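A classical example satisfying this definition (used here only as an illustration; it is not drawn from the paper) is f(x) = Σ_j x_j log x_j with zone S = int(R^n_+), whose Bregman distance is the Kullback-Leibler divergence; ∇f(x) = 1 + log x blows up on the boundary, which is the divergence property exploited by boundary-coercive Bregman functions later in the paper. A minimal sketch:

import numpy as np

def bregman_distance_xlogx(x, y):
    # D_f(x, y) = sum_j [ x_j log(x_j / y_j) - x_j + y_j ]  for f(x) = sum_j x_j log x_j,
    # with the usual convention 0 log 0 = 0 (so x may lie on the boundary of the orthant).
    x, y = np.asarray(x, float), np.asarray(y, float)
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = np.where(x > 0, x * np.log(x / y), 0.0)
    return float(np.sum(terms - x + y))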
Some remarks are in order regarding this definition. In addition to the above
four items, there is one more standard requirement for Bregman function,
namely Convergence Consistency :
If {x^k} ⊂ S̄ is bounded, {y^k} ⊂ S converges to y, and
lim_{k→∞} D_f(x^k, y^k) = 0, then {x^k}
also converges to y.
This requirement has been imposed in all previous studies of Bregman functions
and related algorithms [10, 11, 13, 19, 9, 14, 1, 22, 20, 6]. In what
follows, we shall establish that convergence consistency holds automatically
as a consequence of Definition 2.1 (we shall actually prove a stronger result).
The original definition of a Bregman function also requires the left partial
level sets
to be bounded for any y 2 S. However, it has been already observed that this
condition is not needed to prove convergence of proximal methods (e.g., [14]).
And it is known that this boundedness condition is extraneous regardless,
since it is also a consequence of Definition 2.1 (e.g., see [1]). Indeed, observe
that for any y, the level set L_0(0, y) = {y}, so it is nonempty and bounded.
Also, Definition 2.1 implies that D_f(·, y) is a proper closed convex function.
Because this function has one level set which in nonempty and bounded, it
follows that all of its level sets are bounded (i.e., L 0 (ff; y) is bounded for every
ff) [32, Corollary 8.7.1].
To prove convergence consistency using the properties given in Definition
2.1, we start with the following results.
Lemma 2.2 (The Restricted Triangular Inequality)
Let f be a convex function satisfying items 1 and 2 of Definition 2.1. If x ∈ S̄, y ∈ S,
and w is a proper convex combination of x and y, i.e.,
w = θx + (1 - θ)y with θ ∈ (0, 1), then
Proof. We have that
Since rf is monotone,
Taking into account that w latter relation yields
Therefore
Lemma 2.3 Let f be a convex function satisfying items 1 and 2 of Definition
2.1. If fx k g is a sequence in -
S converging to x, fy k g is a sequence in S
converging to y and y 6= x, then
lim inf
Proof. Define
is a sequence in S converging to z
S. By the
convexity of f , it follows that for all k:
Therefore
Letting k !1 we obtain
Using the strict convexity of f and the hypothesis x 6= y, the desired result
follows.
We are now ready to prove a result which is actually stronger than the
property of convergence consistency discussed above. This result will be
crucial for strengthening convergence properties of proximal point methods,
carried out in this paper.
Theorem 2.4 Let f be a convex function satisfying items 1 and 2 of Definition
2.1. If fx k g is a sequence in -
fy k g is a sequence in S,
lim
and one of the sequences (fx k g or fy k g) converges, then the other also converges
to the same limit.
Proof. Suppose, by contradiction, that one of the sequences converges and
the other does not converge or does not converge to the same limit. Then
there exist some " ? 0 and a subsequence of indices fk j g satisfying
Suppose first that fy k g converges and
lim
i.e., ~ x j is a proper convex combination of x k j and y k j . Using Lemma 2.2 we
conclude that D f (~x which implies that
lim
fy k j g converges, it follows that f~x j g is bounded and
there exists a subsequence f~x j i g converging to some ~
x. Therefore we have
the following set of relations
which is in contradiction with Lemma 2.3.
If we assume that the sequence fx k g converges, then reversing the roles of
and fy k g in the argument above, we reach a contradiction with Lemma
2.3 in exactly the same manner.
It is easy to see that Convergence Consistency is an immediate consequence
of Theorem 2.4.
We next state a well-known result which is widely used in the analysis of
generalized proximal point methods.
Lemma 2.5 (Three-Point Lemma)[11]
Let f be a Bregman function with zone S as in Definition 2.1. For any x ∈ S̄ and y, z ∈ S, it
holds that
D_f(x, z) = D_f(x, y) + D_f(y, z) + ⟨∇f(y) - ∇f(z), x - y⟩.
In the sequel, we shall use the following consequence of Lemma 2.5, which can
be obtained by subtracting the three-point inequalities written with
and s; x; z.
Corollary 2.6 (Four-Point Lemma)
Let f be a Bregman function with zone S as in Definition 2.1. For any
holds that
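A quick numerical check of the three-point identity in the form given above, using the x log x example (the check itself is ours, not part of the paper):

import numpy as np

def f(x):      return float(np.sum(x * np.log(x)))
def gradf(x):  return 1.0 + np.log(x)
def D(x, y):   return f(x) - f(y) - gradf(y) @ (x - y)

rng = np.random.default_rng(0)
x, y, z = rng.uniform(0.1, 2.0, size=(3, 4))
lhs = D(x, z)
rhs = D(x, y) + D(y, z) + (gradf(y) - gradf(z)) @ (x - y)
print(abs(lhs - rhs))   # agrees to machine precision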
3 The Inexact Generalized Proximal Point
Method
We start with some assumptions which are standard in the study and development
of Bregman-function-based algorithms.
Suppose C, the feasible set of VIP(T; C), has nonempty interior, and we
have chosen f , an associated Bregman function with zone int(C). We also
assume that
so that T +N C is maximal monotone [31]. The solution set of VIP(T; C) is
We assume this set to be nonempty, since this is the more interesting case.
In principle, following standard analysis, results regarding unboundedness of
the iterates can be obtained for the case when no solution exists.
Additionally, we need the assumptions which guarantee that proximal
subproblem solutions exist and belong to the interior of C.
H1 For any x 2 int(C) and c ? 0, the generalized proximal subproblem
has a solution.
H2 For any x 2 int(C), if fy k g is a sequence in int(C) and
lim
then
lim
A simple sufficient condition for H1 is that the image of rf is the whole
space R n (see [6, Proposition 3]). Assumption H2 is called boundary coer-
civeness and it is the key concept in the context of proximal point methods
for constrained problems for the following reason. It is clear from Definition
2.1 that if f is a Bregman function with zone int(C) and P is any open sub-set
of int(C), then f is also a Bregman function with zone P , which means
that one cannot recover C from f . Therefore in order to use the Bregman
distance D f for penalization purposes, f has to possess an additional prop-
erty. In particular, f should contain information about C. This is precisely
the role of H2 because it implies divergence of rf on bdry(C), which makes
C defined by f :
Divergence of rf also implies that the proximal subproblems cannot have
solutions on the boundary of C. We refer the readers to [9, 6] for further
details on boundary coercive Bregman functions. Note also that boundary
coerciveness is equivalent to f being essentially smooth on int(C) [1, Theorem
4.5 (i)].
It is further worth to note that if the domain of rf is the interior of C,
and the image of rf is R n , then H1 and H2 hold automatically (see [6,
Proposition 3] and [9, Proposition 7]).
We are now ready to describe our error tolerance criterion. Take any x ∈ int(C) and c > 0, and
consider the proximal subproblem
0 ∈ cT(y) + ∇f(y) - ∇f(x), (5)
which is to find a pair (y, v) satisfying the proximal system
v ∈ T(y), cv + ∇f(y) - ∇f(x) = 0. (6)
The latter is in turn equivalent to
v ∈ T(y), ∇f(y) = ∇f(x) - cv. (7)
Therefore, an approximate solution of (5) (or (6) or (7)) should satisfy
We next formally define the concept of inexact solutions of (6), taking the
approach of (8).
Definition 3.1 Let x ∈ int(C), c > 0 and σ ∈ [0, 1). We say that a pair
(y, v) is an inexact solution with tolerance σ of the proximal subproblem (6)
if v ∈ T(y)
and z, the solution of the equation
∇f(z) = ∇f(x) - cv,
satisfies
D_f(y, z) ≤ σ D_f(y, x).
Note that from (4) (which is a consequence of H2), it follows that z ∈ int(C).
Note that equivalently z is given by z = (∇f)^{-1}(∇f(x) - cv).
Therefore z, and hence D f (y; z), are easily computable from x; y and v whenever
rf is explicitly invertible. In that case it is trivial to check whether a
given pair (y, v) is an admissible approximate solution in the sense of Definition
3.1: it is enough to obtain z and verify whether D_f(y, z) ≤ σ D_f(y, x) holds. Since
our algorithm is based on this test, it is most
easy to implement when rf is explicitly invertible. We point out that this
case covers a wide range of important applications. For example, Bregman
functions with this property are readily available when the feasible set C is
an orthant, a polyhedron, a box, or a ball (see [9]).
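As an illustration of how cheap the test is when ∇f is invertible, the sketch below implements it for the x log x Bregman function on the positive orthant; the acceptance inequality D_f(y, z) ≤ σ D_f(y, x) is our reading of Definition 3.1 as reconstructed above, and membership v ∈ T(y) is assumed to be supplied by whatever routine produced the candidate pair (y, v).

import numpy as np

def kl(x, y):
    # Bregman distance of f(x) = sum_j x_j log x_j (Kullback-Leibler divergence).
    with np.errstate(divide='ignore', invalid='ignore'):
        return float(np.sum(np.where(x > 0, x * np.log(x / y), 0.0) - x + y))

def accepts(x, y, v, c, sigma):
    # grad f(x) = 1 + log x is explicitly invertible, so the z of Definition 3.1,
    # i.e. the solution of grad f(z) = grad f(x) - c v, is simply z = x * exp(-c v).
    z = np.asarray(x, float) * np.exp(-c * np.asarray(v, float))
    return kl(np.asarray(y, float), z) <= sigma * kl(np.asarray(y, float), np.asarray(x, float))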
Another important observation is that for σ = 0 we have D_f(y, z) = 0, that is, y = z.
Hence, the only point which satisfies Definition 3.1 for σ = 0 is precisely the
exact solution of the proximal subproblem. Therefore our view of inexact
solution of generalized proximal subproblems is quite natural. We note,
in the passing, that it is motivated by the approach developed in [40] for
the classical ("linear") proximal point method. In that case, Definition 3.1
(albeit slightly modified) is equivalent to saying that the subproblems are
solved within fixed relative error tolerance (see also [37]). Such an approach
seems to be computationally more realistic/constructive than the common
summable-error-type requirements.
Regarding the existence of inexact solutions, the situation is clearly even
easier than for exact methods. Since we are supposing that the generalized
proximal problem (5) has always an exact solution in int(C), this problem
will certainly always have (possibly many) inexact solutions (y; v) satisfying
also y 2 C.
Now we can formally state our inexact generalized proximal method.
Algorithm 1 Inexact Generalized Proximal Method.
Initialization: Choose some c > 0, and the error tolerance parameter σ ∈
[0, 1). Choose some x^0 ∈ int(C).
Iteration k: Choose the regularization parameter c_k ≥ c, and find (y^k, v^k),
an inexact solution with tolerance σ of
0 ∈ c_k T(y) + ∇f(y) - ∇f(x^k), (9)
satisfying
y^k ∈ C; (10)
repeat.
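To make the scheme concrete, the toy run below assumes the iterate update ∇f(x^{k+1}) = ∇f(x^k) - c_k v^k (equivalently x^{k+1} = z in the notation of Definition 3.1); this update, the operator T(x) = x - a, the Bregman function x log x, and all numerical constants are our illustrative choices, not statements from the paper.

import numpy as np

def kl(x, y):
    with np.errstate(divide='ignore', invalid='ignore'):
        return float(np.sum(np.where(x > 0, x * np.log(x / y), 0.0) - x + y))

def algorithm1_toy(a, x0, c=1.0, sigma=0.5, outer=60, inner=25):
    # Toy instance: T(x) = x - a (paramonotone), C = nonnegative orthant,
    # f(x) = sum_j x_j log x_j.  The inner loop solves the separable subproblem
    # c (y - a) + log y - log x = 0 by safeguarded Newton, so v = y - a lies in T(y)
    # and the tolerance test D_f(y, z) <= sigma * D_f(y, x) is comfortably met.
    x = np.asarray(x0, float)
    for _ in range(outer):
        y = x.copy()
        for _ in range(inner):
            g = c * (y - a) + np.log(y / x)
            y = np.maximum(y - g / (c + 1.0 / y), 0.5 * y)   # keep y > 0, halve at most
        v = y - a                                            # v in T(y)
        z = x * np.exp(-c * v)                               # grad f(z) = grad f(x) - c v
        # Definition 3.1 test (holds here since the inner solve is nearly exact):
        # kl(y, z) <= sigma * kl(y, x)
        x = z                                                # assumed update: x_{k+1} = z
    return x

# Example: algorithm1_toy(np.array([2.0, -1.0]), np.array([1.0, 1.0])) approaches [2, 0],
# the solution of VIP(T, R^n_+) for this T, i.e. max(a, 0) componentwise.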
We have already discussed the possibility of solving inexactly (9) with condition
(10). Another important observation is that since for
subproblem solution coincides with the exact one, in that case Algorithm
1 produces the same iterates as the standard exact generalized proximal
method. Hence, all our convergence results (some of them are new!) apply
also to the exact method. For oe 6= 0 however, there is no direct relation
between the iterates of Algorithm 1 and
considered in [14]. The advantage of our approach is that it allows an attractive
constructive stopping criterion (given by Definition 3.1) for approximate
solution of subproblems (at least, when rf is invertible).
Under our hypothesis, Algorithm 1 is well-defined. From now on, fx k g
and f(y k ; v k )g are sequences generated by Algorithm 1. Therefore, by the
construction of Algorithm 1 and by Definition 3.1, for all k it holds that
We now state our main convergence result. First, recall that a maximal
monotone operator T is paramonotone ([4, 5], see also [9, 21]) if u ∈ T(x), v ∈ T(y) and ⟨u - v, x - y⟩ = 0 imply that u ∈ T(y) and v ∈ T(x).
Some examples of paramonotone operators are subdifferentials of proper
closed convex functions, and strictly monotone maximal monotone operators
Theorem 3.2 Suppose that VIP(T; C) has solutions and one of the following
two conditions holds :
1.
2. T is paramonotone.
Then the sequence fx k g converges to a solution of VIP(T; C).
Thus we establish convergence of our inexact algorithm under assumptions
which are even weaker than the ones that have been used, until now, for
exact algorithms. Specifically, in the paramonotone case, we get rid of the
"pseudomonotonicity" assumption on T [6] which can be stated as follows:
Take any sequence fy k g ae Dom(T ) converging to y and any sequence
for each x ∈ Dom(T) there exists an element
v ∈ T(y) such that
Until now, this (or some other, related) technical assumption was employed
in the analysis of all generalized proximal methods (e.g., [14, 7, 6]). Among
other things, this resulted in splitting the proof of convergence for the case of
minimization and for paramonotone operators (the subdifferential of a convex
function is paramonotone, but it need not satisfy the above condition).
And of course, the additional requirement of pseudomonotonicity makes the
convergence result for paramonotone operators weaker. Since for the tolerance
parameter our Algorithm 1 reduces to the exact generalized
proximal method, Theorem 3.2 also constitutes a new convergence result for
the standard setting of exact proximal algorithms. We note that the stronger
than convergence consistency property of Bregman functions established in
this paper is crucial for obtaining this new result.
To obtain this stronger result, the proof will be somewhat more involved
than the usual, and some auxiliary analysis will be needed. However, we think
that this is worthwhile since it allows us to remove some (rather awkward)
additional assumptions.
4 Convergence Analysis
Given sequences fx k g, fy k g and fv k g generated by Algorithm 1, define ~
X as
all points x 2 C for which the index set
is finite. For x 2 ~
X, define k(x) as the smallest integer such that
0:
Of course, the set ~
X and the application k(\Delta) depend on the particular sequences
generated by the algorithm. These definitions will facilitate the
subsequent analysis. Note that, by monotonicity of T ,
and in fact,
Lemma 4.1 For any s 2 ~
X and k - k(s), it holds that
Proof. Take s 2 ~
X and k - k(s). Using Lemma 2.6, we get
By (14) and (15), we further obtain
which proves the first inequality in (16). Since the Bregman distance is
always nonnegative and oe 2 [0; 1), we have
The last inequality in (16) follows directly from the hypothesis s 2 ~
k(s) and the respective definitions.
As an immediate consequence, we obtain that the sequence fD f (-x; x k )g
is decreasing for any - x 2 X .
Corollary 4.2 If the sequence fx k g has an accumulation point - x 2 X then
the whole sequence converges to - x.
Proof. Suppose that some subsequence fx k j g converges to - x 2 X . Using
Definition 2.1 (item 4), we conclude that
lim
Since the whole sequence fD f (-x; x k )g is decreasing and it has a convergent
subsequence, it follows that it converges :
lim
Now the desired result follows from Theorem 2.4.
Corollary 4.3 Suppose that ~
;. Then the following statements hold:
1. The sequence fx k g is bounded ;
2.
3. For any s 2 ~
4. The sequence fy k g is bounded .
Proof. Take some s 2 ~
X. From Lemma 4.1 it follows that for all k greater
than k(s), D f (s; x k Therefore, D f (s; x k ) is bounded and
from Definition 2.1 (item 3), it follows that fx k g is bounded.
By Lemma 4.1, it follows that for any r 2 N
Therefore
Since r is arbitrary and the terms of both summations are nonnegative, (recall
the definition of k(s)), it follows that we can take the limit as r !1 in both
sides of the latter relation. Taking further into account that fc k g is bounded
away from zero, the second and third assertions of the Corollary easily follow.
As consequences, we also obtain that
lim
and
lim
X: (18)
Suppose now that fy k g is unbounded. Then there exists a pair of subsequences
and fy k j g such that fx k j g converges but fy k j g diverges. How-
ever, by (17) and Theorem 2.4, fy k j g must converge (to the same limit as
which contradicts the assumption. Hence, fy k g is bounded.
The next proposition establishes the first part of Theorem 3.2, namely
the convergence of the inexact generalized proximal algorithm in the case
when
Proposition 4.4 If ~
converges to some x 2 int(C)
which is a solution of VIP(T; C).
Proof. By Corollary 4.3, it follows that fx k g is bounded, so it has some
accumulation point - x 2 C, and for some subsequence fx k j g,
lim
Take any - x 2 ~
and, by H2,
lim
it follows that
lim
But the latter is impossible because D f (-x; x k ) is a decreasing sequence, at
least for k - k(-x) (by Lemma 4.1). Hence,
Next, we prove that -
x is a solution of VIP(T; C). By (17), we have that
lim
Because, by (15), D f (y
lim
converges to -
Theorem 2.4 and (19) imply that fy k j g also
converges to -
x. Applying Theorem 2.4 once again, this time with (20), we
conclude that fx k j +1 g also converges to -
x. Since -
and rf is
continuous in int(C), we therefore conclude that
lim
using (14) we get
lim
Now the fact that fy k j
together with the maximality
of T , implies that 0 2 T (-x). Thus we have a subsequence fx k j g
converging to -
. By Corollary 4.2, the whole sequence fx k g converges
to - x.
We proceed to analyze the case when T is paramonotone. By (18), we
already know that if s
the limit with respect to v k (for example, using the technical assumption of
pseudomonotonicity stated above), then we could conclude that 0 - h-v; -
x is an accumulation point of fx k g (hence
also of fy k g), and - v 2 T (-x); v s 2 T (s). By paramonotonicity, it follows
that v s 2 T (-x). Now by monotonicity, we further obtain that for any x 2 C
which means that -
However, in the absence of the assumption of pseudomonotonicity one cannot
use this well-established line of argument. To overcome the difficulty resulting
from the impossibility of directly passing onto the limit as was done above,
we shall need some auxiliary constructions.
Let A be the affine hull of the domain of T . Then there exists some V , a
subspace of R n , such that
for any x 2 Dom(T ). Denote by P the orthogonal projection onto V , and
for each k define
The idea is to show the following key facts:
k g has an accumulation point :
With these facts in hand, we could pass onto the limit in a manner similar
to the above, and complete the proof.
First, note that
This can be verified rather easily: if x 62 Dom(T ) then both sets in (21) are
empty, so it is enough to consider x
then
so that By monotonicity of T , for
any z 2 Dom(T ) and any w 2 T (z), it holds that
holds that
Therefore
which implies that u 2 T (x) by the maximality of T . Since also
follows that u 2 T
Lemma 4.5 If ~
some subsequence of fu k g
is bounded.
Proof. We assumed that Dom(T
and let P be the projection operator onto V discussed above. In particular,
Furthermore, the
operator -
defined by
is maximal monotone as an operator on the space V (this can be easily
verified using the maximal monotonicity of T on R n ). We also have that
T is bounded around zero [30]. So, P ffi T is
bounded around -
x, i.e., there exist some r ? 0 and M - 0 such that
Since ~
X. Therefore, by the definition of
~
X, there exists an infinite subsequence of indices fk j g such that
Note that u holds that
for each j,
Then for each j there exists -
Furthermore,
where the first inequality is by the monotonicity of T , and the second is
by (22). Using further the Cauchy-Schwarz and triangular inequalities, we
obtain
Since the sequence fy k g is bounded (Corollary 4.3, item 4), it follows that
We conclude the analysis by establishing the second part of Theorem 3.2.
Proposition 4.6 Suppose X* ≠ ∅ and T is paramonotone. Then {x^k} converges
to some x̄ ∈ X*.
Proof. If ~
then the conclusion follows from Proposition 4.4.
Suppose now that
~
By Lemma 4.5, it follows that some subsequence of fu k g is bounded. Since
X, from Corollary 4.3 it follows that the whole sequence fx k g is
bounded. Hence, there exist two subsequences fx k j g, fu k j g which both converge
lim
lim
Recall from the proof of Lemma 4.5 that u 4.3
(item 2), we have that
lim
Therefore, by Theorem 2.4,
lim
and
by the maximality of T . Take now some s 2 X . There exists some v s 2 T
such that
for all x 2 C. Therefore, using also the monotonicity of T ,
Note that for any x 2 Dom(T )
Taking passing onto the limit as j !1, (18) implies that
Together with (23), this implies that
Using now the paramonotonicity of T , we conclude that
Finally, for any x 2 C, we obtain
xi
0:
Therefore x̄ ∈ X*. Since
we have a subsequence {x^{k_j}} converging to x̄ ∈ X*,
from Corollary 4.2 it follows that the whole sequence {x^k} converges to
x̄.
--R
Legendre functions and the method of random Bregman projections.
The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming.
Produits infinis de résolvantes.
An iterative solution of a variational inequality for certain monotone operators in a Hilbert space.
Corrigendum to
A generalized proximal point algorithm for the variational inequality problem in a Hilbert space.
Enlargement of monotone operators with applications to variational inequalities.
A variable metric proximal point algorithm for monotone operators.
An interior point method with Bregman functions for the variational inequality problem with paramonotone operators
The proximal minimization algorithm with D-functions
Convergence analysis of proximal-like optimization algorithm using Bregman functions
Variational Inequalities and Complementarity Problems
Nonlinear proximal point algorithms using Bregman functions
Approximate iterations in Bregman-function-based proximal algorithms
On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
Multiplicative iterative algorithms for convex programming
Finite termination of the proximal point algorithm.
New proximal point algorithms for convex minimization.
On some properties of generalized proximal point methods for quadratic and linear programming.
On some properties of generalized proximal point methods for the variational inequality problem.
On some properties of paramonotone operators.
Proximal minimization methods with generalized Bregman functions.
The proximal algorithm.
Asymptotic convergence analysis of the proximal point algorithm.
Régularisation d'inéquations variationnelles par approximations successives.
Proximité et dualité dans un espace Hilbertien.
Complementarity problems.
Weak convergence theorems for nonexpansive mappings in Banach spaces.
Local boundedness of nonlinear monotone operators.
On the maximality of sums of nonlinear monotone operators.
Convex Analysis.
Augmented Lagrangians and applications of the proximal point algorithm in convex programming.
Monotone operators and the proximal point algorithm.
A truly globally convergent Newton-type method for the monotone nonlinear complementarity problem
A comparison of rates of convergence of two inexact proximal point algorithms.
Forcing strong convergence of proximal point iterations in a Hilbert space
A globally convergent inexact Newton method for systems of monotone equations.
A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator
A hybrid projection-proximal point algorithm
Convergence of proximal-like algorithms
--TR
--CTR
Lev M. Bregman , Yair Censor , Simeon Reich , Yael Zepkowitz-Malachi, Finding the projection of a point onto the intersection of convex sets via projections onto half-spaces, Journal of Approximation Theory, v.124 n.2, p.194-218, October | maximal monotone operator;bregman function;variational inequality;proximal point method |
351137 | Performance Evaluation of Conservative Algorithms in Parallel Simulation Languages. | AbstractParallel discrete event simulation with conservative synchronization algorithms has been used as a high performance alternative to sequential simulation. In this paper, we examine the performance of a set of parallel conservative algorithms that have been implemented in the Maisie parallel simulation language. The algorithms include the asynchronous null message algorithm, the synchronous conditional event algorithm, and a new hybrid algorithm called Accelerated Null Message that combines features from the preceding algorithms. The performance of the algorithms is compared using the Ideal Simulation Protocol. This protocol provides a tight lower bound on the execution time of a simulation model on a given architecture and serves as a useful base to compare the synchronization overheads of the different algorithms. The performance of the algorithms is compared as a function of various model characteristics that include model connectivity, computation granularity, load balance, and lookahead. | Introduction
Parallel discrete event simulation (PDES) refers to the execution of a discrete event simulation
program on parallel or distributed computers. Several algorithms have been developed to synchronize
the execution of PDES models, and a number of studies have attempted to evaluate the performance
of these algorithms on a variety of benchmarks. A survey of many existing simulation protocols and
their performance studies on various benchmarks appears in [9, 11].
A number of parallel simulation environments have also been developed that provide the modeler
with a set of constructs to facilitate design of PDES models[4]. One of these is Maisie[3], a
parallel simulation language that has been implemented on both shared and distributed memory
parallel computers. Maisie was designed to separate the model from the underlying synchronization
protocol, sequential or parallel, that is used for its execution. Efficient sequential and parallel optimistic
execution of Maisie models have been described previously[2]. In this paper, we evaluate
the performance of a set of conservative algorithms that have been implemented in Maisie. The set
of algorithms include the null message algorithm[6], the conditional event[7] algorithm, and a new
conservative algorithm called the Accelerated Null Message (ANM) algorithm that combines the
preceding two approaches. Unlike previous performance studies which use speedup or throughput
as their metric of comparison, we use efficiency as our primary metric. We define the efficiency of
a protocol using the notion of the Ideal Simulation Protocol or ISP, as introduced in this paper.
The performance of a parallel simulation model depends on a variety of factors which include partitioning
of the model among the processors, the communication overheads of the parallel platform
including both hardware and software overheads, and the overheads of the parallel synchronization
algorithm. When a parallel model fails to yield expected performance benefits, the analyst has few
tools with which to ascertain the underlying cause. For instance, it is difficult to determine whether
the problem is due to inherent lack of parallelism in the model or due to large overheads in the
implementation of the synchronization protocol. The Ideal Simulation Protocol offers a partial solution
to this problem. ISP allows an analyst to experimentally identify the maximum parallelism
that exists in a parallel implementation of a simulation model, assuming that the synchronization
overhead is zero. In other words, for a specific decomposition of a model on a given parallel ar-
chitecture, it is possible to compute the percentage degradation in performance that is due to the
simulation algorithm, which directly translates into a measure of the relative efficiency of the synchronization
scheme. Thus, ISP may be used to compute the efficiency of a given synchronization
algorithm and provide a suitable reference point to compare the performance of different algorithms,
including conservative, optimistic, and adaptive techniques. Previous work has relied on theoretical
critical path analyses to compute lower bounds on model execution times. These bounds can be
very approximate because they ignore all overheads, including architectural and system overheads
over which the parallel simulation algorithm has no control.
The remainder of the paper is organized as follows: the next section describes the conservative
algorithms that have been used for the performance study reported in this paper. Section 3 describes
the Ideal Simulation Protocol and its use in separating protocol-dependent and independent over-
heads. Section 4 describes implementation issues of synchronization algorithms, including language
level constructs to support conservative algorithms. Section 5 presents performance comparisons of
the three conservative algorithms with the lower bound prediction of ISP. Related work in the area
is described in Section 6. Section 7 is the conclusion.
2 Conservative Algorithms
In parallel discrete event simulation, the physical system is typically viewed as a collection of physical
processes (PPs). The simulation model consists of a collection of Logical Processes (LPs), each of
which simulates one or more PPs. The LPs do not share any state variables. The state of an LP
is changed via messages which correspond to the events in the physical system. In this section, we
assume that each LP knows the identity of the LPs that it can communicate with. For any LP p , we
use the terms dest-set p and source-set p to respectively refer to the set of LPs to which LP p sends
messages and from which it receives messages.
The causality constraint of a simulation model is normally enforced in the simulation algorithm
by ensuring that all messages to a Logical Process (LP) are processed in an increasing timestamp
order. Distributed simulation algorithms are broadly classified into conservative and optimistic based
on how they enforce this. Conservative algorithms achieve this by not allowing an LP to process a
message with timestamp t until it can ensure that the LP will not receive any other message with
a timestamp lower than t. Optimistic algorithms, on the other hand, allow events to be potentially
processed out of timestamp order. Causality errors are corrected by rollbacks and re-computations.
In this paper, we focus on the conservative algorithms.
In order to simplify the description of the algorithms, we define the following terms. Each term
is defined for an LP p at physical time r. We assume that the communication channels are FIFO.
- Earliest Input Time (EIT p (r)): Lower bound on the (logical) timestamp of any message that LP p may receive in the interval (r, ∞).
- Earliest Output Time (EOT p (r)): Lower bound on the timestamp of any message that LP p may send in the interval (r, ∞).
- Earliest Conditional Output Time (ECOT p (r)): Lower bound on the timestamp of any message that LP p may send in the interval (r, ∞), assuming that LP p will not receive any more messages in that interval.
- Lookahead (la p (r)): Lower bound on the duration after which the LP will send a message to another LP.
The value of EOT and ECOT for a given LP depends on its EIT , the unprocessed messages in
its input queue, and its lookahead. Figure 1 illustrates the computation of EOT and ECOT for
an LP that models a FIFO server. The server is assumed to have a minimum service time of one
time unit, which is also its lookahead. The three scenarios in the figure respectively represent the
contents of the input message queue for the LP at three different points in its execution; messages already processed by the LP are shown as shaded boxes.

Figure 1: The computation of EOT and ECOT. The LP models a server with a minimum service time of one time unit; shaded messages have been processed by the LP.

Let T next refer to the earliest timestamp
of a message in the input queue. In (a), since T next is less than EIT, both EOT and ECOT are equal to T next plus the minimum service time. In (b), EOT is equal to EIT plus the minimum service time, because the LP may still receive messages with a timestamp less than T next = 40, but no smaller than EIT. However, the ECOT is equal to T next plus the minimum service time because, if the LP does not receive any more messages, its earliest next output will be in response to processing the message with timestamp T next. In (c), T next is infinite because there are no unprocessed messages. Therefore, ECOT is equal to infinity.
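As a concrete illustration of these rules, the following C sketch computes EOT and ECOT for a FIFO server LP from its EIT, the timestamp of its earliest unprocessed message, and its lookahead; the type and function names are illustrative and not part of the Maisie runtime.

#include <math.h>   /* for INFINITY */

typedef struct { double eot, ecot; } Bounds;

/* eit:    earliest input time of the LP
 * t_next: timestamp of the earliest unprocessed message (INFINITY if none)
 * la:     lookahead, here the minimum service time of one time unit        */
static Bounds fifo_server_bounds(double eit, double t_next, double la)
{
    Bounds b;
    /* Unconditional bound: a message with timestamp no smaller than EIT may
       still arrive, so the next output follows the smaller of EIT and T_next. */
    b.eot = (t_next < eit ? t_next : eit) + la;
    /* Conditional bound: if no further messages arrive, the next output is
       triggered by the earliest unprocessed message; INFINITY if the input
       queue holds no unprocessed messages (scenario (c) above).               */
    b.ecot = (t_next == INFINITY) ? INFINITY : t_next + la;
    return b;
}

For scenario (b) above, fifo_server_bounds(EIT, 40, 1) returns EOT = EIT + 1 and ECOT = 41, matching the figure.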
Various conservative algorithms differ in how they compute the value of EIT for each LP in the
system. By definition, in a conservative algorithm, at physical time r, LP p can only process messages
with timestamp - EIT p (r). Therefore, the performance of a conservative algorithm depends on how
efficiently and accurately each LP can compute the value of its EIT. In the following sections, we
discuss how three different algorithms compute EIT.
2.1 Null Message Algorithm
The most common method used to advance EIT is via the use of null messages. A sufficient condition
for this is for an LP to send a null message to every LP in its dest set, whenever there is a change
in its EOT. Each LP computes its EIT as the minimum of the most recent EOT received from every
LP in its source-set. Note that a change in the EIT of an LP typically implies that its EOT will
also advance. A number of techniques have been proposed to reduce the frequency with which null
messages are transmitted: for instance, null messages may be piggy-backed with regular messages
or they may be sent only when an LP is blocked, rather than whenever there is a change in its EOT.
The null message algorithm requires that the simulation model not contain zero-delay cycles: if
the model contains a cycle, the cycle must include at least one LP with positive lookahead (i.e. if the
LP accepts a message with timestamp t, any message generated by the LP must have a timestamp
that is strictly greater than t)[13].
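A minimal sketch of the receiver-side bookkeeping, assuming each LP records the latest EOT announced on every incoming channel (array sizes and names are illustrative):

#define MAX_SOURCES 64

static double last_eot[MAX_SOURCES];  /* latest EOT received from each source LP */
static int    num_sources;            /* size of this LP's source-set            */

/* Record an EOT announcement (a null message, or one piggy-backed on a
 * regular message) and return the resulting EIT of this LP.              */
double record_eot(int src, double eot)
{
    last_eot[src] = eot;              /* FIFO channels: announced EOTs only advance */
    double eit = last_eot[0];
    for (int i = 1; i < num_sources; i++)
        if (last_eot[i] < eit)
            eit = last_eot[i];
    return eit;   /* messages with timestamps no greater than EIT are now safe */
}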
2.2 Conditional Event Algorithm
In the conditional event algorithm [7], the LPs alternate between an EIT computation phase, and
an event processing phase. For the sake of simplicity, we first consider a synchronous version which
ensures that no messages are in transit when the LPs reach the EIT computation phase. In such
a state, the value of EIT for an LP p is equal to the minimum of ECOT over all the LPs in the
transitive closure 1 of source-set p .
The algorithm can be made asynchronous by defining the EIT to be the minimum of all ECOT
values for the LPs in the transitive closure of the source-set and the timestamps of all the messages
in transit from these LPs. Note that this definition of EIT is the same as the definition of Global
Virtual Time (GVT) in optimistic algorithms. Hence, any of the GVT computation algorithms [5]
can be used. Details of one such algorithm are described in section 4.4.
2.3 Accelerated Null Message Algorithm
We superimpose the null message protocol on an asynchronous conditional event protocol which
allows the null message protocol to perform unhindered. The EIT for any LP is computed as
the maximum of the values computed by the two algorithms. This method has the potential of
combining the efficiency of the null message algorithm in presence of good lookahead with the
ability of the conditional event algorithm to execute even without lookahead - a scenario in which the null message algorithm alone would deadlock. (Footnote 1: the transitive closure of the source-set is, in many applications, almost the same as the set of all LPs in the system.) In models with poor lookahead, where it may take many rounds of null messages to sufficiently advance the EIT of an LP, ANM could directly compute the
earliest global event considerably faster. Message piggy-backing is used extensively to reduce the
number of synchronization messages, and the global ECOT computation at a node is initiated only
when the node is otherwise blocked.
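The combination rule itself is simply a maximum over the two lower bounds; a trivial C sketch (with illustrative names) follows.

/* EIT under ANM: the null message estimate and the global ECOT estimate are
 * both valid lower bounds, so the LP may safely use the larger of the two. */
static inline double anm_eit(double eit_from_null_msgs, double eit_from_ecot)
{
    return eit_from_null_msgs > eit_from_ecot ? eit_from_null_msgs
                                              : eit_from_ecot;
}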
3 The Ideal Simulation Protocol

Most experimental performance studies of parallel simulations have used speedup or throughput
(i.e., number of events executed per unit time ) as the performance metric. While both metrics
are appropriate for evaluating benefits from parallel simulation, they do not shed any light on the
efficiency of a simulation protocol.
A number of factors affect the execution time of a parallel simulation model. We classify the
factors into two categories - protocol-independent factors and protocol-specific factors. The former
refer to the hardware and software characteristics of the simulation engine, like the computation
speed of the processor, communication latency, and cost of a context switch, together with model
characteristics, like partitioning and LP to processor mapping, that determine the inherent parallelism
in the model. The specific simulation protocol that is used to execute a model has relatively
little control over the overhead contributed by these factors. In contrast, the overhead due to the
protocol-specific factors does depend on the specific simulation protocol that is used for the execution
of a model. For conservative protocols, these may include the overhead of processing and
propagating null messages[6] or conditional event messages[7], and the idle time of an LP that is
blocked because its EIT has not yet advanced sufficiently to allow it to process a message that is
available in its input queue. In the case of optimistic protocols, the protocol-specific overheads may
include state saving, rollback, and the Global Virtual Time (GVT) computation costs.
A separation of the cost of the protocol-specific factors from the total execution time of a parallel
simulation model will lend considerable insight into its performance. For instance, it will allow the
analyst to isolate a model where performance may be poor due to lack of inherent parallelism in the
model (something that is not under the control of the protocol) from one where the performance
may be poor due to a plethora of null messages (which may be addressed by using appropriate
optimizations to reduce their count). In the past, critical path analyses have been used to prove
theoretical lower bounds and related properties[12, 1] of a parallel simulation model. However, such
analyses do not include the cost of many protocol-independent factors in the computation of the
critical path time. Excluding these overheads, which are not contributed by the simulation protocol,
means that the computed bound is typically a very loose lower bound on the execution time of
the model, and is of relatively little practical utility to the simulationist either in improving the
performance of a given model or in measuring the efficiency of a given protocol.
We introduce the notion of an Ideal Simulation Protocol (ISP), that is used to experimentally
compute a tight lower bound on the execution time of a simulation model on a given architecture.
Although ISP is based on the notion of critical path, it computes the parallel execution time by
actually executing the model on the parallel architecture. The resulting lower bound predicted by ISP
is realistic because it includes overheads due to protocol-independent factors that must be incurred
by any simulation protocol that is used for its execution, and assumes that the synchronization
overheads are zero.
The primary idea behind ISP is simple: the model is executed once, and a trace of the messages
accepted by each LP is collected locally by the LP. In a subsequent execution, an LP may simply
use the trace to locally deduce when it is safe to process an incoming message; no synchronization
protocol is necessary. As no synchronization protocol is used to execute the model using ISP,
the measured execution time will not include any protocol-specific overheads. However, unlike the
critical path analyses, the ISP-predicted bound will include the cost of all the protocol-independent
factors that are required in the parallel execution of a model. Given this lower bound, it is easy to
measure the efficiency of a protocol as described in the next section.
Besides serving as a reference point for computing the efficiency of a given protocol, a representative
pre-simulation with ISP can yield a realistic prediction of the available parallelism in a
simulation model. If the speedup potential is found to be low, the user can modify the model, which
may include changing the partitioning of the system, changing the assignment of LPs to processors,
or even moving to a different parallel platform, in order to improve the parallelism in the model.
4 Implementation Issues
The performance experiments were executed using the Maisie simulation language. Each algorithm,
including ISP, was implemented in the Maisie runtime system. A programmer develops a Maisie
model and selects among the available simulation algorithms as a command line option. This
separation of the algorithm from the simulation model permits a more consistent comparison of
the protocols than one where the algorithms are implemented directly into the application. In this
section, we briefly describe the Maisie language and the specific constructs that have been provided
to support the design of parallel conservative simulations. The section also describes the primary
implementation issues for each algorithm.
4.1 Maisie Simulation Environment
Maisie [3] is a C-based parallel simulation language, where each program is a collection of entity
definitions and C functions. An entity definition (or an entity type) describes a class of objects. An
instance, henceforth referred to simply as an entity, represents a specific LP in the model;
entities may be created dynamically and recursively.
The events in the physical system are modeled by message communications among the corresponding
entities. Entities communicate with each other using buffered message passing. Every
entity has a unique message buffer; asynchronous send and receive primitives are provided to deposit
and remove messages from the buffer respectively. Each message carries a timestamp which
corresponds to the simulation time of the corresponding event. Specific simulation constructs provided
by Maisie are similar to those provided by other process-oriented simulation languages. In
addition, Maisie provides constructs to specify the dynamic communication topology of a model
and the lookahead properties of each entity. These constructs were used to investigate the impact
of different communication topologies and lookahead patterns on the performance of each of the
conservative algorithms described earlier.
Figure
2 is a Maisie model for an FCFS server. This piece of code constructs the tandem queue
system described in Section 5, and is used, with minor changes, in the experiments described there.
The entity "Server" defines a message type "Job" which is used to simulate the requests for service
to the server. The body of the entity consists of an unbounded number of executions of the wait
statement (line 12). Execution of this statement causes the entity to block until it receives a message
of the specified type (which in this case is "Job"). On receiving a message of this type, the entity
suspends itself for "JobServiceTime" simulation time units to simulate servicing the job by executing
the hold statement (line 13). Subsequently it forwards the job to the adjacent server identified by
variable "NextServer," using the invoke statement (line 14).
The performance of many conservative algorithms depends on the connectivity of the model. In
the absence of any information, the algorithm must assume that every entity belongs to the source-
set and dest-set of every other entity. However, Maisie provides a set of constructs that may be used
by an entity to dynamically add or remove elements from these sets: add_source(), del_source(), add_dest(), and del_dest(). In Figure 2, the server entity specifies its connectivity to the other entities using the add_source() and add_dest() functions (lines 8-9).
Lookahead is another important determinant of performance for conservative algorithms. Maisie
provides a pre-defined function setlookahead(), that allows the user to specify its current lookahead
to the run-time system. The run-time system uses this information to compute a better estimate of
EOT and ECOT than may be feasible otherwise. For instance, when the server is idle, it is possible
for the server to precompute the service time of the next job (line 15), which becomes its lookahead and is transmitted to the run-time system using the function setlookahead() (line 11) before it suspends itself.

clocktype MeanTime;
4  ename NextServer;
6  message Job{};
8  add_source(PrevServer);
9  add_dest(NextServer);
14 invoke NextServer with Job{};

Figure 2: Maisie entity code for a "First Come First Served" (FCFS) server with user-defined lookahead
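A hedged C sketch of the precomputation idea described above: the next service time is drawn before the entity suspends, so that it can be reported to the runtime via setlookahead(). The expon() helper, the exact parameterization of the shifted-exponential distribution, and the setlookahead() signature shown here are assumptions made for illustration.

double expon(double mean);            /* assumed exponential random variate generator */
void   setlookahead(double la);       /* Maisie primitive; signature assumed here      */

/* Draw the next job's service time ahead of time and publish it as the
 * entity's lookahead before the entity blocks waiting for the next job.  */
double precompute_service_time(double mean)
{
    double service = 1.0 + expon(mean);   /* shifted exponential: minimum of one unit */
    setlookahead(service);                /* improves EOT/ECOT while the server idles */
    return service;                       /* used later by the hold statement         */
}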
4.2 Entity Scheduling
If multiple entities are mapped to a single processor, a scheduling strategy must be implemented to
choose one among a number of entities that may be eligible for execution at a given time. An entity
is eligible to be executed if its input queue contains a message with a timestamp that is lower than
the current EIT of the entity. In the Maisie runtime, once an entity is scheduled, it is allowed to
execute as long as it remains eligible. This helps reduce the overall context switching overhead since
more messages are executed each time the entity is scheduled but may result in the starvation of
other entities mapped on the same processor. This, in turn, may cause blocking of other processors
that might be waiting for messages from these entities, resulting in sub-optimal performance. Note,
however, that without prior knowledge of the exact event dependencies, no scheduling scheme can
be optimal.
An alternative scheduling strategy, used in the Global Event List algorithm, is to schedule events
across all entities mapped to a processor in the order of their timestamps. We examine and discuss
the overheads caused by these two scheduling strategies in Section 5.
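The following C-style sketch captures the Maisie strategy described above; the Entity structure and helper functions are hypothetical stand-ins for the runtime's internal data structures.

typedef struct Entity Entity;
struct Entity { double eit; /* plus input queue, state, ... */ };

double earliest_timestamp(const Entity *e); /* earliest unprocessed message, or INFINITY */
void   process_next_message(Entity *e);     /* execute one event; may send messages      */

/* Maisie-style scheduling (sketch): once picked, an entity keeps executing
 * while it still has safe messages, i.e., messages no later than its EIT.  */
void schedule_round(Entity *entities, int n)
{
    for (int i = 0; i < n; i++) {
        Entity *e = &entities[i];
        while (earliest_timestamp(e) <= e->eit)
            process_next_message(e);
        /* the entity is swapped out only once no safe message remains */
    }
}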
4.3 Null Message Algorithm
One of the tunable parameters for any null message scheme is the frequency with which null messages
are transmitted. Different alternatives include eager null messages - an LP sends null messages to
all successors as soon as its EOT changes, lazy null messages - an LP sends null messages to all
successors only when it is idle, or demand driven - an LP sends null message to a destination only
when the destination demands to know the value of its EOT.
The performance of different null message transmission schemes including eager, lazy, and demand
driven schemes have been discussed in [17]. The experiments found that demand driven
schemes performed poorly, whereas the lazy null message scheme combined with eager event send-
ing, (i.e. regular messages sent as soon as they are generated, as is the case in our runtime), was
found to marginally outperform other schemes. We use the lazy null message scheme in our experi-
ments. This has the advantage of reducing the overall number of null messages because several null
messages may be replaced by the latest message. However, the delay in sending null messages may
delay the processing of real messages at the receiver.
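A sketch of the lazy scheme under the assumptions of this section: null messages carrying the current EOT go out only when the entity has run out of safe messages, and only if the EOT has advanced since it was last announced (all names are illustrative).

typedef struct {
    double last_announced_eot;   /* EOT value most recently sent out   */
    int    num_dests;            /* size of the dest-set               */
    int    dest[64];             /* identities of destination entities */
} NullSender;

double compute_eot(void);                 /* assumed: current EOT of this entity */
void   send_null(int dest, double eot);   /* assumed: transmit a null message    */

/* Called when the entity blocks (lazy scheme). */
void lazy_null_messages(NullSender *s)
{
    double eot = compute_eot();
    if (eot > s->last_announced_eot) {
        for (int i = 0; i < s->num_dests; i++)
            send_null(s->dest[i], eot);
        s->last_announced_eot = eot;
    }
}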
4.4 Conditional Event Computation
For the conditional event and the ANM protocol, it is necessary to periodically compute the earliest
conditional event in the model. As discussed in section 2.3, an asynchronous algorithm allows the
earliest event time to be computed without the need to freeze the computation at each node.
In our runtime system, this computation takes place in phases. At the start of a new phase i, the jth processor computes E ij = min(ECOT ij, M ij), where ECOT ij is its ECOT value for the phase i, and M ij is the smallest timestamp that it sends during the phase; M ij accounts for the messages in transit, since in the worst case none of the messages sent by the processor during the phase may have been received and accounted for in the E i values of other processors. Thus, the global ECOT for the phase i is calculated as the following:

ECOT i = min { E ij : 1 <= j <= n },

where n is the number of processors. Messages sent during earlier phases do not need to be considered because a new phase starts only when all messages from the previous phase have been received. This, along with the FIFO communication assumption, implies that E ij takes into account all messages sent to the processor by any processor before the current phase.
During the phase i, every processor sends an ECOT message which contains the minimum of its ECOT and the smallest timestamp it has sent in the phase, i.e., E ij. Each processor can then compute the global ECOT to be the minimum of the ECOT messages from all the
processors. Note that the ECOT message needs to carry the phase identifier with it. But, since a
new phase does not start until the previous one has finished, a boolean flag is sufficient to store the
phase identifier.
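Schematically, the per-phase computation can be written as follows; reduce_min_over_processors() stands in for whatever exchange of ECOT messages the runtime performs and is not an actual Maisie call.

/* Per-processor state for one ECOT phase (illustrative). */
typedef struct {
    double ecot;      /* minimum ECOT over the entities on this processor     */
    double min_sent;  /* smallest timestamp sent by this processor this phase */
} PhaseState;

double reduce_min_over_processors(double local);  /* assumed runtime service */

/* Returns the global ECOT for the phase; once the phase completes, every
 * entity's EIT may be advanced to this value. */
double global_ecot(const PhaseState *s)
{
    double e_local = (s->ecot < s->min_sent) ? s->ecot : s->min_sent;
    return reduce_min_over_processors(e_local);   /* min of E ij over all j */
}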
4.5 ISP
The Ideal Simulation Protocol (ISP) has been implemented as one of the available simulation protocols
in the Maisie environment. The implementation uses the same data structures, entity scheduling
strategy, message sending and receiving schemes as the conservative algorithms discussed earlier.
The primary overhead in the execution of a model with ISP is the time for reading and matching
the event trace. The implementation minimizes this overhead as follows: The entire trace is read
into an array at the beginning of the simulation in order to exclude the time of reading the trace
file during execution. To reduce the matching time, the trace is stored simply as a sequence of the
unique numeric identifiers that are assigned to each message by the runtime system to ensure FIFO
operation of the input queue. Thus the only 'synchronization' overhead in executing a simulation
model with ISP is the time required to execute a bounded number of numeric comparison operations
for each incoming message. This cost is clearly negligible when compared with other overheads of
parallel simulation, and the resulting time is an excellent lower bound on the execution time for a
parallel simulation.
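A minimal sketch of the per-message check, assuming (as described above) that the runtime tags each message with a numeric identifier and that the identifiers recorded in the reference run have been loaded into an array; the structure and function names are illustrative.

typedef struct {
    const long *trace;   /* identifiers of the messages accepted in the reference run */
    long        pos;     /* index of the next expected identifier                     */
} IspTrace;

/* Under ISP a message is safe exactly when it is the next one the entity
 * accepted in the reference execution; no synchronization protocol is run. */
int isp_message_safe(IspTrace *t, long msg_id)
{
    if (t->trace[t->pos] != msg_id)
        return 0;            /* an earlier message has not arrived yet: wait */
    t->pos++;                /* consume one trace entry                      */
    return 1;
}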
5 Experiments and Results
The programs used for the experiments were written in Maisie and contain additional directives for
parallel simulation, i.e. (a) explicit assignment of Maisie entities to processors, (b) code to create
the source and destination sets for each entity, and (c) specification of lookaheads. All experimental
measurements were taken on a SPARCserver 1000 with 8 SuperSPARC processors, and 512M of
shared main memory.
We selected the Closed Queuing Networks (CQN), a widely used benchmark, to evaluate parallel
simulation algorithms. This benchmark was used because it is easily reproducible and it allows
the communication topology and lookahead, the two primary determinants of performance of the
conservative protocols, to be modified in a controlled manner. The CQN model can be characterized
by the following parameters:
- The type of servers and classes of jobs: Two separate versions of this model were used: CQNF, where each server is FIFO, and CQNP, where each server is a priority server; the results for the CQNP experiments are presented in Section 5.3.2.
Figure 3: A Closed Queuing Network (N = 4), showing the switches, servers, and tandem queues.
- The number of switches and tandem queues (N) and the number of servers in each tandem queue (Q).
- The number of jobs that are initially assigned to each switch: J/N.
- Service time: The service time for a job is sampled from a shifted-exponential distribution with a mean value of S time units; the shift guarantees a minimum service time of one time unit.
- Topology: When a job arrives at a switch, the switch routes the job to any one of the N tandem queues with equal probability after 1 time unit. After servicing a job, each server forwards the job to the next server in the queue; the last server in the queue must send the job back to its unique associated switch.
- The total simulation time: H.
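The parameters above, together with the derived quantities used in the rest of the section, can be collected in a small C structure (illustrative only; the Maisie model passes them as entity parameters):

typedef struct {
    int    N;   /* number of switches (= number of tandem queues) */
    int    Q;   /* servers per tandem queue                       */
    int    J;   /* total number of jobs                           */
    double S;   /* mean of the shifted-exponential service time   */
    double H;   /* total simulated time                           */
} CqnParams;

static int total_entities(const CqnParams *p)  { return p->N * (p->Q + 1); }
static int jobs_per_switch(const CqnParams *p) { return p->J / p->N; }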
Each switch and server is programmed as a separate entity. Thus, the CQN model has a total of
N (Q + 1) entities. The entities corresponding to a switch and its associated tandem queue servers
are always mapped to the same processor. The shifted-exponential distribution is chosen as the
service time in our experiments so that the minimum lookahead for every entity is non-zero, thus
preventing a potential deadlock situation with the null message algorithm. The topology of the
network, for N = 4, is shown in Figure 3. The performance of parallel algorithms is typically presented by computing speedup, where

Speedup(P) = (Sequential execution time) / (Parallel execution time on P processors).
However, as we will describe in the next section, the sequential execution time of a model using
the standard Global Event List (GEL) algorithm[13] does not provide the lower bound for single
processor execution and can in fact be slower than a parallel conservative algorithm. For this reason,
the speedup metric, as defined above might show super-linear behavior for some models or become
saturated if the model does not have enough parallelism. In this paper, we suggest computing
efficiency using ISP which does provide a tight lower bound, and then compare the conservative
protocols with respect to their efficiency relative to the execution time with ISP. The efficiency of a
protocol S on architecture A is defined as follows:
Efficiency = (Execution time using ISP on A) / (Execution time using Protocol S on A).
The efficiency of each protocol thus exposes the fraction of the execution time that is devoted to synchronization-related factors, i.e., the protocol-specific operations, in the execution of a model: that fraction is one minus the efficiency.
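Both metrics reduce to ratios of measured execution times; the trivial helpers below make this explicit (names are illustrative).

/* Speedup of a P-processor run relative to the sequential execution. */
double speedup(double t_sequential, double t_parallel)
{
    return t_sequential / t_parallel;
}

/* ISP-relative efficiency of protocol S on architecture A: the fraction of
 * the execution time not attributable to protocol-specific overheads.      */
double efficiency(double t_isp, double t_protocol)
{
    return t_isp / t_protocol;
}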
We begin our experiments (section 5.1) with comparing the execution time of each of the five
protocols: GEL(global event list), NULL(null message), COND(conditional event), ANM, and ISP
on a single processor. This configuration is instructive because it identifies the unique set of 'sequen-
tial' overheads incurred by each protocol on one processor. In subsequent sections, we investigate
the impact of modifications in various model characteristics like model connectivity (section 5.2),
lookahead properties (section 5.3), and computation granularity (section 5.4).
5.1 Simulation of a CQNF Model on One Processor
Figure 4 shows the execution times for a CQNF model with H = 100,000 on one processor of the SPARCserver 1000. As seen in the figure, the execution times of
the protocols differ significantly even on one processor. The computation granularity associated with
each event in our model is very small. This makes the protocol-specific overheads associated with
an event relatively large, allowing us to clearly compare these overheads for the various protocols,
which is the primary purpose of the study.
In
Figure
4, ISP has the best execution time which can be regarded as a very close approximation
of the pure computation cost for the model. We first compare the ISP and GEL protocols. The major
difference between the two protocols is the entity scheduling strategy. As described in Section 4.5,
ISP employs the same entity scheduling scheme as the conservative protocols, where each entity
removes and processes as many safe messages from its input queue as possible, before being swapped
out. For ISP, this means that once an entity has been scheduled for execution, it is swapped out
only when the next message in its local trace is not available in the input queue for that entity.
In contrast, the GEL algorithm schedules entities in strict order of message timestamps, which can
cause numerous context switches. Table 1 shows the total number of context switches for each
protocol for this configuration. The number of context switches for ISP is approximately 15 times
less than that required by the GEL algorithm.

Figure 4: Execution times of a CQNF model on one processor (protocols ISP, GEL, NULL, COND, and ANM).

Table 1: Number of context switches, null messages, and global ECOT computations in one-processor executions [x 1,000]

                      ISP    GEL   NULL   COND   ANM
Context Switches       85  1,257    393  1,149   914
Null Messages                      1,732
Global Computations                          49

The number of regular messages processed in the simulation is 1,193 [x 1,000].

Considering that the total messages processed during
the execution was about 1,200,000 messages, the GEL protocol does almost one context switch
per message. (Footnote 2: since the Maisie runtime system has a main thread which does not correspond to any entity, the number of context switches can be larger than the number of messages.) ISP and conservative protocols also use a more efficient data structure to order the
entities than the GEL algorithm. Whereas GEL must use an ordered data structure like the splay
tree to sort entities in the order of the earliest timestamps of messages in their input queues, the
other protocols use a simple unordered list because the entities are scheduled on the basis of safe
messages. Thus, the performance difference between the ISP and GEL protocols is primarily due to
the differences in their costs for context switching and entity queue management.
We next compare the performance of ISP and the three conservative protocols. First, ISP
has fewer context switches than the other protocols because the number of messages that can be
processed by an entity using the conservative protocols is bounded by its EIT, whereas ISP has no
such upper bound. The differences in the number of context switches among the three conservative
protocols is due to differences in their computation of EIT as well as due to the additional context
switches that may be required to process synchronization messages. For instance, the COND protocol has three times as many context switches as the NULL protocol. This is because the COND
protocol has to switch out all the entities on every global ECOT computation, which globally updates
the EIT value of each entity, while the NULL protocol does not require a context switch to process
every null message. The CQNF model used in this experiment consists of 128 entities, and each
switch entity has a lookahead of 1 time unit, thus the window size between two global ECOT
computations is small. As shown in Table 1, the COND protocol had approximately 49,000 global
computations, which averages to one global computation for every advance of 2 units in simulation
time.
Even though the NULL protocol uses few context switches, its execution time is significantly
larger than that of ISP because of the large number of null messages (1,732,000 from Table 1) that must
be processed and the resulting updates to EIT. Although the ANM protocol incurs overhead due to
both null messages and global ECOT computations, both the number of null messages and global
ECOT computations are significantly less than those for the NULL and COND protocols. As a
result, its performance lies somewhere between that of the NULL and COND protocols.
5.2 Communication Topology
In Section 4.1, we introduced Maisie constructs to alter the default connectivity of each entity. As
the next experiment, we examine the impact of communication topology on the performance of
parallel conservative simulations.
For each protocol, the CQNF model was executed with different values of N (N = 16, 32, and 64), while keeping N (Q + 1) = 128 constant among the three configurations. The other parameters for the model were kept the same across the configurations. The
total number of entities (N (Q + 1)) is kept constant among the different configurations to also keep
the size of the model constant. As N increases, the connectivity of the model increases dramatically
because each of the N switch entities has N outgoing channels while each of the Q server entities in
each tandem queue has only one outgoing channel. For instance, the total number of links is 368,
1120, and 4160 when N = 16, 32, and 64, respectively. We first measured the speedup with the ISP
protocol, where the speedup was calculated with respect to the one processor ISP execution. As
seen in Figure 5, the speedup for ISP is relatively independent of N , which implies that the inherent
parallelism available in each of the three configurations is not affected significantly by the number
of links. (Footnote 3: the small reduction in speedup with N = 64 is due to the dramatic increase in the number of context switches for this configuration, as seen from Table 2.)
Figures 6 to 8 show the efficiency of the three conservative protocols with N in {16, 32, 64}.

Figure 5: CQNF: Speedup achieved by ISP.
Figure 6: CQNF: Efficiency of each protocol (N = 16).
Figure 7: CQNF: Efficiency of each protocol (N = 32).
Figure 8: CQNF: Efficiency of each protocol (N = 64).
Figure 9: CQNF: NMR in the NULL protocol.
Figure 10: CQNF: Window size in the COND protocol.

As
seen from the three graphs, the NULL and ANM protocols have similar efficiencies except for the
sequential case, while the COND protocol performs quite differently. We examine the performance
of each protocol in the following subsections.
5.2.1 Null Message Protocol
For the NULL protocol, the EIT value for an entity is calculated using EOT values from its incoming
channels, i.e., from entities in its source-set. When the number of incoming channels to an entity
is small, its EIT value can be advanced by a few null messages. Thus, the efficiency of the NULL
protocol is relatively high in case of low connectivity, as shown in Figure 6. However, with high
connectivity, the protocol requires a large number of null messages and the performance degrades
significantly. The null message ratio (NMR) is a commonly used performance metric for this protocol. It is defined as the ratio of the number of null messages to the number of regular messages processed in the model. As seen in Figure 9, the NMR is low for N = 16 and N = 32, and increases substantially for N = 64, when the connectivity of the model becomes dense. As an entity must send null
messages even to other entities on the same processor, NMR for each configuration does not change
with the number of processors. However, the efficiency decreases as the number of processors is
increased because null messages to remote processors are more expensive than the processor-local
null messages.
5.2.2 Conditional Event Protocol
Unlike the other conservative protocols, the performance of the COND protocol is not affected by the communication topology of the model because the cost of the global ECOT computation is
independent of the topology. Thus, in Figures 6 to 8, the efficiency of the COND protocol does not
degrade; rather it improves as N goes from 32 to 64. The reason for this is that relative to the
COND protocol, the performance of ISP degrades more causing the relative efficiency of the COND
protocol to improve! This effect can be explained by looking at Table 2, which presents the number
of context switches incurred in the one processor execution for each of the three model configurations
with each protocol. As seen in the table, this measure increases dramatically for ISP as N increases
from 32 to 64, while the COND protocol already has roughly one context switch per message for the smallest configuration, and this does not change significantly as N increases.
The average window size is another commonly used metric for this protocol, which is defined as:

W = (Total simulated time H) / (Number of global synchronizations).
Figure 10 shows the average window size for each of the three experiments with the COND protocol.

Table 2: Number of context switches in one-processor executions [x 1,000].
Regular Messages   ISP   GEL   NULL   COND   ANM
As seen from the figure, the average window size in the parallel implementations is close to 1, which
implies that the protocol requires one global computation for every simulation clock tick. The one
processor execution has a window size of 2. The smaller window size for the parallel implementation
is caused by the need to flush messages that may be in transit while computing the global ECOT
as described in Section 4.4. However, the value of the window size (W ) does not appear to have a
strong correlation with the performance of the COND protocol: first, although W drops by 50% as
we go from 1 to 2 processors, the efficiency does not drop as dramatically. Second, although the
efficiency of the protocol continues to decrease as the number of processors (P) increases in Figures 6 to 8, Figure 10 shows that W does not change dramatically for P > 2.
The performance degradation of this protocol with number of processors may be explained as
follows: first, as each processor broadcasts its ECOT messages, each global ECOT computation requires P (P - 1) messages in a P-processor execution (P > 1). Second, the idle time spent by an LP while waiting for the completion of each global ECOT computation also grows significantly as P increases.
5.2.3 ANM Protocol
The efficiency of the ANM protocol shown in Figures 6 to 8 is close to that of the NULL protocol except for the one-processor execution. Figures 11 and 12 show the NMR and the average window
size respectively. The ANM protocol has almost the same NMR values as the NULL protocol. The
average window sizes vary widely for all the models although the window size becomes narrower as
N increases. This occurs because of the following reasons:
- ECOT computation is initiated only when no entity on a processor has any messages to process. Since every processor must send one ECOT message for each global ECOT computation, as N increases the EIT updates by ECOT messages can easily be deferred unless every processor has few entities to schedule.
- Even if a global ECOT calculation is unhindered, it may have no effect if null messages have
already advanced EIT values. In such cases, the number of null messages is not decreased by
the global ECOT computations, and the ANM protocol will have higher overheads than the
NULL protocol.
Therefore, the efficiencies of the ANM protocol in Figures 6 to 8 are close to those of the NULL protocol except for the one-processor executions. In the models with N = 16 and N = 32, ECOT messages are rarely sent out since null messages can efficiently advance the EIT value of each entity, and the situation where all the processors have no entity to schedule is unlikely. However, for N = 64, the performance of the ANM protocol is expected to be closer to that of the COND protocol since the
EIT updates by ECOT messages are more efficient than the updates by null messages. As seen
in Figure 12, ECOT messages are in fact sent more frequently (the average window size is approximately 7) when N = 64. The inefficiency in this case arises because of the large number
of null messages sent by each entity; hence the duration when the processor would be idle in the
COND protocol, is filled up with the processing time of null messages. This can be seen by comparing
Figure
9 and 11 in which both protocols have almost the same NMR. Considering that the ANM
protocol has more overhead caused by the ECOT computation, the reason that the efficiency is the
same as the NULL protocol is that the ECOT messages actually improve the advancement of EIT
a little, but the improvement makes up only for the overhead of ECOT computations.
In the experiments in this section, the global ECOT computations in the ANM protocol do not have sufficient impact to improve its performance. However, all the models in these experiments
have good lookahead values, thus null messages can efficiently update EIT values of the entities.
In the next section, we examine the cases where the simulation models have poor lookahead and
compare the performance of the ANM and NULL protocols in this case.
5.3 Lookahead
5.3.1 No Precomputation of Service Times
It is well known that the performance of conservative protocols depends largely on the lookahead
value of the model. The lookahead for a stochastic server can be improved by precomputing service
times[14]. In this section, we examine the effects of lookahead in the CQNF models by not precomputing
the service times; instead the lookahead is set to one time unit, which is the lower bound of
the service time generated by the shifted exponential distribution. With this change, all the entities
have the lookahead value of one time unit, which is the worst lookahead value of our experiments.
Figures
13 to 15 show the effect of this change where we plot p, the performance degradation that
results from this change as a function of the number of processors. Note that the performance of ISP
cannot be affected by the lookahead value because it does not use lookahead for the simulation run.
Thus, the ISP performance is identical to the result shown in Figure 5. The fractional performance
degradation is calculated as follows:

p = (Execution time without precomputation of service times) / (Execution time with precomputation of service times) - 1.
Interestingly, the figures show that the performance degradation is negative in some cases, which
indicates that some executions actually become faster when lookahead is reduced. For the NULL and
ANM protocols, this is attributed to the choice of the scheduling strategy discussed in Section 4.2;
although this strategy reduces context switching overheads, it may also cause starvation of other
entities. An entity that is scheduled for a long period blocks the progress of other entities on the
same processor, which may also block other entities on remote processors and thus increase the
overall idle time. Note that the performance improvement with poorer lookahead occurs primarily
in the configuration with N = 16, where the communication topology is not dense, and the number
of jobs available at each server is typically greater than 1, which may lead to long scheduling cycles.
For the COND protocol, the duration of the window size determines how long each entity can
be scheduled before a context switch. A longer window implies fewer ECOT computations which
may also lead to increased blocking of other entities. Thus, in some cases, poorer lookahead may
force frequent ECOT messages and thus improve performance. As this effect is not related to
the communication topology, the relative performance of the COND protocol with and without
lookahead is almost the same for all three values of N (Figures 13 to 15).
The performance of the NULL and ANM protocols degrade as the connectivity of the model
becomes dense, while the performance of the COND protocol does not change significantly as discussed
above. In particular, for the performance of the COND protocol does not degrade,
although it already outperforms the other two protocols in the experiment with better lookahead (Figure 8). This indicates that the COND protocol has a significant advantage over the null message
based protocols in the case where the simulation model has high connectivity or poor lookahead.
Another interesting observation is that the performance of the ANM protocol does not degrade as
much as that of the NULL protocol in Figure 15. This behavior may be understood from Figure 16
which shows the relative increase in the number of null messages and global ECOT computations
between the poor lookahead experiment considered in this section and the good lookahead scenario of
Section 5.2. As seen in Figure 16, for the model with poor lookahead, the ANM protocol suppresses
the increase of null messages by increasing the number of global ECOT computations. This result
shows the capability of the ANM protocol to implicitly switch the EIT computation base to ECOT
messages when the EIT values cannot be calculated efficiently with null messages.
Figure 11: CQNF: NMR in the ANM protocol.
Figure 12: CQNF: Window size in the ANM protocol.
Figure 13: CQNF: Performance with low lookahead (N = 16).
Figure 14: CQNF: Performance with low lookahead (N = 32).
Figure 15: CQNF: Performance with low lookahead (N = 64).
Figure 16: Increase in null messages and global computations with low lookahead, relative to the CQNF runs with good lookahead (global computations in COND and ANM; null messages in NULL and ANM).
5.3.2 CQN with Priority Servers (CQNP)
In Section 5.3.1, we examined the impact of lookahead by explicitly reducing the lookahead value for
each server entity. In this section, we vary the lookahead by using priority servers rather than FIFO
servers in the tandem queues. We assume that each job can be in one of two priority classes: low or
high, where a low priority job can be preempted by a high priority job. In this case, precomputation
of service time cannot always improve the lookahead because, when a low priority job is in service, it
may be interrupted at any time so the lookahead can at most be the remaining service time for the
low priority job. Precomputed service times are useful only when the server is idle. The experiments
described in this section investigate the impact of this variability in the lookahead of the priority
server on the performance of each of the three protocols.
Figure
17 shows the speedup achieved by ISP in the CQN models with priority servers (CQNP),
where 25% of the jobs (256) are assumed to have a high priority. In comparison with Figure 5,
ISP achieves slightly better performance because the events in priority servers have slightly higher
computation granularity than those in the FCFS servers. The impact of computation granularity
on protocol performance is explored further in the next section. Since the ISP performance does not
depend on the lookahead of the model, no performance degradation was expected or observed for
this protocol.
Figures 18 to 20 show the efficiency of each protocol for N in {16, 32, 64}. The performance of the
COND protocol with the priority servers is very similar to that with the FCFS servers (Figures 6 to
8), while the performance of the other two protocols is considerably worse with the priority servers.
Even though the connectivity is the same, the COND protocol outperforms the other two protocols
for which shows that the COND protocol is not only connectivity insensitive but also less
lookahead dependent than the null message based protocols.
5.4 Computation Granularity
One of the good characteristics of the CQN models for the evaluation of parallel simulation protocols
is their fine computation granularity. Computation granularity is a protocol-independent factor,
but it changes the breakdown of the costs required for different operations. For a model with high
computation granularity, the percentage contribution of the protocol-dependent factors to the execution
time will be sufficiently small so as to produce relatively high efficiency for all the simulation
protocols. We examine this by inserting a synthetic computation fragment containing 1,000,000
operations which is executed for every incoming message at each server. Figure 21 shows the impact
of the computation granularity on speedup using ISP for the CQNF model with N = 32.

Figure 17: CQNP: Speedup achieved by ISP.
Figure 18: CQNP: Efficiency of each protocol (N = 16).
Figure 19: CQNP: Efficiency of each protocol (N = 32).
Figure 20: CQNP: Efficiency of each protocol (N = 64).
Figure 21: CQNF: Speedup with synthetic computation fragment (original model vs. model with the computation fragment).
Figure 22: CQNF: Efficiency of each protocol with synthetic computation fragment.

The
simulation horizon H is reduced to 1,000 as the sequential execution time for the model with the
dummy computation fragment is 1,200 times longer than the original model. In the figure, the
performance improvement with ISP is close to linear. The performance degrades a little when the
number of processors is 5, 6 or 7 because the decomposition of the model among the processors leads
to an unbalanced workload, which is one of the protocol-independent factors in parallel simulation.
The improvement in speedup with larger computation granularity can be attributed to an improvement
in the ratio of the event processing time to the other protocol-independent factors such as
communication latency and context switch times.
Figure
22 shows the impact of increasing the computation granularity on the efficiency of the
three conservative protocols. Clearly, the additional computation diminishes the protocol-dependent
overheads in the NULL and the ANM protocols, and the performance of these two protocols is very
close to that of ISP. The performance of the COND protocol, however, gets worse as the number
of processors increases, and the efficiency is only 54% with 8 processors. As described in 5.2.2, this
degradation is due to the idle time that is spent by a processor while it is waiting for the completion
of each global computation. This implies that even if the COND protocol employs a very efficient
method in global ECOT computation, and the overhead for it becomes close to zero, the protocol
can only achieve half the speedup of the ISP execution with 8 processors because of the idle time
that must be spent waiting for the global ECOT computation to complete. This result shows the
inherent synchronousness of the COND protocol, although our implementation of the algorithm is
asynchronous.
6 Related Work
Prior work on evaluating the performance of discrete-event models with conservative protocols has used
analytical and simulation models, as well as experimental studies.
Performance of the null message deadlock avoidance algorithm [6] using queuing networks and
synthetic benchmarks has been studied by Fujimoto [10]. Reed et al. [16] have studied the performance
of the deadlock avoidance algorithm and the deadlock detection and recovery algorithm on a
shared memory architecture. Chandy and Sherman [7] describe the conditional event algorithm and
study its performance using queuing networks. Their implementation of the conditional event algorithm
is synchronous (i.e. all LPs carry out local computations followed by a global computation),
and the performance studies carried out in the paper assume a network with a very high number of
jobs. The paper does not compare the performance of the conditional event protocol with others.
Nicol [15] describes the overheads of a synchronous algorithm similar to the conditional event algorithm
studied in this paper. A mathematical model is constructed to quantify the overhead due to
various protocol-specific factors. However, for analytical tractability, the paper ignores some costs
of protocol-independent operations or simplifies the communication topology of the physical
system. Thus, the results are useful for qualitative comparisons but do not provide the tight upper
bound on potential performance that can be derived using ISP.
The effect of lookahead on the performance of conservative protocols was studied by Fujimoto
[10]. Nicol [14] introduced the idea of precomputing the service time to improve the lookahead.
Cota and Sargent [8] have described the use of graphical representation of a process in automatically
computing its lookahead. These performance studies are carried out with simulation models specific
to their experiments.
The performance study presented in this paper differs in two important respects: first, we developed
and used a new metric - efficiency with respect to the Ideal Simulation protocol, which allows
the protocol-specific overheads to be separated cleanly from other overheads that are not directly
contributed by a simulation protocol. Second, the implementation of each of the algorithms was
separated from the model which allows the performance comparison to be more consistent than if
the algorithm is implemented directly in the simulation model.
7 Conclusions
An important goal of parallel simulation research is to facilitate its use by the discrete-event simulation
community. Maisie is a simulation language that separates the simulation model from the
specific algorithm (sequential or parallel) that is used to execute the model. Transparent sequential
and optimistic implementations of Maisie have been developed and described previously [2]. This
paper studied the performance of a variety of conservative algorithms that have been implemented in
Maisie. The three algorithms that were studied include the null message algorithm, the conditional
event algorithm, and a new algorithm called the accelerated null message algorithm that combines
the preceding approaches. Maisie models were developed for standard queuing network benchmarks.
Various configurations of the model were executed using the three different algorithms. The results
of the performance study may be summarized as follows:
ffl The Ideal Simulation Protocol (ISP) provides a suitable basis to compare the performance of
conservative protocols as it clearly separates the protocol dependent and independent factors
that affect the performance of a given synchronization protocol. It gives a realistic lower bound
on the execution time of a simulation model for a given partitioning and architecture. Existing
metrics like speedup and throughput indicate what the performance is but do not provide
additional insight into how it could be improved. Other metrics like NMR (Null Message
Ratio) and the window size are useful for the characterization of the protocol dependent
overheads for the null message and conditional event protocols respectively, but they cannot
be used among the diverse set of simulation protocols. ISP, on the other hand, can be used
by an analyst to compare the protocol-dependent overheads even between conservative and
optimistic protocols.
ffl The null message algorithm exploited good parallelism in models with dense connectivity.
However, the performance of this algorithm is very sensitive to the communication topology
and lookahead characteristics of the model, and thus is not appropriate for models which have
high connectivity or low lookahead values.
ffl Because of its insensitivity to the communication topology of the model, the conditional event
protocol outperforms the null message based protocols when the LPs are highly coupled.
Also, it performs very well for models with poor lookahead. Although the algorithmic syn-
chronousness degrades its performance in parallel executions with a high number of processors,
the performance of this protocol is very stable throughout our experiments. With faster global
ECOT computations, it may achieve a better performance. Since the conditional event algorithm
also has the good characteristic of not requiring positive lookahead values, this algorithm
may also be used to simulate models which have zero lookahead cycles.
ffl The performance of the ANM protocol is between those of the null message and the conditional
event protocols for one processor execution, as expected. The performance in parallel
executions, however, is very close to that of the null message protocol since small EIT advancements
by the null messages defer the global ECOT computation. Thus, it performs worse than
the conditional event protocol when the simulation model has high connectivity or very poor
lookahead. However, compared to the null message protocol in such cases, the ANM protocol
prevents an explosion in the number of null messages with frequent global ECOT computa-
tions, and performs better than the null message protocol. This shows that the protocol has
the capability for adaptively and implicitly switching its execution mode from the null message
based synchronization to the conditional event based one for better performance. Since this
algorithm also inherits the characteristic of the conditional event algorithm that can simulate
the models with zero lookahead cycles, it is best suited for models whose properties are
unknown.
Future work includes the use of ISP for performance evaluation of various algorithms such as optimistic
algorithms and synchronous algorithms together with the algorithms examined in this paper,
using additional applications from different disciplines.
Acknowledgments
The authors wish to thank Professor Iyer and the three reviewers for their valuable comments
and suggestions that resulted in significant improvements to the presentation. We also acknowledge
the assistance of past and present team members in the Parallel Computing Laboratory for
their role in developing the Maisie software. Maisie may be obtained by anonymous ftp from
http://may.cs.ucla.edu/projects/maisie.
--R
Stability of event synchronisation in distributed discrete event simulation
A unifying framework for distributed simulations.
Maisie: A language for design of efficient discrete-event simulations
Language support for parallel discrete-event simulations
Global virtual time algorithms.
Asynchronous distributed simulation via a sequence of parallel computations.
The conditional event approach to distributed simulation.
A framework for automatic lookahead computation in conservative distributed simulation.
Parallel discrete event simulation.
Performance measurements of distributed simulation strategies.
State of the art in parallel simulation.
Understanding the Limits of Optimistic and Conservative Parallel Simulation.
Distributed discrete-event simulation
Parallel discrete event simulation of fcfs stochastic queueing networks.
The cost of conservative synchronization in parallel discrete event simulations.
Parallel discrete event simulation: A shared memory approach.
Variants of the Chandy-Misra-Bryant distributed simulation algorithm
--TR
--CTR
Alfred Park , Richard M. Fujimoto , Kalyan S. Perumalla, Conservative synchronization of large-scale network simulations, Proceedings of the eighteenth workshop on Parallel and distributed simulation, May 16-19, 2004, Kufstein, Austria
Jinsheng Xu , Moon Jung Chung, Predicting the Performance of Synchronous Discrete Event Simulation, IEEE Transactions on Parallel and Distributed Systems, v.15 n.12, p.1130-1137, December 2004
Moo-Kyoung Chung , Chong-Min Kyung, Improving Lookahead in Parallel Multiprocessor Simulation Using Dynamic Execution Path Prediction, Proceedings of the 20th Workshop on Principles of Advanced and Distributed Simulation, p.11-18, May 24-26, 2006 | lookahead;parallel simulation languages;discrete-event simulation;algorithmic efficiency;parallel and distributed simulation;conservative algorithms |
351138 | Modeling and Performance Comparison of Reliability Strategies for Distributed Video Servers. | AbstractLarge scale video servers are typically based on disk arrays that comprise multiple nodes and many hard disks. Due to the large number of components, disk arrays are susceptible to disk and node failures that can affect the server reliability. Therefore, fault tolerance must be already addressed in the design of the video server. For fault tolerance, we consider parity-based as well as mirroring-based techniques with various distribution granularities of the redundant data. We identify several reliability schemes and compare them in terms of the server reliability and per stream cost. To compute the server reliability, we use continuous time Markov chains that are evaluated using the SHARPE software package. Our study covers independent disk failures and dependent component failures. We propose a new mirroring scheme called Grouped One-to-One scheme that achieves the highest reliability among all schemes considered. The results of this paper indicate that dividing the server into independent groups achieves the best compromise between the server reliability and the cost per stream. We further find that the smaller the group size, the better the trade-off between a high server reliability and a low per stream cost. | Introduction
1.1 Server Design Issues
Many multimedia applications such as online news, interactive television, and video-on-demand require large
video servers that are capable of transmitting video data to thousands of users. In contrast to traditional file
systems, video servers are subject to real-time constraints that impact the storage, retrieval, and delivery of video
data. Furthermore, video servers must support very high disk bandwidth for data retrieval in order to serve a large
number of video streams simultaneously. The most attractive approach for implementing a video server relies on
disk arrays that (i) achieve high I/O performance and high storage capacity, (ii) can gradually grow in size, and
(iii) are very cost efficient. Unfortunately, large disk arrays are vulnerable to disk failures, which results in poor
reliability for the video server. A challenging task is therefore to design video servers that provide not only high
performance but also high reliability.
The video server considered in this paper is composed of many disk arrays, which are also referred to as server
nodes. Each server node comprises a set of magnetic disk drives as illustrated in Figure 1 and is directly attached
to the network. A video to store is divided into many blocks and the blocks are distributed among all disks and
nodes of the video server in a round robin fashion. All server nodes are identical, each node containing the same
number of disks D_n. The total number of server disks is then D = N · D_n, where N denotes the number of server nodes.
Figure 1: Video Server Architecture
A client that consumes a video from the server is connected to all server nodes and is served once every time
interval called the service round. A video block that is received at the client during service round i is consumed
during service round i+1. Further, since the transfer rate of a single disk is much higher than the stream playback
rate, each disk can serve many streams during one service round. In order to efficiently retrieve multiple blocks
from a disk during one service round, the video server applies the well known SCAN algorithm that optimizes
the seek overhead by reordering the service of the requests.
A very important decision in a video server design concerns the way data is distributed (striped) over its disks.
To avoid hot spots, each video is partitioned into video blocks that are distributed over all disks of the server
as already mentioned. Based on the way data are retrieved from disks, the literature distinguishes between the
Fine-Grained (FGS) and the Coarse-Grained (CGS) Striping algorithms. FGS retrieves for one stream (client)
multiple, typically small, blocks from many disks during a single service round. A typical example of FGS is
RAID3 as defined by Katz et al. [1]. Derivations of FGS include the streaming RAID of Tobagi et al. [2], the
staggered-group scheme of Muntz et al. [3], and the configuration planner scheme of Ghandeharizadeh et al. [4].
The main drawback of FGS is that it suffers from large buffer requirements that are proportional to the number
of disks in the server [5, 6]. CGS retrieves for one stream one large video block from a single disk during a
single service round. During the next service round, the next video block is retrieved from possibly a different
disk. RAID5 is the classical example of CGS. Oezden et al. [7, 6] showed that CGS results in higher throughput
than FGS for the same amount of resources (see also Vin et al. [8], Beadle et al. [9], and our contribution [5]).
Accordingly, we will adopt CGS to store original video data on the video server.
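The following small sketch illustrates the CGS placement and retrieval pattern described above; the function names and the example values are illustrative assumptions and the code is not taken from any of the cited systems.

def cgs_disk_of_block(block_index, num_disks):
    """Disk that stores a given video block under round-robin (CGS) striping."""
    return block_index % num_disks

def cgs_schedule(first_disk, num_rounds, num_disks):
    """Disks visited by one stream over consecutive service rounds."""
    return [(first_disk + r) % num_disks for r in range(num_rounds)]

print([cgs_disk_of_block(i, 6) for i in range(8)])             # [0, 1, 2, 3, 4, 5, 0, 1]
print(cgs_schedule(first_disk=2, num_rounds=5, num_disks=6))   # [2, 3, 4, 5, 0]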
The paper is organized as follows. Section 2 classifies reliability schemes based on (i) whether mirroring or parity
is used and (ii) the distribution granularity of redundant data. Related work is discussed at the end of the section.
Section 3 studies reliability modeling for the reliability schemes considered. The reliability modeling is based on
Continuous Time Markov Chains (CTMC) and concerns the case of independent disk failures as well as the case
of dependent component failures. We focus in section 4 on the server performance, where we compare the per
stream cost for the different reliability schemes. Section 5 emphasizes the trade-off between the server reliability
and the per stream cost and studies the effect of varying the group size on the server reliability and the per stream
cost. The results of section 5 lead to the conclusions of this paper, which are presented in section 6.
1.2 Our Contribution
In the context of video servers, reliability has been addressed previously either by applying parity-based techniques
(RAID2-6), e.g. [10, 8, 6, 11, 5], or by applying mirroring-based techniques (RAID1), e.g. [12, 13, 14].
However, all of the following aspects have not been considered together:
ffl Comparison of several parity-based and mirroring-based techniques under consideration of both, the video
server performance and cost issues. Our cost analysis concerns the storage and the buffering costs to
achieve a given server throughput.
ffl Reliability modeling based on the distribution granularity of redundant data in order to evaluate the server
reliability for each scheme considered. We will perform a detailed reliability modeling that incorporates
the case of independent disk failures and the case of dependent component failures.
ffl Performance, cost, and reliability trade-offs of different parity-based as well as mirroring-based techniques.
We will study the effect of varying the group size on the server reliability and the per stream cost and
determine the best value of the group size for each technique.
We use CGS to store/retrieve original video blocks, since it outperforms FGS in terms of the server throughput.
Adding fault-tolerance within a video server implies the storage of redundant (replicated/parity) blocks. What
remains to be decided is how redundant data is going to be stored/retrieved on/from the server. For mirroring,
we limit ourselves to interleaved declustering schemes [15], where original data and replicated data are spread
over all disks of the server. For parity, we limit ourselves to RAID5-like schemes, where parity blocks are evenly
distributed over all server disks. We retain these schemes since they distribute the load uniformly among all
server components. Additionally, we consider the case where only original blocks of a video are used during
normal operation mode. During disk failure mode, replicated/parity blocks are needed to reconstruct lost original
blocks that are stored on the failed component.
Mirroring (also called RAID1) consists in storing copies of the original data on the server disks. The
main disadvantage of mirroring schemes is the 100% storage overhead due to storing a complete copy of the
original data. Reliability based on parity consists in storing parity data in addition to original data (RAID2-6).
When a disk failure occurs, parity blocks together with the remaining original data are used to reconstruct failed
original blocks. The RAID5 [18] parity scheme requires one parity block for each (D \Gamma 1) original blocks,
where D is the total number of server disks. The (D \Gamma 1) original blocks and the one parity block constitute
a parity group. Although the additional storage volume is small for parity-based reliability, the server needs
additional resources in terms of I/O bandwidth or main memory when working in disk failure mode. In fact, in
the worst case, the whole parity group must be retrieved and temporarily kept in the buffer to reconstruct a lost
block. In [5], we have distinguished between the second read strategy and the buffering strategy. The second
read strategy doubles the I/O bandwidth requirement [19], whereas the buffering strategy increases the buffer
requirement as compared to the failure free mode. We will restrict our discussion to the buffering strategy, since
it achieves about twice as much throughput as the second read strategy [5, 20]. We will see that the buffering
strategy becomes more attractive in terms of server performance (lower buffer requirements) and also regarding
the server reliability when the size of the parity group decreases.
2.1 Classification of Reliability Schemes
Reliability schemes differ in the technique (parity/mirroring) and in the distribution granularity of redundant
data. We define below the distribution granularity of redundant data.
ffl For the parity technique, the distribution granularity of redundant data is determined by whether the parity
group comprises All (D) or Some (D c ) disks of the server. For the latter case, we assume that the server
is partitioned into independent groups and that all groups are the same size, each of them containing D c
disks. Let C denote the number of groups in the server (C = D / D_c).
ffl For the mirroring technique, the distribution granularity of redundant data has two different aspects:
- The first aspect concerns whether the original blocks of one disk are replicated on One, Some (D c ),
or All remaining (D \Gamma 1) disks of the server.
- The second aspect concerns how a single original block is replicated. Two ways are distinguished.
The first way replicates one original block entirely into one replicated block [13], which we call
entire block replication. The second way partitions one original block into many sub-blocks and
stores each sub-block on a different disk [14], which we call sub-block replication. We will show
later on that the distinction between entire block and sub-block replication is decisive in terms of
server performance (throughput and per stream cost).
Table 1 classifies mirroring and parity schemes based on their distribution granularity. We use the terms One-to-One, One-to-All, and One-to-Some to describe whether the distribution granularity of redundant data concerns one disk (mirroring), all disks (mirroring/parity), or some disks (mirroring/parity). For the One-to-One
scheme, only mirroring is possible, since One-to-One for parity would mean that the size of each parity group
equals 2, which consists in replicating each original block (mirroring). Hence the symbol "XXX" in the table.
              Mirroring                                            Parity
One-to-One    Chained declustering [21, 22]                        XXX
One-to-All    Entire block replication (doubly striped) [13, 23]   RAID5 with one group [18]
              Sub-block replication [15]
One-to-Some   Entire block replication                             RAID5 with many groups [3, 8, 7]
              Sub-block replication [14]

Table 1: Classification of the different reliability schemes
Table 1 distinguishes seven schemes. We will give for each of these schemes an example of the data layout. Thereby, we assume that the video server contains 6 disks and stores a single video. The stored video is assumed to be divided into exactly 30 blocks. All schemes store original blocks in the same order (round robin)
starting with disk 0 (Figures 2 and 3). What remains to describe is the storage of redundant data for each of the
schemes.
Figure 2 presents examples of the mirroring-based schemes. These schemes have in common that each disk is
partitioned into two separate parts, the first part storing original blocks and the second part storing replicated
blocks.
As illustrated in Figure 2(a), the One-to-One mirroring scheme (Mirr one ) simply replicates original blocks of
one disk onto another disk. If one disk fails, its load is entirely shifted to its replicated disk, which creates
load-imbalances within the server (the main drawback of the One-to-One scheme).
Figure 2: Mirroring-based schemes: (a) One-to-One organization Mirr_one; (b) One-to-All organization with entire block replication Mirr_all-entire; (c) One-to-All organization with sub-block replication Mirr_all-sub; (d) One-to-Some organization with entire block replication Mirr_some-entire; (e) One-to-Some organization with sub-block replication Mirr_some-sub.
In order to distribute the load of a failed disk evenly among the remaining disks of the server, the One-to-All
mirroring scheme is applied as shown in Figures 2(b) and 2(c). Figure 2(b) depicts entire block replication
(Mirr all\Gammaentire ) and Figure 2(c) depicts sub-block replication (Mirr all\Gammasub ). In Figure 2(c), we only show how
original blocks of disk 0 are replicated over disks 1, 2, 3, 4, and 5. If we look at Figures 2(b) and 2(c), we realize
that only a single disk failure is allowed. When two disk failures occur, the server cannot ensure the delivery of
all video data.
The One-to-Some mirroring scheme trades-off load-imbalances of the One-to-One mirroring scheme and the
low reliability of the One-to-All mirroring scheme. In fact, as shown in Figures 2(d) (entire block replication,
Mirr_some-entire) and 2(e) (sub-block replication, Mirr_some-sub), the server is divided into multiple (2) independent
groups. Each group locally employs the One-to-All mirroring scheme. Thus, original blocks of one disk
are replicated on the remaining disks of the group and therefore the load of a failed disk is distributed over all
remaining disks of the group. Further, since each group tolerates a single disk failure, the server may survive
multiple disk failures.
Figure 3 presents two layout examples of RAID5 that correspond to the One-to-All parity scheme Par_all (Figure 3(a)) and the One-to-Some parity scheme Par_some (Figure 3(b)). In Figure 3(a) the parity group size is 6, e.g. the 5 original blocks 16, 17, 18, 19, and 20 and the parity block P4 build a parity group. In Figure 3(b) the parity group size is 3, i.e. two original blocks and one parity block build a parity group.
Figure 3: Parity-based schemes: (a) One-to-All organization Par_all; (b) One-to-Some organization Par_some.
Looking at Figures 2 and 3, we observe that all One-to-All schemes (mirroring with entire block replication
(Mirr_all-entire), mirroring with sub-block replication (Mirr_all-sub), and RAID5 with one group (Par_all)) tolerate one disk failure. All these schemes therefore have the same server reliability. The same property holds
for all One-to-Some schemes (mirroring with entire block replication, mirroring with sub-block replication, and
RAID5 with C groups), since they all tolerate at most a single disk failure on each group. Consequently, it is
enough for our reliability study to consider the three schemes (classes): One-to-One, One-to-All, and One-to-
Some. However, for our performance study we will consider in section 4 the different schemes of Table 1.
2.2 Related Work
Based on RAID [1], reliability has been addressed previously in the literature either in a general context of file
storage, or for video server architectures. Mechanisms to ensure fault-tolerance by adding redundant information
to original content can be classified into parity-based schemes and mirroring-based schemes.
An extensive amount of work has been carried out in the context of parity-based reliability, see e.g. [24, 2, 17, 10,
25, 26, 7, 27, 28]. These papers ensure a reliable real-time data delivery, even when one or some components fail.
These papers differ in the way (i) they stripe data such as RAID3 (also called FGS) or RAID5 (also called CGS),
and (ii) allocate parity information within the server (dedicated, shared, declustered, randomly, sequentially, SID,
etc.), and (iii) the optimization goals (throughput, cost, buffer requirement, load-balancing, start-up latency for
new client requests, disk bandwidth utilization, etc.).
Video servers using mirroring have been proposed previously, see e.g. [12, 15, 23, 14, 29, 21, 22]. However no
reliability modeling has been carried out. Many mirroring schemes were compared by Merchant et al. [15], where
some striping strategies for replicated disk arrays were analyzed. Depending on the striping granularity of the
original and the replicated data, they distinguish between the uniform striping (CGS for original and replicated
data in dedicated or in chained form) and the dual striping (original data are striped using CGS and replicated
data are striped using FGS). However, their work is different from our study in many regards. First, the authors
assume that both copies are used during normal and failure operation mode. Second, the comparison of different
mirroring schemes is based on the mean response times and on the throughput achieved without taking into
account server reliability. Finally, the authors do not analyze the impact of varying the distribution granularity of
redundant data on server reliability and server performance.
In a general context of Redundant Arrays of Inexpensive Disks, Trivedi et al. [30] analyzed the reliability of
RAID1-5 and focused on the relationship between disk's MTTF and system reliability. Their study is based
on the assumption that a RAID system is partitioned into cold and hot disks, where only hot disks are active
during normal operation mode. In our case, we study reliability strategies for video servers that do not store
redundant data separately on dedicated disks, but distribute original and redundant data evenly among all server
disks. Gibson [31] uses continuous Markov models in his dissertation to evaluate the performance and reliability
of parity-based redundant disk arrays.
In the context of video servers, reliability modeling for parity-based schemes (RAID3, RAID5) has been performed
in [32] and RAID3 and RAID5 were compared using Markov reward models to calculate server avail-
ability. The results show that RAID5 is better than RAID3 in terms of the so-called performability (availability
combined with performance).
To the best of our knowledge, there is no previous work in the context of video servers that has compared several
mirroring and parity schemes with various distribution granularities in terms of the server reliability and the
server performance and costs.
3 Reliability Modeling
3.1 Motivation
We define the server reliability at time t as the probability that the video server is able to access all videos stored
on it provided that all server components are initially operational. The server survives as long as its working
components deliver any video requested to the clients. As we have already mentioned, the server reliability
depends on the distribution granularity of redundant data and is independent of whether mirroring or parity
is used. In fact, what counts for the server reliability is the number of disks/nodes that are allowed to fail
without causing the server to fail. As an example, the One-to-All mirroring scheme with entire block replication
(Mirr all\Gammaentire ) only tolerates a single disk failure. This is also the case for Mirr all\Gammasub and for Par all . These
three schemes have therefore the same server reliability. In light of this fact, we use the term One-to-All to
denote all of the three schemes for the purpose of our reliability study. Analogous to One-to-All, the term One-to-
Some will represent the three schemes Mirr some\Gammaentire , Mirr some\Gammasub , and Par some and the term One-to-One
denotes the One-to-One mirroring scheme Mirr one .
We use Continuous Time Markov Chains (CTMC) for the server reliability modeling [33]. A CTMC has a discrete state space and continuous time and is also referred to as a Markov process in [34, 35]. We assume that the time to failure of every component is exponentially distributed. The server MTTF_s and the server reliability R_s(t) have the following relationship (assuming that MTTF_s < ∞): MTTF_s = \int_0^\infty R_s(t) dt. The mean time to disk failure MTTF_d equals 1/λ_d and the mean time to disk repair MTTR_d equals 1/μ_d.
To build the state-space diagram [34] of the corresponding CTMC, we introduce the following parameters: s denotes the total number of states that the server can take; i, with 0 ≤ i ≤ s − 1, denotes a state in the Markov chain; and p_i(t) is the probability that the server is in state i at time t. We assume that the server is fully operational at time t_0, i.e., state 0 is the initial state. Additionally, state (s − 1) denotes the system failure state and is assumed to be an absorbing state (unlike previous work [32], where a Markov model was used to compare the performance of RAID3 and RAID5 and allowed the repair of an overall server failure). When the video server attains state (s − 1), it is assumed to stay there for an infinite time. This assumption allows us to concentrate our reliability study on the interval between the initial start-up time and the time at which the first server failure occurs. Thus p_0(t_0) = 1 and p_i(t_0) = 0 for i ≠ 0. The server reliability function R_s(t) can then be computed as [34]: R_s(t) = \sum_{i=0}^{s-2} p_i(t) = 1 − p_{s-1}(t).
We present in the remainder of this section the Markov models for the three schemes One-to-All, One-to-One, and
One-to-Some, assuming both, (i) independent disk failures (section 3.2) and (ii) dependent component failures
(section 3.3).
3.2 Reliability Modeling for Independent Disk Failures
3.2.1 The One-to-All Scheme
With the One-to-All scheme (Mirr all\Gammaentire , Mirr all\Gammasub , Par all ), data are lost if at least two disks have failed.
The corresponding state-space diagram is shown in Figure 4, where states 0, 1, and F denote respectively the
initial state, the one disk failure state, and the server failure state.
Figure 4: State-space diagram for the One-to-All Scheme.
The generator matrix Q_s of this CTMC is then:
Q_s = \begin{pmatrix} -D\lambda_d & D\lambda_d & 0 \\ \mu_d & -(\mu_d + (D-1)\lambda_d) & (D-1)\lambda_d \\ 0 & 0 & 0 \end{pmatrix}
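The paper evaluates such chains with SHARPE (see Section 3.4); as a complement, the following minimal Python sketch shows how this particular absorbing CTMC could be evaluated numerically. The failure and repair rates are illustrative assumptions and the scipy-based routine is not part of the paper.

import numpy as np
from scipy.linalg import expm

D = 100                       # total number of disks (example value)
lam_d = 1.0 / 100000.0        # disk failure rate lambda_d [1/hour] (illustrative)
mu_d = 1.0 / 72.0             # disk repair rate mu_d = 1/MTTR_d [1/hour]

# Generator matrix for states (0: all disks up, 1: one disk failed, F: server failed)
Q = np.array([
    [-D * lam_d,                   D * lam_d,              0.0],
    [      mu_d, -(mu_d + (D - 1) * lam_d),    (D - 1) * lam_d],
    [       0.0,                         0.0,              0.0],   # F is absorbing
])

def server_reliability(t_hours):
    """R_s(t) = 1 - p_F(t), starting from the fully operational state 0."""
    p_t = np.array([1.0, 0.0, 0.0]) @ expm(Q * t_hours)
    return 1.0 - p_t[-1]

# MTTF_s from the transient sub-generator: m = -Q_T^{-1} * 1, MTTF_s = m[0]
Q_T = Q[:2, :2]
mttf_s = -np.linalg.solve(Q_T, np.ones(2))[0]
print(server_reliability(3 * 365 * 24.0), mttf_s / 24.0)   # R_s after 3 years, MTTF_s in days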
3.2.2 The One-to-One Scheme
The One-to-One scheme (Mirr_one) is only relevant for mirroring. As the One-to-One scheme stores the original data of disk i on disk ((i + 1) mod D), the server fails if two consecutive disks fail. Depending on the location of the failed disks, the server therefore tolerates a number of disk failures that can take values between 1 and D/2. Thus, the number of disks that are allowed to fail without making the server fail cannot be known in advance, which makes the modeling of the One-to-One scheme complicated. Let the D server disks be numbered from 0 to D − 1 and assume that the server continues to operate after (k − 1) disk failures. Let P(k) be the probability that the server does not fail after the k-th disk failure. P(k) is also the probability that no disks that have failed are consecutive (adjacent). We calculate in Appendix A the probability P(k) for all k ∈ [2..D/2]. It is obvious that P(1) = 1.
Figure 5 shows the state-space diagram for Mirr_one. If the server is in state i (i ≥ 1) and one more disk fails, then the probability that the server fails (state F) equals 1 − P(i + 1) and the probability that the server continues operating equals P(i + 1).
Figure 5: State-space diagram for the One-to-One scheme.
The parameters of Figure 5 have the following values:
. The corresponding
generator matrix Q s of the CTMC above is:
(1)
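The quantity P(k) can also be checked numerically. The sketch below assumes that the k failed disks are uniformly distributed over the D disks arranged in a ring (disk i mirrored on disk (i + 1) mod D) and compares a standard closed-form count of non-adjacent subsets on a cycle with brute-force enumeration; it is an illustration only and does not reproduce the derivation of Appendix A.

from math import comb
from itertools import combinations

def P_closed_form(D, k):
    """Probability that k failed disks on a ring of D disks contain no adjacent pair."""
    if k == 0:
        return 1.0
    if k > D // 2:
        return 0.0
    return (D * comb(D - k, k) / (D - k)) / comb(D, k)

def P_brute_force(D, k):
    good = total = 0
    for failed in combinations(range(D), k):
        total += 1
        s = set(failed)
        # the server survives iff no failed disk has its successor (mirror) failed too
        if all((d + 1) % D not in s for d in failed):
            good += 1
    return good / total

print(P_closed_form(10, 3), P_brute_force(10, 3))   # the two values should agree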
3.2.3 One-to-Some Scheme
The One-to-Some scheme (mirroring/parity) builds independent groups. The server fails if one of its C groups
fails and the group failure distribution is assumed to be exponential. We first model the group reliability and then
derive the server reliability. Figures 6(a) and 6(b) show the state-space diagrams of one group and of the server
respectively. In Figure 6(b), the parameter C denotes the number of groups in the server and λ_c denotes the group failure rate.
The generator matrix Q_c for the CTMC of a single group is:
Q_c = \begin{pmatrix} -D_c\lambda_d & D_c\lambda_d & 0 \\ \mu_d & -(\mu_d + (D_c-1)\lambda_d) & (D_c-1)\lambda_d \\ 0 & 0 & 0 \end{pmatrix}
Figure 6: State-space diagrams for the One-to-Some Scheme: (a) state-space diagram of one group; (b) state-space diagram of the server.
The group reliability function R_c(t) at time t is R_c(t) = p_0(t) + p_1(t). The group mean lifetime MTTF_c is then derived from R_c(t). To calculate the overall server reliability function, we assume that the group failure distribution is exponential with rate λ_c = 1/MTTF_c. The server failure rate is thus C · λ_c = C/MTTF_c.
The server generator matrix Q_s of the CTMC of Figure 6(b) is therefore:
Q_s = \begin{pmatrix} -C\lambda_c & C\lambda_c \\ 0 & 0 \end{pmatrix}   (2)
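The two-level evaluation just described (solve the group CTMC, approximate the group lifetime as exponential, then combine the C independent groups) can be sketched as follows; the rates and sizes are example values and the code mirrors the structure of Figure 6 rather than any exact numbers from the paper.

import numpy as np

D_c, C = 10, 10                    # disks per group and number of groups (example)
lam_d = 1.0 / 100000.0             # disk failure rate [1/hour]
mu_d = 1.0 / 72.0                  # disk repair rate [1/hour]

# Transient part of the group generator (states 0 and 1; the group failure state is absorbing)
Q_T = np.array([
    [-D_c * lam_d,                          D_c * lam_d],
    [        mu_d, -(mu_d + (D_c - 1) * lam_d)],
])
mttf_c = -np.linalg.solve(Q_T, np.ones(2))[0]     # group mean lifetime MTTF_c in hours
lam_c = 1.0 / mttf_c                              # exponential approximation of the group lifetime

def server_reliability(t_hours):
    # the server fails as soon as any of the C independent groups fails
    return np.exp(-C * lam_c * t_hours)

print(mttf_c / 24.0, server_reliability(3 * 365 * 24.0))   # MTTF_c in days, R_s after 3 years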
3.3 Reliability Modeling for Dependent Component Failures
Dependent component failures mean that the failure of a single component of the server can affect other server
components. We recall that our server consists of a set of N server nodes, where each node contains a set of D_n disks. Disks that belong to the same node have common components such as the node's CPU, the disk
controller, the network interface, etc. When any of these components fails, all disks contained in the affected
node are unable to deliver video data and are therefore considered as failed. Consequently, a single disk does not
deliver video data anymore if itself fails or if one of the components of the node fails to which this disk belongs.
We present below the models for the different schemes for the case of dependent component failures. Similarly
to a disk failure, a node failure is assumed to be repairable. Node failures are exponentially distributed with failure rate λ_n = 1/MTTF_n, where MTTF_n is the mean life time of a node. Node repairs are exponentially distributed with repair rate μ_n = 1/MTTR_n, where MTTR_n is the mean repair time of a node.
For mirroring and parity schemes, we apply the so called Orthogonal RAID mechanism whenever groups must
be built. Orthogonal RAID was discussed in [31] and [17]. It is based on the following idea. Disks that belong to
the same group must belong to different nodes. Thus, the disks of a single group do not share any (common) node
hardware components. Orthogonal RAID has the property that the video server survives a complete node failure:
When one node fails, all its disks are considered as failed. Since these disks belong to different groups, each
group will experience at most one disk failure. Knowing that one group tolerates a single disk failure, all groups
will survive and therefore the server will continue operating. Until now, Orthogonal RAID was only applied in
the context of parity groups. We generalize the usage of Orthogonal RAID for both, mirroring and parity.
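The grouping constraint can be illustrated with a few lines of code; the particular assignment used below (disk j of node i goes to group j) is an assumption for the sketch, but it satisfies the Orthogonal RAID property described above.

def orthogonal_groups(num_nodes, disks_per_node):
    # group j contains disk j of every node, so no group holds two disks of one node
    return [[(node, j) for node in range(num_nodes)] for j in range(disks_per_node)]

def survives_node_failure(groups, failed_node):
    """True if no group loses more than one disk when a whole node fails."""
    return all(sum(1 for (node, _) in g if node == failed_node) <= 1 for g in groups)

groups = orthogonal_groups(num_nodes=3, disks_per_node=2)
print(groups)                                                     # two groups of three disks each
print(all(survives_node_failure(groups, n) for n in range(3)))    # True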
In order to distinguish between disk and node failure when building the models of the schemes considered, we
will present each state as a tuple [i; j], where i gives the number of disks failed and j gives the number of nodes
failed. The failure (absorbing) state is represented with the letter F as before.
3.3.1 The One-to-All Scheme
For the One-to-All schemes (Mirr all\Gammaentire , Mirr all\Gammasub , and Par all ), double disk failures are not allowed and
therefore a complete node failure causes the server to fail. Figure 7 shows the state-space diagram for the One-to-
All scheme for the case of dependent component failures. The states of the model denote respectively the initial
state ([0; 0]), the one disk failure state ([1; 0]), and the server failure state (F).
Figure 7: State-space diagram for the One-to-All scheme with dependent component failures.
The generator matrix Q_s is then:
Q_s = \begin{pmatrix} -(D\lambda_d + N\lambda_n) & D\lambda_d & N\lambda_n \\ \mu_d & -(\mu_d + (D-1)\lambda_d + N\lambda_n) & (D-1)\lambda_d + N\lambda_n \\ 0 & 0 & 0 \end{pmatrix}
3.3.2 The Grouped One-to-One Scheme
Considering dependent component failures, the One-to-One scheme as presented in Figure 2(a) would achieve a
very low server reliability since the server immediately fails if a single node fails. We propose in the following an
organization of the One-to-One scheme that tolerates a complete node failure and even N/2 node failures in the best
case. We call the new organization the Grouped One-to-One scheme. The Grouped One-to-One organization
keeps the initial property of the One-to-One scheme, which consists in completely replicating the original content
of one disk onto another disk. Further, the Grouped One-to-One organization divides the server into independent
groups, where disks belonging to the same group have their replica inside this group. The groups are built based
on the Orthogonal RAID principle and thus disks of the same group belong to different nodes as Figure 8 shows.
Figure 8 assumes one video containing 40 original blocks that is stored on a server made of four nodes, each containing two disks. Inside one group, up to D_c/2 disk failures can be tolerated, where D_c is the number of disks inside each group. The Grouped One-to-One scheme can therefore survive N/2 node failures in the best case (the server in Figure 8 continues operating even after nodes N1 and N2 fail). In order to
distribute the load of a failed node among possibly many and not only one of the surviving nodes, the Grouped
One-to-One scheme ensures that disks belonging to the same node have their replica on disks that do not belong
to the same node 1 .
1 Assume that node N1 has failed; then its load is shifted to node N3 (replicas of disk 0 are stored on disk 4) and to node N4 (replicas of disk 1 are stored on disk 7).
Figure 8: Grouped One-to-One scheme for a server with 4 nodes, each with 2 disks (D = 8).
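A possible placement rule in the spirit of Figure 8 is sketched below. The staggered shift used to pick the mirror disk is an illustrative assumption and need not coincide with the exact layout of the figure, but it keeps the two properties described above: the mirror of a disk lies inside the same orthogonal group on a different node, and the disks of one node are mirrored on different nodes.

def grouped_one_to_one(num_nodes, disks_per_node):
    # group g contains disk g of every node (orthogonal grouping)
    groups = [[(n, g) for n in range(num_nodes)] for g in range(disks_per_node)]
    mirror = {}
    for g, group in enumerate(groups):
        shift = (g % (num_nodes - 1)) + 1   # stagger the shift per group so that the
        for k, disk in enumerate(group):    # disks of one node land on different nodes
            mirror[disk] = group[(k + shift) % len(group)]
    return mirror

mirror = grouped_one_to_one(num_nodes=4, disks_per_node=2)
for disk, replica in sorted(mirror.items()):
    print(f"original content of disk {disk} is replicated on disk {replica}")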
In order to model the reliability of our Grouped One-to-One scheme, we first study the behavior of a single group
and then derive the overall server reliability. One group fails when two consecutive disks inside the group fail.
Recall that two disks are consecutive if the original data of one of these disks are replicated on the other disk
(for group 1 the disks 0 and 4 are consecutive, whereas the disks 0 and 2 are not). Note that the failure of one disk
inside the group can be due to (i) the failure of the disk itself or (ii) the failure of the whole node, to which the
disk belongs. After the first disk failure, the group continues operating. If the second disk failure occurs inside
the group, the group may fail or not depending on whether the two failed disks are consecutive. Let P(2) denote the probability that the two failed disks of the group are not consecutive. Generally, P(k) denotes the probability
that the group does not fail after the k th disk failure inside the group. P (k) is calculated in Appendix A.
The state-space diagram of one group for the example in Figure 8 is presented in Figure 9. The number of disks inside the group is D_c = 4. A state [i; j] indicates that (i + j) disks have failed inside the group, where i disks, themselves, have failed and j nodes have failed. Obviously, all of the i disks that have failed belong to different
nodes than all of the j nodes that have failed. We describe in the following how we have built the state-space
diagram of Figure 9. At time t 0 , the group is in state [0; 0]. The first disk failure within the group can be due to
a single disk failure (state [1; 0]) or due to a whole node failure (state [0; 1]). Assume that the group is in state
[1; 0] and one more disk of the group fails. Four transitions are possible: (i) the group goes to state [2; 0] after the
second disk of the group has failed itself and the two failed disks are not consecutive, (ii) the group goes to state [1; 1] after the node has failed on which the second failed disk of the group is contained and the two failed disks
are not consecutive, (iii) the group goes to state F after the second disk of the group has failed (disk failure or
node failure) and the two failed disks are consecutive, and finally (iv) the group goes to state [0; 1] after the node
has failed, to which the first failed disk of the group belongs and thus the number of failed disks in the group does
not increase (only one disk failed). The remainder of the state-space diagram is derived in an analogous way.
The transition rates in Figure 9 are combinations of the disk failure rate λ_d, the node failure rate λ_n, the repair rates μ_d and μ_n, and the probabilities P(k); the generator matrix Q_c for a group follows directly from this state-space diagram.
Figure 9: State-space diagram of one group for the Grouped One-to-One scheme with dependent component failures (D_c = 4).
From the matrix Q c we get the group mean life time MTTF c , which is used to calculate the overall server
reliability. The state-space diagram for the server is the one of Figure 6(b), where the parameter λ_c denotes the failure rate of one group and takes the value λ_c = 1/MTTF_c. The server reliability is then calculated analogously to Eq. 2.
Note that the example described in Figure 9 considers a small group size (D c = 4). Increasing D c increases the
number of states contained in the state-space diagram of the group; in general, the number of states grows with D_c. We present in Appendix B a general method for building the state-space diagram of one group containing D_c disks.
3.3.3 The One-to-Some Scheme
We use Orthogonal RAID for all One-to-Some schemes (Mirr some\Gammaentire , Mirr some\Gammasub , and Par some ). If we
consider again the data layouts of Figures 2(d), 2(e), and 3(b), Orthogonal RAID is then ensured if the following
holds: Node 1 contains disks 0 and 3; node 2 contains disks 1 and 4; and node 3 contains disks 2 and 5.
For the reliability modeling of the One-to-Some scheme, we first build the state-space diagram for a single group (Figure 10(a)) and then compute the overall server reliability (Figure 10(b)). The states in Figure 10(a) denote the following: the initial state ([0; 0]), the state where one disk fails ([1; 0]), the state where one node fails resulting in a single disk failure within the group ([0; 1]), the state where one disk and one node have failed and the failed disk belongs to the failed node ([0; 1']), and the group failure state (F). The transition rates used in Figure 10(a) follow from the disk and node failure and repair rates.
Figure 10: State-space diagrams for the One-to-Some Scheme for the case of dependent component failures: (a) state-space diagram for one group; (b) state-space diagram for the server.
The generator matrix Q_c for a group is obtained from this state-space diagram in the same way as before.
3.4 Reliability Results
We solve our continuous time Markov chains using the SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) [33] tool for specifying and evaluating dependability and performance models. SHARPE takes as input the generator matrix and computes the server reliability at a certain time t. The results for the server reliability are shown in Figures 11 and 12. The total number of server disks considered is D = 100 and the number of server nodes is 10, each node containing 10 disks. We examine the server reliability for two values of the disk failure rate, of which the smaller is λ_d = 1/(100000 hours); both are pessimistic values.
Figure 11 plots the server reliability for the One-to-One, One-to-All, and One-to-Some schemes for the case of independent disk failures. As expected, the server reliability for the One-to-One scheme is the highest. The One-to-Some scheme exhibits higher server reliability than the One-to-All scheme. Figures 11(a) and 11(b) also show how much the server reliability is improved when the mean time to disk failure increases (λ_d decreases). For example, for the One-to-One scheme and after 10^4 days of operation, the server reliability is about 0.3 for the larger disk failure rate and about 0.66 for λ_d = 1/(100000 hours).
Figure 11: Server reliability for the three schemes assuming independent disk failures with D = 100 and N = 10: (a) for the larger disk failure rate and a mean repair time of 72 hours; (b) for λ_d = 1/(100000 hours) and a mean repair time of 72 hours.
Figure 12 depicts the server reliability for the Grouped One-to-One, the One-to-All, and the One-to-Some schemes for the case of dependent component failures. We observe that the Grouped One-to-One scheme provides a better reliability than the One-to-Some scheme. The One-to-All scheme has the lowest server reliability, e.g. for λ_d = 1/(100000 hours) and after three years, the server reliability is 0 for the One-to-All scheme, 0.51 for the One-to-Some scheme, and 0.85 for the Grouped One-to-One scheme. Figures 12(a) and 12(b) show that the server reliability increases when λ_d (λ_n) decreases.
Figure 12: Server reliability for the three schemes assuming dependent component failures with D = 100 and N = 10: (a) for the larger failure rates and a mean repair time of 72 hours; (b) for λ_d = 1/(100000 hours) and a mean repair time of 72 hours.
Comparing Figures 11 and 12, we see, as expected, that the server reliability is higher for the independent disk
failure case than for the dependent component failure case. We restrict our further discussion to the case of
dependent component failures since it is more realistic than the case of independent disk failures.
4 Server Performance
An important performance metric for the designer and the operator of a video server is the maximum number
of streams Q s that the server can simultaneously admit, which is referred to as the server throughput. Adding
fault-tolerance within a server requires additional resources in terms of storage volume, main memory and I/O
bandwidth capacity. As we will see, the reliability schemes discussed differ not only in the throughput they
achieve, but also in the amount of additional resources they need to guarantee uninterrupted service during disk
failure mode. Throughput is therefore not enough to compare server performance of these schemes. Instead,
we use the cost per stream. We calculate in section 4.1 the server throughput for each of the schemes. Section
4.2 focuses on buffer requirements. Section 4.3 compares then the different schemes with respect to the cost per
stream.
4.1 Server Throughput
The admission control policy decides, based on the remaining available resources, whether a new incoming
stream is admitted. The CGS striping algorithm serves a list of streams from a single disk during one service
round. During the next service round, this list of streams is shifted to the next disk. If Q d denotes the maximum
number of streams that a single disk can serve simultaneously (disk throughput) in a non fault-tolerant server,
then the overall server throughput Q_s is simply Q_s = D · Q_d. Accordingly, we will restrict our discussion to
disk throughput. Disk throughput Q d is given by Eq. 3 [5], where meaning and value of the different parameters
are listed in Table 2. The disk parameter values are those of Seagate and HP for the SCSI II disk drives [36].
r d
r d
Parameter   Meaning of Parameter              Value
r_p         Video playback rate               1.5 Mbps
r_d         Inner track transfer rate         40 Mbps
t_stl       Settle time                       1.5 ms
t_rot       Worst case rotational latency     9.33 ms
b           Block size                        1 Mbit
τ           Service round duration            b/r_p sec

Table 2: Performance Parameters
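To make the throughput discussion concrete, the sketch below evaluates a simple SCAN-style worst-case admission test built from the Table 2 values. It is only an approximation of Eq. 3 (the exact per-round accounting of seek and rotational overhead in [5] may differ), and the fault-tolerant throughputs derived from it follow the bandwidth-reservation arguments given below.

MBIT = 1.0e6

r_p = 1.5 * MBIT          # playback rate [bit/s]
r_d = 40.0 * MBIT         # inner-track transfer rate [bit/s]
t_stl = 1.5e-3            # settle time [s]
t_rot = 9.33e-3           # worst-case rotational latency [s]
b = 1.0 * MBIT            # block size [bit]
tau = b / r_p             # service round duration [s]

def disk_throughput(transfer_bits=b, seeks_per_stream=1):
    """Largest Q such that Q worst-case retrievals fit into one service round."""
    per_stream = seeks_per_stream * (t_stl + t_rot) + transfer_bits / r_d
    return int(tau // per_stream)

Q_d = disk_throughput()                 # plain CGS, no fault tolerance
Q_mirr_entire = Q_d // 2                # Mirr_Entire: half of the bandwidth reserved
D = 100
Q_par_all = (Q_d * (D - 1)) // D        # Par_all: roughly 1/D of the bandwidth reserved (approximation)
print(Q_d, Q_mirr_entire, Q_par_all)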
To allow for fault-tolerance, each disk reserves a portion of its available I/O bandwidth to be used during disk failure
mode. Since the amount of reserved disk I/O bandwidth is not the same for all schemes, the disk throughput
will also be different.
Let us start with the Grouped One-to-One scheme Mirr one . Since the original content of a single disk is entirely
replicated onto another disk, half of each disk's I/O bandwidth must be kept unused during normal operation
mode to be available during disk failure mode. Consequently, the disk throughput Q^mirr_one is simply half of Q_d: Q^mirr_one = Q_d/2.
For the One-to-All mirroring scheme Mirr all\Gammaentire with entire block replication, the original blocks of one
disk are spread among the other server disks. However, it may happen that the original blocks that would have
been required from a failed disk during a particular service round are all replicated on the same disk (worst case
situation). In order to guarantee deterministic service for this worst case, half of the disk I/O bandwidth must be
reserved for disk failure mode. Therefore, the corresponding disk throughput Q^mirr_all-entire is: Q^mirr_all-entire = Q_d/2. The worst case retrieval pattern for the One-to-Some mirroring scheme Mirr_some-entire with entire block replication is the same as for the previous scheme and we get: Q^mirr_some-entire = Q_d/2. Since the three schemes Mirr_one, Mirr_all-entire, and Mirr_some-entire achieve the same throughput, we will use the term Mirr_Entire to denote all of them and Q^mirr_Entire = Q_d/2 to denote their disk throughput.
For the One-to-All mirroring scheme Mirr_all-sub with sub-block replication, the situation changes. In fact, during disk failure mode, each disk retrieves at most Q^mirr_all-sub original blocks and Q^mirr_all-sub replicated sub-blocks during one service round. Let us assume that all sub-blocks have the same size b^all_sub, i.e. b^all_sub = b/(D − 1). The admission control formula then becomes the one of Eq. 3 with the per-stream transfer time b/r_d replaced by (b + b^all_sub)/r_d and with the additional seek and rotational overhead of the sub-block retrieval, which yields the disk throughput Q^mirr_all-sub. Similarly, the disk throughput Q^mirr_some-sub for One-to-Some mirroring with sub-block replication Mirr_some-sub is obtained with the transfer time (b + b^some_sub)/r_d, where b^some_sub = b/(D_c − 1) denotes the size of a sub-block and D_c is the number of disks contained in each group.
We now consider the disk throughput for the parity schemes. Recall that we study the buffering strategy and
not the second read strategy for lost block reconstruction. For the One-to-All parity scheme Par all , one parity
block is needed for every (D \Gamma 1) original blocks. The additional load of each disk consisting in retrieving parity
blocks when needed can be seen from Figure 3(a). In fact, for one stream in the worst case all requirements for
parity blocks concern the same disk, which means that at most one parity block is retrieved from each disk every D service rounds. Consequently, each disk must reserve 1/D of its I/O bandwidth for disk failure mode. The disk throughput Q^par_all is then calculated from Eq. 3 with this reduced I/O bandwidth. Analogous to the One-to-All parity scheme, the One-to-Some parity scheme Par_some reserves 1/D_c of each disk's I/O bandwidth and its disk throughput Q^par_some is obtained in the same way.
These three schemes share the common property that each original block is entirely replicated into one block.
In Figure 13(a), we take the throughput value Q^mirr_Entire of Mirr_Entire as the base line for comparison and plot the ratios of the server throughput as a function of the total number of disks in the server.
Figure 13: Throughput results for the reliability schemes: (a) throughput ratios; (b) number of disks required for the same server throughput.
Mirroring schemes that use entire block replication (Mirr_Entire) provide the lowest throughput. The two mirroring schemes Mirr_all-sub and Mirr_some-sub that use sub-block replication have throughput ratios of about 1.5. The performance for Mirr_all-sub is slightly higher than the one for Mirr_some-sub since the sub-block size b^all_sub is smaller than b^some_sub. Parity schemes achieve higher throughput ratios than mirroring schemes and the One-to-All parity scheme Par_all results in the highest throughput. The throughput for the One-to-Some parity scheme Par_some is slightly smaller than the throughput for Par_all. In fact, the parity group size of (D − 1) for Par_all is larger than D_c. As a consequence, the amount of disk I/O bandwidth that must be reserved for disk failure is smaller for Par_all than for Par_some. In order to get a quantitative view regarding the I/O bandwidth requirements, we
reverse the axes of Figure 13(a) to obtain in Figure 13(b) for each scheme the number of disks needed to achieve
a given server throughput.
4.2 Buffer Requirement
Another resource that affects the cost of the video server and therefore the cost per stream is main memory. Due
to the speed mismatch between data retrieval from disk (transfer rate) and data consumption (playback rate),
main memory is needed at the server to temporarily store the blocks retrieved. For the SCAN retrieval algorithm,
the worst case buffer requirement for one served stream is twice the block size b. Assuming the normal operation
mode (no component failures), the buffer requirement B of the server is therefore B = 2 · b · Q_s, where Q_s denotes the server throughput.
Mirroring-based schemes replicate original blocks that belong to a single disk over one, all, or a set of disks.
During disk failure mode, blocks that would have been retrieved from the failed disk are retrieved from the
disks that store the replica. Thus, mirroring requires the same amount of buffer during normal operation mode
and during component failure mode independently of the distribution granularity of replicated data. Therefore,
for all mirroring schemes considered (Grouped One-to-One Mirr one , One-to-All with entire block replication
Mirr all\Gammaentire , One-to-All with sub-block replication Mirr all\Gammasub , One-to-Some with entire block replication
Mirr some\Gammaentire , and One-to-Some with sub-block replication Mirr some\Gammasub ) the buffer requirement during
component failure is B = 2 · b · Q_s.
Unlike mirroring-based schemes, parity-based schemes need to perform a X-OR operation over a set of blocks
to reconstruct a lost block. In fact, during normal operation mode the buffer is immediately liberated after
consumption. When a disk fails, original blocks as well as the parity block that belong to the same parity group
are sequentially retrieved (during consecutive service rounds) from consecutive disks and must be temporarily
stored in the buffer for as many service rounds that elapse until the lost original block will be reconstructed.
Since buffer overflow must be avoided, the buffer requirement is calculated for the worst case situation where
the whole parity group must be contained in the buffer before the lost block gets reconstructed. An additional
buffer size of one block must be also reserved to store the first block of the next parity group. Consequently,
during component failure, the buffer requirement B^par_all for Par_all is B^par_all = ((D + 1)/2) · B, and the buffer requirement B^par_some for Par_some is B^par_some = ((D_c + 1)/2) · B.
Note that the buffer requirement for Par all
depends on D and therefore increases linearly with the number of disks in the server. For Par some , however, the
group size D c can be kept constant while the total number of disks D varies. As a result, the buffer requirement
B^par_some for Par_some remains unchanged when D increases.
4.3 Cost Comparison
The performance metric we use is the per stream cost. We first compute the total server cost $ server and then
derive the cost per stream $ stream as:
$ stream = $ server / Qs .
We define the server cost as the cost of the hard disks and the main memory dimensioned for the component
failure mode:
$ server = Pmem · B + P d · V disk · D ,
where Pmem is the price of 1 MByte of main memory, B the buffer requirement in MByte, P d is the price of 1 MByte
of hard disk, V disk is the storage volume of a single disk in MByte, and finally D is the total number of disks
in the server. Current price figures - as of 1998 - are Pmem = $13 and P d = $0.5. Since these prices change
frequently, we will consider the relative costs by introducing the cost ratio ff between Pmem and P d ,
i.e. ff = Pmem / P d . Thus, the server cost function becomes:
$ server = Pmem (B + V disk · D / ff) ,
and the per stream cost is:
$ stream = Pmem (B + V disk · D / ff) / Qs .
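To make the cost model concrete, the following Python sketch evaluates the reconstructed formulas $server = Pmem · B + Pd · Vdisk · D and $stream = $server / Qs for the three cost ratios used in Figure 14. The input figures for buffer size, disk volume, number of disks, and throughput are hypothetical examples, not values taken from the paper.

```python
def server_cost(p_mem, alpha, buffer_mb, v_disk_mb, num_disks):
    """$server = Pmem * B + Pd * Vdisk * D, with Pd = Pmem / alpha."""
    p_d = p_mem / alpha
    return p_mem * buffer_mb + p_d * v_disk_mb * num_disks

def per_stream_cost(p_mem, alpha, buffer_mb, v_disk_mb, num_disks, throughput):
    """$stream = $server / Qs."""
    return server_cost(p_mem, alpha, buffer_mb, v_disk_mb, num_disks) / throughput

if __name__ == "__main__":
    # Hypothetical figures: 100 disks of 4 GByte each, 2 GByte of buffer, 500 streams.
    for alpha in (26.0, 130.0, 5.2):
        cost = per_stream_cost(p_mem=13.0, alpha=alpha, buffer_mb=2000.0,
                               v_disk_mb=4000.0, num_disks=100, throughput=500)
        print("alpha = %6.1f  ->  per stream cost = $%8.2f" % (alpha, cost))
```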
To evaluate the cost of the five different schemes, we compute for each scheme and for a given value of D the
achieved throughput Qs and the amount of buffer B required to support this throughput. Note that we take
the same value of the group size D c for the schemes Mirr some-entire , Mirr some-sub , and Par some .
Figure 14 plots the per stream cost for the schemes Par all , Par some , MirrEntire , Mirr some-sub , and Mirr all-sub
for different values of the cost ratio ff. We recall that the notation MirrEntire includes the three mirroring
schemes Mirr all-entire , Mirr some-entire , and the Grouped One-to-One Mirr one , which experience the same
throughput and require the same amount of resources. In Figure 14(a), we consider ff = 26, which represents
the current memory/hard disk cost ratio. Increasing the value of ff means that the price for disk
storage drops faster than the price for main memory: in Figure 14(b), we multiply the current cost ratio by five to
get ff = 130. On the other hand, decreasing the value of ff means that the price for main memory drops
faster than the price for hard disk: in Figure 14(c) we divide the current cost ratio by five to get ff = 5.2.
3 To illustrate the faster decrease of the price for hard disk as compared to the one for main memory, we consider the current price for
main memory (Pmem = $13) and calculate the new reduced price for hard disk (P d = $0.1), which yields ff = 130.
4 Analogously, to illustrate the faster decrease of the price for memory as compared to the price for hard disk, we take the current price
for hard disk (P d = $0.5) and calculate the new reduced price for memory (Pmem = $2.6), which yields ff = 5.2.
Figure 14: Per stream cost as a function of the number of disks D in the server for the schemes Par all ,
Mirr Entire , Par some , Mirr some-sub , and Mirr all-sub , for different values of the cost ratio:
(a) ff = 26, (b) ff = 130, (c) ff = 5.2.
The results of Figure 14 indicate the following:
ffl The increase or the decrease in the value of ff as defined above means a decrease in either the price for
hard disk or the price for main memory respectively. Hence the overall decrease in the per stream cost in
Figures
14(b) and 14(c) as compared to Figure 14(a).
Figure
shows that the One-to-All parity scheme (Par all ) results in the highest per stream
cost that increases when D grows. In fact, the buffer requirement for Par all is highest and also increases
linearly with the number of disks D and thus resulting in the highest per stream cost. Mirroring schemes
with entire block replication (MirrEntire ) have the second worst per stream cost. The per stream cost
for the remaining three schemes (Mirr all\Gammasub , Mirr some\Gammasub , and Par some ) is roughly equal and is low-
est. The best scheme is the One-to-All mirroring scheme with sub-block replication (Mirr all\Gammasub ). It
has a slightly smaller per stream cost than the One-to-Some mirroring scheme with sub-block replication
due to the difference in size between b all
sub and b some
sub (see the explanation in section 4.1).
ffl The increase in the cost ratio ff by a factor of five (ff = 130, Figure 14(b)) slightly decreases the
per stream cost of Par all and results in a dramatic decrease in the per stream cost of all three mirroring
schemes and also the parity scheme Par some . For instance, the per stream cost for Par some decreases
to $28.72 and the per stream cost of Mirr some-sub decreases from $79.79 down to $18.55.
All three mirroring schemes become more cost efficient than the two parity schemes.
ffl The decrease in the cost ratio ff by a factor of five (ff = 5.2, Figure 14(c)) affects the cost of the three
mirroring schemes very little. As an example, the per stream cost for Mirr some-sub is $79.79 in Figure
14(a) and $77.19 in Figure 14(c). On the other hand, decreasing ff, i.e. letting the price for main memory
decrease faster than the price for hard disk, clearly affects the cost of the two parity schemes. In fact,
Par some becomes the most cost efficient scheme with a per stream cost of $65.64. Although the per
stream cost of Par all decreases significantly with ff = 5.2, it still remains the most expensive for high
values of D. Since Par all has the highest per stream cost, which increases linearly with D, we will not
consider this scheme in further cost discussions.
5 Server Reliability and Performance
5.1 Server Reliability vs. Per Stream Cost
Figure
15 and Table 3 depict the server reliability and the per stream cost for the different reliability schemes
discussed herein. The server reliability is computed after 1 year (Figure 15(a)) and after 3 years (Figure 15(b))
of server operation. The results in Figure 15 are obtained as follows. For a given server throughput, we calculate
for each reliability scheme the number of disks required to achieve that throughput. We then compute the server
reliability for each scheme and its respective number of disks required. Table 3 shows the normalized per stream
cost for different values of ff. We take the per stream cost of Mirr one as base line for comparison and divide
the cost values for the other schemes by the cost for Mirr one . We recall again that the three schemes Mirr one ,
Mirr all\Gammaentire and Mirr some\Gammaentire have the same per stream cost since they achieve the same throughput given
the same amount of resources (see section 4).
Figure 15: Server reliability as a function of the server throughput for the schemes Mirr one , Par some ,
Mirr some-sub , Mirr some-entire , Par all , Mirr all-sub , and Mirr all-entire : (a) after 1 year of server
operation; (b) after 3 years of server operation (in both cases with a mean time to failure of 100000 hours).
Table 3: Normalized per stream cost (relative to Mirr one ) for different values of ff.

Scheme              ff = 26   ff = 130   ff = 5.2
Mirr one             1.000     1.000      1.000
Mirr all-entire      1.000     1.000      1.000
Par some             0.688     1.129      0.588
Mirr some-sub        0.698     0.729      0.691
Mirr all-sub         0.661     0.696      0.653
The three One-to-All schemes Par all , Mirr all\Gammasub , and Mirr all\Gammaentire have poor server reliability even for
a low values of server throughput, since they only survive a single disk failure. The difference in reliability
between these schemes is due to the fact that Par all requires, for the same throughput, fewer disks than
Mirr all\Gammasub that in turn needs fewer disks than Mirr all\Gammaentire (see Figure 13(b)). The server reliability of these
three schemes decreases dramatically after three years of server operation as illustrated in Figure 15(b)). Accord-
ingly, these schemes are not attractive to ensure fault tolerance in video servers and hence we are not going to
discuss them more in the remainder of this paper. We further discuss the three One-to-Some schemes Par some ,
Mirr some\Gammasub , and Mirr some\Gammaentire and the Grouped One-to-One scheme Mirr one . Based on Figures 15(a)
and 15(b), Mirr one has a higher server reliability than the three One-to-Some schemes Par some , Mirr some\Gammasub ,
and Mirr some\Gammaentire .
From Table 3, we see that Mirr one , which has the same per stream cost as Mirr some-entire , has a per stream cost
about 1.5 times higher than Mirr some-sub . Par some has the highest per stream cost for a high value of ff
(ff = 130) and is the most cost effective for a small value of ff (ff = 5.2).
In summary, we see that the best scheme among the One-to-Some schemes is Par some since it has a low per
stream cost and requires fewer disks than Mirr some\Gammasub and thus provides a higher server reliability than
both, Mirr some\Gammasub and Mirr some\Gammaentire . Since Mirr some\Gammaentire achieves much lower server reliability than
Mirr one for the same per stream cost, we conclude that Mirr some\Gammaentire is not a good scheme for achieving
fault tolerance in a video server.
Based on the results of Figure 15 and Table 3, we conclude that the three schemes: Mirr one , Par some , and
Mirr some-sub are good candidates to ensure fault tolerance in a video server. Note that we have assumed in
Figure 15 the same value of D c for all three schemes. Mirr one has the highest server reliability but a
higher per stream cost as compared to the per stream cost of Par some and Mirr some-sub . For this value of D c ,
the two schemes Par some and Mirr some-sub have a lower per stream cost but also a lower server reliability than
Mirr one . This difference in server reliability becomes more pronounced as the number of disks in the video
server increases. We will see in the next section how to determine the parameter D c for the schemes Par some
and Mirr some\Gammasub in order to improve the trade-off between the server reliability and the cost per stream.
5.2 Determining the Group Size D c
This section evaluates the impact of the group size D c on the server reliability and the per stream cost. We limit
our discussion to the three schemes Mirr one , Par some , and Mirr some-sub . Remember that we use the
Orthogonal RAID principle to build the independent groups (see section 3.3). Accordingly, disks that belong
to the same group are attached to different nodes. Until now, we have assumed that the group size D c and the
number of nodes N are constant. In other terms, increasing D leads to an increase in the
number of disks D n per node. However, the maximum number of disks D n is limited by the node's I/O capacity.
Assume a video server with D = 100 disks and D c = 5. We plot in Figure 16 two different ways to configure
the video server. In Figure 16(a) the server contains five nodes (N = 5), where each node consists of D n = 20
disks. One group contains D c = 5 disks, each belonging to a different node. On the other hand, Figure 16(b)
configures a video server with N = 20 nodes, each containing only D n = 5 disks. The group size is again
D c = 5, but a single group does not stretch across all nodes. Note that the number of groups C is the same
for both configurations (C = 20). When the video server grows, the second alternative suggests to add new
nodes (containing new disks) to the server, whereas the first alternative suggests to add new disks to the existing
nodes. Since D n must be kept under a certain limit given by the node's I/O capacity, we believe that the second
alternative is more appropriate to configure a video server.
We consider two values for the group size, D c = 5 and D c = 20, for the remaining three schemes Mirr one ,
Par some , and Mirr some-sub . Figures 17(a) and 17(b) depict the server reliability for Mirr one , Par some , and
Mirr some-sub after one year and after three years of server operation, respectively. Table 4 shows for these
schemes the normalized per stream cost with different values of ff and with D c = 5 and D c = 20. We take
the per stream cost of Mirr one as base line for comparison and divide the cost values for the other schemes by
the cost for Mirr one .

Figure 16: Two ways of configuring a video server with D = 100 disks and group size D c = 5:
(a) N = 5 nodes with D n = 20 disks each; (b) N = 20 nodes with D n = 5 disks each.
Figure 17: Server reliability as a function of the server throughput for Mirr one , Par some , and Mirr some-sub
with group sizes D c = 5 and D c = 20: (a) after 1 year of server operation; (b) after 3 years of server
operation (in both cases with a mean time to failure of 100000 hours).
Table 4: Normalized per stream cost (relative to Mirr one ) for different values of ff and D c .

Scheme              D c    ff = 26   ff = 130   ff = 5.2
Mirr one             -      1.000     1.000      1.000
Par some            20      0.798     1.739      0.584
Par some             5      0.695     0.879      0.653
Mirr some-sub       20      0.678     0.717      0.671
Mirr some-sub        5      0.745     0.771      0.739

The results of Figure 17 and Table 4 are summarized as follows:
ffl The server reliability of Mirr one is higher than for the other two schemes. As expected, the server
reliability increases for both Par some and Mirr some-sub with decreasing D c . In fact, as D c decreases,
the number of groups grows and thus the number of disk failures (one disk failure per group) that can be
tolerated increases as well.
ffl Depending on the value of ff, the impact of varying the group size D c on the per stream cost differs for
Par some and Mirr some\Gammasub . For Mirr some\Gammasub , the cost per stream decreases as the group size D c grows
for all three values of ff considered. Indeed, the sub-block size to be read during disk failure is inversely
proportional to the value of D c . Consequently, the server throughput becomes smaller for decreasing D c
and the per stream cost increases. For Par some with ff = 26 and ff = 130, the per stream cost decreases
as D c decreases. However, this result is reversed with ff = 5.2, where the per stream cost of Par some is
higher with D c = 5 than with D c = 20. The following explains the last observation:
1. A small value of ff (e.g. ff = 5.2) signifies that the price for main memory decreases faster than
the one for hard disk and therefore main memory does not significantly affect the per stream cost for
Par some independently of the group size D c .
2. As the group size D c decreases, the amount of I/O bandwidth that must be reserved on each disk
for the disk failure mode increases. Consequently, the throughput is smaller with D c = 5 than with D c = 20.
As a result, the per stream cost for Par some increases when the group size D c decreases.
3. Since the memory cost affects only little the per stream cost of Par some for a small value of ff, the
weight of the amount of I/O bandwidth to be reserved on the per stream cost becomes more visible
and therefore the per stream cost of Par some increases as D c decreases.
Note that the per stream cost of Par some is lowest for ff = 5.2 and highest for ff = 130.
Par some has always a higher server reliability than Mirr some\Gammasub . Further, for the values 26 and
Par some has a higher cost per stream than Mirr some\Gammasub given the same value of D c . However,
for the value becomes more cost effective than Mirr some\Gammasub .
ffl Based on the reliability results and for the high values of ff, e.g. ff = 26 and ff = 130, we observe that a small
group size (D c = 5) considerably increases the server reliability and decreases the per stream cost for
Par some . For Mirr some\Gammasub , the server reliability increases as D c decreases, but also the per stream cost
slightly increases whatever the value of ff is.
In summary, we have shown that the three schemes Mirr one , Par some , and Mirr some\Gammasub are good candidates to
ensure fault-tolerance in a video server. The Grouped One-to-One scheme Mirr one achieves a higher reliability
than the other two schemes at the expense of the per stream cost that is about 1.5 times as high. For Par some ,
the value of D c must be small to achieve a high server reliability and a low per stream cost. For Mirr some\Gammasub ,
the value of D c must be small to achieve a high server reliability at the detriment of a slight increase in the per
stream cost.
6 Conclusions
In the first part we have presented an overview of several reliability schemes for distributed video servers. The
schemes differ by the type of redundancy used (mirroring or parity) and by the distribution granularity of the
redundant data. We have identified seven reliability schemes and compared them in terms of the server reliability
and the cost per stream. We have modeled server reliability using Continuous Time Markov Chains that were
evaluated using the SHARPE software package. We have considered both cases: independent disk failures and
dependent component failures.
The performance study of the different reliability schemes led us to introduce a novel reliability scheme, called
the Grouped One-to-One mirroring scheme, which is derived from the classical One-to-One mirroring scheme. The
Grouped One-to-One mirroring scheme Mirr one outperforms all other schemes in terms of server reliability. Out
of the seven reliability schemes discussed the Mirr one , Par some , and Mirr some\Gammasub schemes achieve both, high
server reliability and low per stream cost. We have compared these three schemes in terms of server reliability
and per stream cost for several memory and hard disk prices and various group sizes. We found that the smaller
the group size, the better the trade-off between high server reliability and low per stream cost.
A Calculation of P (k) for the One-to-One Scheme
We calculate in the following the probability P (k) that the video server that uses the One-to-One mirroring
scheme does not fail after k disk/node failures.
Let us consider a sequence of n units. Note that the term unit can denote a disk (used for the independent disk failure
case) or a node (used for the dependent component failure mode). Since we want to calculate the probability that
the server does not fail having k units down, we want those units not to be adjacent. Therefore we are looking
for the sub-sequences 1 <= i 1 < i 2 < ... < i k <= n with i l+1 >= i l + 2 for all l (condition (10)).
Let us call S the set of these sequences. We introduce a bijection of this set onto the set of strictly growing
sequences 1 <= j 1 < j 2 < ... < j k <= n - k + 1 by setting j l := i l - (l - 1).
Introducing the second sequence j allows us to suppress condition (10) on the sequence i l , since the sequence j
is merely required to be strictly growing, and such sequences are easy to count: their number is the number of
strictly growing functions from [1::k] in [1::(n - k + 1)], that is C(n - k + 1, k).
This result, though, does not take into account the fact that the units number 1 and n are adjacent. In fact, two
scenarios are possible:
ffl The first scenario is when unit 1 has already failed. In this case, units 2 and n are not allowed to fail,
otherwise the server will fail. We have then a set of n - 3 units among which we are allowed to pick k - 1
non-adjacent units. Referring to the case that we just solved, we obtain C(n - k - 1, k - 1) possibilities.
ffl The second scenario is when the first unit still works. In this case, k non-adjacent units are chosen among the
n - 1 remaining ones. This leads us to the value C(n - k, k).
The number of possibilities N k that we are looking for is given by:
N k = C(n - k - 1, k - 1) + C(n - k, k).
Consequently, for a given number k of failed units (disks/nodes), the probability P (k) that the server does not fail
after k unit failures is calculated as:
P (k) = N k / C(n, k).
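The counting argument above can be checked numerically. The sketch below implements the reconstructed closed form N k = C(n - k - 1, k - 1) + C(n - k, k) and validates it by brute-force enumeration of failure patterns on a small cycle of units; the closed form and all function names are reconstructions for illustration, not code from the paper.

```python
from itertools import combinations
from math import comb

def p_no_failure(n, k):
    """P(k): probability that k failed units on a cycle of n units are pairwise
    non-adjacent, using the reconstructed closed form for N_k."""
    n_k = comb(n - k - 1, k - 1) + comb(n - k, k)
    return n_k / comb(n, k)

def p_no_failure_bruteforce(n, k):
    """Exhaustive check used to validate the closed form on small instances."""
    good = total = 0
    for fail in combinations(range(n), k):
        total += 1
        s = set(fail)
        if all((u + 1) % n not in s for u in s):   # no two failed units adjacent on the cycle
            good += 1
    return good / total

if __name__ == "__main__":
    for n, k in [(8, 2), (10, 3), (12, 4)]:
        print(n, k, p_no_failure(n, k), p_no_failure_bruteforce(n, k))
```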
Building the group state-space diagram for the Grouped One-to-One scheme
We show in the following how to build the state-space diagram of one group containing D c disks for the Grouped
One-to-One scheme. We focus on the transitions from state [i; j] to higher states and back. We know that state
represents the total number (i + j) of disks that have failed inside the group. These failures are due to i disk
failures and j node failures. We also know that the number of states in the state-space diagram is 2 Dc
. The
parameters i and j must respect the group size D c , since at most D c disk failures are tolerated inside one
group. We distinguish between two cases. Figure 18 shows the possible
transitions and the corresponding rates for the case (i), whereas Figure 19 shows the transition for the case (ii).
Figure 18: Transitions from state [i; j] to higher states and back, for case (i).
Figure 19: Transition to the failure state F, for case (ii).
The parameters in the two figures have the following values:
--R
"A Case for Redundant Arrays of Inexpensive Disks (RAID),"
"Streaming raid(tm) - a disk array management system for video files,"
"Fault tolerant design of multimedia servers,"
"Striping in multi-disk video servers,"
"Data striping and reliablity aspects in distributed video servers,"
"Disk striping in video server environments,"
"Fault-tolerant architectures for continuous media servers,"
"Design and performance tradeoffs in clustered video servers,"
"Predictive call admission control for a disk array based video server,"
"Architectures and algorithms for on-line failure recovery in redundant disk arrays,"
"A survey of approaches to fault tolerant design of vod servers: Techniques, analysis, and comparison,"
"Disk shadowing,"
"Doubly-striped disk mirroring: Reliable storage for video servers,"
"The tiger video fileserver,"
"Analytic modeling and comparisons of striping strategies for replicated disk arrays,"
Performance Modeling and Analysis of Disk Arrays.
"Raid: High-performance, reliable secondary storage,"
"Raid: High-performance, reliable secondary storage,"
"Architectures and algorithms for on-line failure recovery in redundant disk arrays,"
"Performance and cost comparison of mirroring- and parity-based reliability schemes for video servers,"
"Chained declustering: A new availability strategy for multiprocessor database ma- chines.,"
"Chained declustering: Load balancing and robustness to skew and failures,"
"Issues in the design of a storage server for video-on-demand,"
"Raid-ii: A scalabale storage architecture for high-bandwidth network file service,"
"Striping in multi-disk video servers,"
"Segmented information dispersal (SID) for efficient reconstruction in fault-tolerant video servers,"
"High availability for clustered multimedia servers,"
"Random raids with selective exploitationof redundancy for high performance video servers,"
"Using rotational mirrored declustering for replica placement in a disk-array-based video server,"
"Reliability analysis of redundant arrays of inexpensive disks,"
"Performability of disk-array-based video servers,"
Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package
System Reliability Theory: Models and Statistical Methods
Placement of Continuous Media in Multi-Zone Disks
--TR
--CTR
Yifeng Zhu , Hong Jiang , Xiao Qin , Dan Feng , David R. Swanson, Design, implementation and performance evaluation of a cost-effective, fault-tolerant parallel virtual file system, Proceedings of the international workshop on Storage network architecture and parallel I/Os, p.53-64, September 28-28, 2003, New Orleans, Louisiana
Xiaobo Zhou , Cheng-Zhong Xu, Efficient algorithms of video replication and placement on a cluster of streaming servers, Journal of Network and Computer Applications, v.30 n.2, p.515-540, April, 2007
Seon Ho Kim , Hong Zhu , Roger Zimmermann, Zoned-RAID, ACM Transactions on Storage (TOS), v.3 n.1, p.1-es, March 2007
Stergios V. Anastasiadis , Kenneth C. Sevcik , Michael Stumm, Maximizing Throughput in Replicated Disk Striping of Variable Bit-Rate Streams, Proceedings of the General Track: 2002 USENIX Annual Technical Conference, p.191-204, June 10-15, 2002
Xiaobo Zhou , Cheng-Zhong Xu, Harmonic Proportional Bandwidth Allocation and Scheduling for Service Differentiation on Streaming Servers, IEEE Transactions on Parallel and Distributed Systems, v.15 n.9, p.835-848, September 2004
Eitan Bachmat , Tao Kai Lam, On the effect of a configuration choice on the performance of a mirrored storage system, Journal of Parallel and Distributed Computing, v.65 n.3, p.382-395, March 2005
Yifeng Zhu , Hong Jiang, CEFT: a cost-effective, fault-tolerant parallel virtual file system, Journal of Parallel and Distributed Computing, v.66 n.2, p.291-306, February 2006
Stergios V. Anastasiadis , Kenneth C. Sevcik , Michael Stumm, Scalable and fault-tolerant support for variable bit-rate data in the exedra streaming server, ACM Transactions on Storage (TOS), v.1 n.4, p.419-456, November 2005 | reliability modeling;performance and cost analysis;distributed video servers;SHARPE;markov chains |
351186 | A PTAS for Minimizing the Total Weighted Completion Time on Identical Parallel Machines. | We consider the problem of scheduling a set of n jobs on m identical parallel machines so as to minimize the weighted sum of job completion times. This problem is NP-hard in the strong sense. The best approximation result known so far was a 1/2 (1 + √2)-approximation algorithm that has been derived by Kawaguchi and Kyan back in 1986. The contribution of this paper is a polynomial time approximation scheme for this setting, which settles a problem that was open for a long time. Moreover, our result constitutes the first known approximation scheme for a strongly NP-hard scheduling problem with minsum objective. | Introduction
The problem. We consider the following machine scheduling model. We are given a set J of n independent
jobs that have to be scheduled on m identical parallel machines or processors. Each job
is specified by its positive processing requirement p j and by its positive weight w j . In a feasible schedule
for J , every job j 2 J is processed for p j time units on one of the m machines in an uninterrupted
fashion. Every machine can process at most one job at a time, and every job can be processed on at
most one machine at a time. The completion time of job j in some schedule is denoted by C j . The goal
is to minimize the total weighted completion time Σ_{j∈J} w_j C_j . In the standard classification scheme of
Graham, Lawler, Lenstra, & Rinnooy Kan 1979, this scheduling problem is denoted by P | | Σ w_j C_j if the
number m of machines is part of the input, and by Pm | | Σ w_j C_j if m is a fixed constant.
Complexity of the problem. For the special case of only one machine (i. e., m = 1), the problem
can be solved in polynomial time by Smith's Ratio Rule: process the jobs in order of nonincreasing
ratios w_j / p_j . Thus, for the single machine case, the 'importance' of a job is measured by its ratio. For
a constant number m >= 2 of machines, the problem is NP-hard in the ordinary sense and solvable
in pseudopolynomial time. For m part of the input, the problem is NP-hard in the strong sense; see
problem SS13 in Garey & Johnson 1979. The special case P | | Σ C_j with unit weights is
solvable in polynomial time by sorting, see Conway, Maxwell, & Miller 1967.
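As an illustration of Smith's Ratio Rule, the following minimal Python sketch schedules a single machine by nonincreasing w_j / p_j and evaluates the objective; the job data are made up for the example.

```python
def smith_single_machine(jobs):
    """jobs: list of (p_j, w_j); returns (order, sum of w_j * C_j)."""
    order = sorted(jobs, key=lambda job: job[1] / job[0], reverse=True)
    t = 0
    total = 0
    for p, w in order:
        t += p          # completion time C_j of this job
        total += w * t
    return order, total

if __name__ == "__main__":
    jobs = [(3, 1), (1, 4), (2, 2)]   # hypothetical (p_j, w_j) data
    print(smith_single_machine(jobs))
```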
An extended abstract will appear in the Proceedings of the 31st Annual ACM Symposium on Theory of Computing
(STOC'99).
y Technische Universität Berlin, Fachbereich Mathematik, MA 6-1, Straße des 17. Juni 136, D-10623 Berlin, Germany,
E-mail skutella@math.tu-berlin.de. Part of this research was done while the first author was visiting C.O.R.E., Louvain-
la-Neuve, Belgium.
z Technische Universität Graz, Institut für Mathematik, Steyrergasse 30, A-8010 Graz, Austria, E-mail
gwoegi@opt.math.tu-graz.ac.at. Supported by the START program Y43-MAT of the Austrian Ministry of Science.
Approximation algorithms. In this paper we are interested in how close one can approach an optimum
solution to these NP-hard scheduling problems in polynomial time. Thus, our research focuses on
approximation algorithms which efficiently construct schedules whose values are within a constant factor
of the optimum solution value. This factor α is called the performance guarantee or performance
ratio of the approximation algorithm. A family of polynomial time approximation algorithms with performance
guarantee 1 + ε for all fixed ε > 0 is called a polynomial time approximation scheme (PTAS). If
the running times of the approximation algorithms are even bounded by a polynomial in the input size
and 1/ε, then these algorithms build a fully polynomial time approximation scheme (FPTAS). It is known
that unless P=NP, a strongly NP-hard optimization problem cannot possess an FPTAS, see Garey &
Johnson 1979.
Known approximation results. Sahni (1976) gives an FPTAS for the weakly NP-hard scheduling
problem Pm | | Σ w_j C_j with fixed m. Kawaguchi & Kyan (1986) analyze list scheduling in order of
nonincreasing ratios w_j / p_j on identical parallel machines. They prove a performance ratio of (1 + √2)/2 for the
strongly NP-hard problem P | | Σ w_j C_j .
Till now, this was the best approximation result for this problem.
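The list-scheduling heuristic analyzed by Kawaguchi & Kyan can be sketched as follows: jobs are considered in nonincreasing order of w_j / p_j and each job is appended to a currently least-loaded machine. The code below is an illustrative sketch with arbitrary tie-breaking and data, not the authors' implementation.

```python
import heapq

def wspt_list_schedule(jobs, m):
    """jobs: list of (p_j, w_j); returns the total weighted completion time."""
    order = sorted(jobs, key=lambda job: job[1] / job[0], reverse=True)
    loads = [0.0] * m           # min-heap of current machine completion times
    heapq.heapify(loads)
    total = 0.0
    for p, w in order:
        start = heapq.heappop(loads)   # least-loaded machine
        finish = start + p
        total += w * finish
        heapq.heappush(loads, finish)
    return total

if __name__ == "__main__":
    jobs = [(3, 1), (1, 4), (2, 2), (5, 5), (2, 1)]   # hypothetical data
    print(wspt_list_schedule(jobs, m=2))
```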
Alon, Azar, Woeginger, & Yadid (1998) study scheduling problems on identical parallel machines with
various objective functions that solely depend on the machine completion times. In particular, they give
a polynomial time approximation scheme for the problem of minimizing Σ_{i=1}^m M_i^2 , where M_i denotes the
finishing time of machine i. By rewriting the objective function in an appropriate way, this result implies
a PTAS for the problem of minimizing the weighted sum of job completion times if all job ratios w_j / p_j
are equal.
Generally speaking, the approximability of scheduling problems with total job completion time objective
(so-called minsum scheduling problems) is not well-understood. Some minsum problems can
be solved in polynomial time using straightforward algorithms (like the single machine version
1 | | Σ w_j C_j of the problem under consideration); some weakly NP-hard minsum problems allow an FPTAS based on dynamic
programming formulations (see, e. g., Sahni 1976 and Woeginger 1998). Some minsum problems do not
have a PTAS unless P=NP (Hoogeveen, Schuurman, & Woeginger 1998). Some of these problems cannot
even be approximated in polynomial time within a constant factor (like minimizing total flow time,
see Kellerer, Tautenhahn, & Woeginger 1996 and Leonardi & Raz 1997). Some minsum problems have
constant factor approximation algorithms that are based on rounding and/or transforming and/or manipulating
the solutions of preemptive relaxations or relaxations of integer programming formulations
(see, e. g., Phillips, Stein, & Wein 1995, Hall, Schulz, Shmoys, & Wein 1997, or Skutella 1998); due to the
integrality gap, these approaches can never yield a PTAS. However, there is not a single PTAS known
for a strongly NP-hard minsum scheduling problem.
Contribution of this paper. Our contribution is a polynomial time approximation scheme for the
general problem
of minimizing the total weighted completion time on identical parallel
machines. This result is derived in two steps. In the first step, we derive a PTAS for the special case
where the largest job ratio is only a constant factor away from the smallest job ratio. This result is
derived by modifying and by generalizing a technique of Alon et al. 1998. In the second step, we derive a
in its full generality. The main idea is to partition the jobs into subsets according
to their ratios such that near optimal schedules can be computed for all subsets; the key observation is
that these schedules can be concatenated without too much loss in the overall performance guarantee.
Our result yields the first polynomial time approximation scheme for a strongly NP-hard scheduling
problem with minsum objective. It confirms a conjecture of Hoogeveen, Schuurman, & Woeginger 1998,
it solves an open problem posed in Alon et al. 1998, and it finally improves on the ancient result by
Only very recently, several groups announced independently from each other approximation schemes
for the strongly NP-hard problem 1j r j j
even for more general scheduling problems, cf. Chekuri
et al. 1999.
Organization of the paper. Section 2 contains preliminaries which will be used throughout the paper.
In Section 3 we give an approximation scheme for the special case that the ratios of jobs are within a
constant range. Finally, in Section 4 we present the approximation scheme for arbitrary instances of
Preliminaries
By Smith's Ratio Rule, it is locally optimal to schedule the jobs that have been assigned to a machine
in order of nonincreasing ratios w_j / p_j ; the proof is a simple exchange argument. Throughout the paper, we
will restrict to schedules meeting this property. For a given schedule, observe that
Σ_{j∈J} w_j C_j = Σ_{j∈J} w_j (C_j - p_j / 2) + (1/2) Σ_{j∈J} w_j p_j .        (1)
Notice that the second term on the right hand side is nonnegative and does not depend on the specific
schedule. Therefore, for each ε > 0, a (1 + ε)-approximation algorithm for the problem to minimize
the function Σ_{j∈J} w_j (C_j - p_j / 2) is also a (1 + ε)-approximation algorithm for minimizing the total weighted
completion time. In fact, in Section 3 it will be more convenient to consider the objective function
Σ_{j∈J} w_j (C_j - p_j / 2), and we will give a polynomial time approximation scheme for the problem to minimize this
function there.
We use the following notation: For a subset of jobs J' ⊆ J, let p(J') := Σ_{j∈J'} p_j denote the total
processing time of the jobs in J' . Moreover, denote the average machine load caused by the jobs in J'
by L(J') := p(J') / m; for the whole set of jobs we use the simpler notation L := L(J). We start with the following
observation: Consider a subset of jobs J' ⊆ J with w_j / p_j = ρ for all j ∈ J' . If the jobs in J' are consecutively
scheduled on a machine in an arbitrary order starting at time τ , their contribution to the objective function
is given by
ρ ( τ p(J') + p(J')^2 / 2 ) .        (2)
This observation together with Smith's Ratio Rule leads to the following lemma which generalizes a result
of Eastman, Even, & Isaacs 1964.
Lemma 2.1. Let ρ_1 > ρ_2 > ... > ρ_q denote the different job ratios w_j / p_j occurring in J; for h = 1, ..., q
let J_h := { j ∈ J : w_j / p_j = ρ_h }. For a given schedule, denote the subset of J_h that is being assigned to
machine i by J_{h,i} . Then, the value of the schedule is given by
Σ_{j∈J} w_j (C_j - p_j / 2) = Σ_{i=1}^m Σ_{h=1}^q ρ_h ( p(J_{h,i}) Σ_{ℓ=1}^{h-1} p(J_{ℓ,i}) + p(J_{h,i})^2 / 2 ) .        (3)
Proof. On each machine i and for each 1 <= ℓ <= q, the jobs in J_{ℓ,i} are scheduled consecutively starting at
time Σ_{h=1}^{ℓ-1} p(J_{h,i}). The result thus follows from (2).
Notice that the right hand side of (3) only depends on the total processing times of the sets J h;i , but
not on the structure of these sets. In the analysis of the approximation scheme we will make use of the
following lower bound on the value of an optimal schedule:
Lemma 2.2. Using the same notation as in Lemma 2.1, the value of an arbitrary schedule is bounded
from below by
Σ_{h=1}^q ρ_h ( p(J_h) Σ_{ℓ=1}^{h-1} L(J_ℓ) + p(J_h) L(J_h) / 2 ) .        (4)
Proof. By (1) and Lemma 2.1, the value of the schedule is at least the right hand side of (3). For fixed totals
p(J_h), the right hand side of (3) is a convex function of the loads p(J_{h,i}) and is therefore minimized when
every class is split evenly over the m machines, i. e., p(J_{h,i}) = p(J_h) / m = L(J_h) for all i; plugging this in
yields the claimed bound.
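The following sketch evaluates the modified objective via the class-wise expression of Lemma 2.1, compares it with a direct computation, and also evaluates the averaged lower bound of Lemma 2.2. The formulas are the reconstructions given above, and the example schedule is hypothetical.

```python
from collections import defaultdict

def value_by_lemma_2_1(assignment, m):
    """sum_j w_j (C_j - p_j/2) machine by machine via (2)/(3);
    assignment: dict machine index -> list of (p_j, w_j)."""
    total = 0.0
    for i in range(m):
        jobs = sorted(assignment.get(i, []), key=lambda j: j[1] / j[0], reverse=True)
        tau, k = 0.0, 0
        while k < len(jobs):
            ratio = jobs[k][1] / jobs[k][0]
            block = 0.0
            while k < len(jobs) and abs(jobs[k][1] / jobs[k][0] - ratio) < 1e-12:
                block += jobs[k][0]          # total processing time of this ratio class
                k += 1
            total += ratio * (tau * block + block * block / 2.0)   # contribution (2)
            tau += block
    return total

def direct_value(assignment, m):
    total = 0.0
    for i in range(m):
        t = 0.0
        for p, w in sorted(assignment.get(i, []), key=lambda j: j[1] / j[0], reverse=True):
            t += p
            total += w * (t - p / 2.0)
    return total

def lower_bound_lemma_2_2(jobs, m):
    """Averaged lower bound (4): every class J_h split evenly over the m machines."""
    classes = defaultdict(float)
    for p, w in jobs:
        classes[w / p] += p
    bound, prefix_load = 0.0, 0.0
    for r in sorted(classes, reverse=True):
        P = classes[r]
        bound += r * (P * prefix_load + P * (P / m) / 2.0)
        prefix_load += P / m
    return bound

if __name__ == "__main__":
    schedule = {0: [(2, 2), (3, 1)], 1: [(1, 4), (4, 2)]}   # hypothetical data
    all_jobs = [j for js in schedule.values() for j in js]
    print(value_by_lemma_2_1(schedule, 2), direct_value(schedule, 2),
          lower_bound_lemma_2_2(all_jobs, 2))
```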
3 The approximation scheme for a constant range of ratios
In this section we consider the problem to minimize Σ_{j∈J} w_j (C_j - p_j / 2); we give a polynomial time approximation
scheme for instances with bounded weight to length ratios of jobs, i. e., where all job ratios w_j / p_j lie in an
interval [ρR, R] for an arbitrary R > 0 and a real constant ρ > 0 that does not depend on the input. First notice
that by rescaling the weights of jobs we can restrict to the case R = 1.
3.1 Structural insights
Let δ < 1 be an arbitrary real constant and choose a corresponding constant Δ ∈ N with δ^{Δ+1} <= ρ.
If we round up the weights of jobs such that the ratio of each job attains the nearest integer power of δ,
the value of an optimal schedule increases at most by a factor 1/δ. Since the constant δ can be chosen
arbitrarily close to 1, we may restrict to instances in which all job ratios
are of the form δ^h with h ∈ {0, 1, ..., Δ}. We use the following notation in this section: For h = 0, ..., Δ we denote
by J_h the class of all jobs j with w_j / p_j = δ^h . The completion time of machine i in some schedule is
denoted by M i .
Lemma 3.1. Let f
in an optimal schedule M i ? fM i 0 for a pair of machines
machine i processes exactly one job.
Proof. Let j denote the last job on machine i. Observe that j must start at or before time
moving j to the end of machine i 0 would decrease \Gamma j and hence the value of the schedule. Thus, we get
. By contradiction, assume that j is not the first job on machine i; denote the first job by k.
Let W i and W i 0 denote the sum of the weights of jobs scheduled on machine i and i 0 , respectively. Since
since the weight to length
ratios of all jobs are at most 1. Thus, removing job k from machine i and inserting it at the beginning
of machine i 0 changes the value of the schedule by p k (W contradicting optimality.
For the following corollary notice that there is at least one machine i with M i - L in every schedule.
Corollary 3.2. If a job j has size p j - fL, then j occupies a machine of its own in any optimal schedule.
If no job has size greater than fL, then the completion time of each machine is at most fL in any optimal
schedule.
As a result of Corollary 3.2, we can iteratively reduce a given instance: as long as a job j of size
remove it from the set of jobs J , assign it to a machine of its own, and decrease the
number of machines by one. In the following we can thus restrict to instances of the following form:
Assumption 3.3. The processing time p j of each job j as well as the completion time M i of any machine
i in an optimal schedule are bounded from above by fL.
3.2 Rounding the instance
We define a simplified, rounded version of the input for which we can compute an optimal schedule in
polynomial time. Moreover, under some assumption that we will specify later, an optimal solution to the
rounded instance will lead to a near optimal solution to the original instance. The rounding is based on
the positive integral constant λ, which will later be chosen to be 'sufficiently large'.
The rounding is done for every class J_h , 0 <= h <= Δ, separately. We will replace the jobs in J_h by new
jobs, with slightly different processing times; however, the length to weight ratio will stay at δ^h .
ffl Every 'big' job j in the original instance with p_j > L/λ is replaced by a corresponding rounded job j#
whose processing time p#_j equals p_j rounded up to the next integer multiple of L/λ. The weight w#_j
of the rounded job equals δ^h p#_j . Note that for the rounded processing time p_j <= p#_j <= p_j + L/λ
holds.
ffl Denote by S_h the total processing time of the 'small' jobs in J_h whose processing times are not
greater than L/λ. Denote by S#_h the value of S_h rounded up to the next integer multiple of L/λ. Then
the rounded instance contains S#_h λ/L new jobs, each of length L/λ. The weight of every new job equals
δ^h L/λ.
Note that the total number of jobs in the rounded instance is bounded by n. By construction we get the
following lemma.
Lemma 3.4. By replacing the rounded jobs with their unrounded counterparts, an arbitrary schedule for
the transformed instance induces a schedule of smaller or equal value for the original instance.
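A sketch of the rounding step is given below; lam stands for the integral rounding constant and the helper names are ours. It follows the construction above: big jobs are rounded up to multiples of L/lam with unchanged ratio, and the small jobs of a class are pooled and replaced by pieces of length L/lam with the same ratio.

```python
import math

def round_instance(jobs, m, lam, delta, Delta):
    """jobs: list of (p_j, w_j) whose ratios w_j/p_j are powers delta**h, 0 <= h <= Delta.
    Returns the rounded job list (p#, w#) described above."""
    L = sum(p for p, _ in jobs) / m           # average machine load
    unit = L / lam                            # rounding unit L/lam
    rounded = []
    for h in range(Delta + 1):
        ratio = delta ** h
        cls = [p for p, w in jobs if abs(w / p - ratio) < 1e-12]
        small_total = 0.0
        for p in cls:
            if p > unit:                      # 'big' job: round p up to a multiple of L/lam
                p_r = math.ceil(p / unit) * unit
                rounded.append((p_r, ratio * p_r))
            else:                             # 'small' jobs of this class are pooled ...
                small_total += p
        pieces = math.ceil(small_total / unit)
        # ... and replaced by pieces of length L/lam with the same ratio delta**h
        rounded.extend((unit, ratio * unit) for _ in range(pieces))
    return rounded
```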
The rounded instance can be solved in polynomial time, e. g., by dynamic programming. However, we
use a generalization of an alternative approach of Alon, Azar, Woeginger, & Yadid 1998; we formulate
the problem as an integer linear program whose dimension is bounded by a constant. By Assumption 3.3
and by construction, the size p#_j of each job j is bounded by fL + L/λ.
Therefore, since all job sizes are integer multiples of L/λ, there is at most a constant number Π of possible
job sizes. Moreover, since the number of possible length to weight ratios is bounded by
the constant Δ + 1, the total number of different types of jobs is bounded by the constant Π (Δ + 1).
For k = 1, ..., Π and h = 0, ..., Δ, denote by n_{k,h} the number of jobs of size k L/λ and ratio δ^h ;
define the vector n := (n_{1,0} , ..., n_{Π,Δ}). An assignment of jobs to one machine is given by a vector
u = (u_{1,0} , ..., u_{Π,Δ}), where u_{k,h} is the number of jobs of size k L/λ and ratio δ^h that are assigned to
the machine. The completion time of the machine is given by
M(u) = Σ_{k=1}^{Π} Σ_{h=0}^{Δ} u_{k,h} k L/λ ;
let c(u) denote the contribution to the objective function of a machine that is scheduled according to u.
Let L# denote the average machine load for the transformed instance; observe that by
construction L# exceeds L by at most a constant factor.
Thus, by Assumption 3.3 we can bound the completion time of each machine in an optimal schedule for the
rounded instance by a constant multiple of L. Denote by
U the set of vectors u whose completion time M(u) respects this bound. For a vector u ∈ U , each entry u_{k,h} is bounded by a
number that only depends on the constants λ, Δ, and f and is thus independent of the input. Therefore the
set U is of constant size.
We can now formulate the problem of finding an optimal schedule as an integer linear program with
a constant number of variables. For each vector u 2 U we introduce a variable xu which denotes the
number of machines that are assigned jobs according to u. An optimal schedule is then given by the
following program:
min Σ_{u∈U} c(u) x_u
subject to Σ_{u∈U} x_u = m ,
Σ_{u∈U} u x_u = n ,
x_u ∈ N_0 for all u ∈ U .
It has been shown by Lenstra (1983) that an integer linear program in constant dimension can be solved
in polynomial time.
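The configuration ILP above is solved with Lenstra's algorithm in the paper. Purely as an illustration of what an optimal schedule of a rounded instance looks like, the sketch below finds one by brute force over all machine assignments of a tiny instance; it is exponential and only meant for very small hypothetical examples, not a substitute for the ILP.

```python
from itertools import product

def machine_value(jobs):
    """Contribution sum_j w_j (C_j - p_j/2) of one machine under Smith's Ratio Rule."""
    jobs = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)
    t = total = 0.0
    for p, w in jobs:
        t += p
        total += w * (t - p / 2.0)
    return total

def optimal_schedule_bruteforce(jobs, m):
    best = float("inf")
    for assign in product(range(m), repeat=len(jobs)):
        buckets = [[] for _ in range(m)]
        for job, i in zip(jobs, assign):
            buckets[i].append(job)
        best = min(best, sum(machine_value(b) for b in buckets))
    return best

if __name__ == "__main__":
    rounded_jobs = [(1.0, 1.0), (1.0, 0.5), (2.0, 2.0), (2.0, 1.0), (3.0, 1.5)]
    print(optimal_schedule_bruteforce(rounded_jobs, m=2))
```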
3.3 Proving near optimality
By Lemma 3.4 it remains to show that the value of an optimal schedule for the rounded instance is at
most a factor of (1 above the optimal objective value of the original instance. We will prove this
under the following assumption on the original instance, and afterwards we will demonstrate how to get
rid of the assumption:
Assumption 3.5. There exists an optimal schedule of the following form: The completion time M i of
every machine i fulfills the inequality L
In order to achieve the desired precision, we now choose the integer λ sufficiently large to fulfill
inequality (5).
Since Δ, δ, f , and ε are constants that do not depend on the input, λ is also constant and independent
of the input.
Lemma 3.6. Under Assumption 3.5, the optimal objective value of the rounded instance is at most a
factor of (1 above the optimal objective value of the original instance.
Proof. Take an optimal schedule for the original instance as described in Assumption 3.5. Consider some
fixed machine i with finishing time M i in this optimal schedule. For
h denote the
subset of J h processed on machine i and let J
h denote the set of all jobs processed on machine
i. Note that
By Lemma 2.1, the contribution of machine i to the objective function is given by
Replace every big job j by its corresponding rounded big job j # . This may increase every p(J 0
a multiplicative factor of at most -+1
- . For every J 0
h , denote by s h the total size of its small jobs. Round
s h up to the next integer multiple of L
- and replace it by an appropriate number of rounded small jobs
with length to weight ratio ffi h . This may increase p(J 0
h ) by an additive factor of at most L
- . Note that
by repeating this replacement procedure for all machines, one can accommodate all jobs in the rounded
instance.
Denote by K h the resulting set of rounded jobs that are assigned to machine i and have length to
weight ratio ffi h . We have for
Now compare the first term on the right hand side of (6) for J 0
h to the corresponding term for the jobs
in K h :2
In the last inequality, we applied together with the second inequality in (5). In a similar way, we
compare the second term on the right hand side of (6) for J 0
h to the corresponding term for the jobs in
because of the first inequality in (5), we get that the right hand side of (6) fulfills:2
Putting this together with (7), (8), and (4), a short calculation shows that the objective value for the
rounded instance is at most a factor of (1 above the optimal objective value for the original instance.
Finally, we consider the case where Assumption 3.5 may be violated. By Assumption 3.3 we can
restrict to the case that only the lower bound 1
f L can be violated for some machine completion time. As
a result of Lemma 3.1 we get:
Corollary 3.7. If in an optimal schedule
f for some machine i, then every machine i 0 with
only one job; in particular, there exists a job j of size p j - L and every such job
occupies a machine of its own.
In the following we assume that there exists a job j of size p j - L; otherwise, Assumption 3.5 is true
by Corollary 3.7 and we are finished. Unfortunately, we do not know in advance whether or not M i ! L
f
for some machine i in an optimal schedule. Therefore we take both possibilities into account and compute
two schedules for the given instance such that the better one is guaranteed to be a (1 ")-approximation
by Corollary 3.7.
On the one hand, we compute an optimal schedule for the rounded instance and turn it into a feasible
schedule for the original instance as described in Lemma 3.4. On the other hand, we assign each job j with
to a machine, remove those jobs from the instance, and decrease the number of machines by the
number of removed jobs. For the reduced instance, we can recursively compute a (1 ")-approximation
by again taking the better of two schedules. Notice that in each recursion step the number of machines
is decreased by at least one; thus, after at most m \Gamma 1 steps we arrive at a trivial problem that can be
solved to optimality in polynomial time.
We can now state the main result of this section:
Theorem 3.8. There exists a PTAS for the special case of the problem P
are in a constant range [aeR; R] for an arbitrary R ? 0 and a real constant that does not depend
on the input.
4 The polynomial time approximation scheme
In this section we present the approximation scheme for arbitrary instances of P | | Σ w_j C_j . The main
idea for deriving this result is to partition the set of jobs into subsets according to their ratios w j
. The
ratios of all jobs in one subset are within a constant range such that for each subset a near optimal
schedule can be computed within arbitrary precision in polynomial time, see Theorem 3.8. In a second
step, these 'partial schedules' are concatenated in order of nonincreasing job ratios such that Smith's
Ratio Rule is obeyed on each machine.
For the sake of a more accessible analysis, we first present a randomized variant of the approximation
scheme and discuss its derandomization later. Throughout this section we assume that w_j / p_j <= 1 for all jobs j;
this is without loss of generality since all weights of jobs can be rescaled by the inverse of the maximal ratio.
4.1 The randomized approximation scheme
The partitioning step. Let Δ be a positive integer and let δ := 1/Δ; later we will choose Δ large such
that δ gets small. The partitioning of the set of jobs J is performed in two steps. The first step computes
a fine partition which is then randomly turned into a rougher partition in the second step.
1. For h 2 N let J(h) :=
\Psi .
2. Draw q uniformly at random from f1;
I q
s
J(h).
Notice that for fixed q the number of nonempty subsets J q
bounded by n. Of course, we
only take those subsets into consideration in the algorithm. The intuition for step 2 is to compensate
the undesired property of the fine partition computed in step 1 that jobs with similar ratio may lie in
different subsets of the computed partition. The random choice in step 2 assures that the probability for
those jobs to lie in different subsets of the rough partition is small, i. e., in O(1/Δ).
Computing partial schedules. The quotient of the biggest and the smallest ratio of jobs in a subset
s is bounded by the constant ffi \Delta . Thus, by Theorem 3.8, we can compute in polynomial time for all
nonempty sets of jobs J q
s near optimal m-machine schedules of value Z q
s
, where Z q
s
denotes the value of an optimal schedule for J q
s .
The concatenation step. In the final step of the algorithm these partial schedules are concatenated.
One possibility is to do it machine-wise: On machine i, all jobs that have been assigned to i in the partial
schedules are processed according to Smith's Ratio Rule. However, this deterministic concatenation can
lead to an undesired unbalance of load on the machines. It might for example happen that each subset
of jobs J q
s consists of at most one job which is always assigned to machine 1 in the corresponding partial
schedule. Thus, concatenating the partial schedules as proposed above would leave all but one machine
idle.
Therefore, we first randomly and uniformly permute the numbering of machines in each partial schedule
and then apply the machine-wise concatenation described above. In this randomly generated schedule
the probability for two jobs from different subsets to be processed on the same machine is equal to 1/m,
such that one can expect an appropriately balanced machine assignment.
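The partition-and-concatenate structure of the randomized scheme can be sketched as follows. The exact interval boundaries of the fine partition and the index blocks I^q_s are assumptions here (a geometric partition of the ratios and blocks of Δ consecutive classes with random offset q), and the per-subset schedules are taken as given input; only the framework is illustrated, not the PTAS itself.

```python
import random
from collections import defaultdict

def fine_partition(jobs, base):
    """J(h): jobs whose ratio w/p lies in (base**(h+1), base**h]; 0 < base < 1 and
    ratios <= 1 are assumed (illustrative choice of class boundaries)."""
    classes = defaultdict(list)
    for p, w in jobs:
        h = 0
        while w / p <= base ** (h + 1):
            h += 1
        classes[h].append((p, w))
    return classes

def rough_partition(classes, Delta, q):
    """Merge Delta consecutive ratio classes into one subset, with random offset q."""
    subsets = defaultdict(list)
    for h, js in classes.items():
        subsets[(h + Delta - q) // Delta].extend(js)
    return [subsets[s] for s in sorted(subsets)]

def concatenate(partial_schedules, m):
    """Machine-wise concatenation with a fresh random renumbering of machines per
    subset; each partial schedule is a list of m per-machine job lists in Smith order."""
    loads, total = [0.0] * m, 0.0
    for partial in partial_schedules:
        perm = random.sample(range(m), m)
        for i in range(m):
            for p, w in partial[perm[i]]:
                loads[i] += p
                total += w * loads[i]      # w_j * C_j of the concatenated schedule
    return total

if __name__ == "__main__":
    jobs = [(2, 1.0), (1, 1.0), (4, 0.5), (3, 0.01)]
    subsets = rough_partition(fine_partition(jobs, base=0.5), Delta=4,
                              q=random.randint(1, 4))
    partials = []                                   # trivial stand-in scheduler, m = 2
    for sub in subsets:
        sub = sorted(sub, key=lambda j: j[1] / j[0], reverse=True)
        partials.append([sub[0::2], sub[1::2]])
    print(concatenate(partials, m=2))
```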
4.2 The analysis of the randomized approximation scheme
The analysis is based on the observation that the value of the computed schedule is composed of the
sum of the values of the partial schedules plus the additional cost caused by the delay of jobs in the
concatenation step. It is easy to see that the sum of the values of the near optimal partial schedules
cannot substantially exceed the value of an optimal schedule.
The key insight for the analysis is that the delay of jobs in one subset caused by another subset in
the concatenation step can essentially be neglected. One reason is that the delayed jobs usually (i. e.,
with high probability) have much smaller ratio and are thus less important than the jobs which cause
the delay. On the other hand, if there are too many 'unimportant' jobs to be neglected, then the total
weighted completion time of the corresponding near optimal partial schedule must be large compared to
the delay caused by the important jobs.
The following lemma provides two lower bounds on the value of an optimal schedule Z . To simplify
Lemma 4.1. For each q 2 \Deltag, the value Z of an optimal schedule is bounded by
Z -
Z q
s
and Z - mX
Proof. Take an optimal schedule for the set of jobs J and denote the completion time of job j in this
schedule by C
j . This yields
s
Z q
s
since the completion times C
also define a feasible schedule for each subset of jobs J q
s . In order to prove
the second lower bound, first observe that the value of an optimal schedule decreases if we round the
weights of jobs j 2 J(h) to w N. The result then follows from Lemma 2.2.
The next step in the analysis is to determine an expression for the expected value of the computed
schedule.
Lemma 4.2. The expected value of the computed schedule is given by
Z q
s
min
Proof. We first keep q fixed and analyze the conditional expectation E q
\Theta P
. This conditional
expectation is equal to the sum of the values of the partial schedules
s plus the expected cost
caused by the delay of jobs in the concatenation step.
The expected delay of an arbitrary job j 2 J can be determined as follows. Let t 2 N 0 and k 2 I q
such that j 2 J(k) ' J q
t . Then, the expected delay of job j is equal to the expected load caused by
jobs from
s on the machine which job j is being processed on. Since the machines are permuted
uniformly at random in the concatenation step, the expected load is equal to the average load
s
lie in different sets of indices I q
As
a consequence, for fixed q, the conditional expectation of the total weighted completion time can be
as:
Z q
Notice that for randomly chosen q the expected value of j q
h;k is equal to the probability that h and k lie
in different sets of indices I q
By construction of the sets of indices I q
r , this probability is equal
to k\Gammah
\Delta. The result thus follows from (11).
The following theorem contains the main result of this subsection.
Theorem 4.3. For a given
\Upsilon
. Then, the expected value of the computed schedule
is bounded by times the value of an optimal solution, i. e.,
Proof. We compare the expected value given in Lemma 4.2 to the lower bounds on the value of an optimal
schedule given in Lemma 4.1. Since, by construction of the algorithm, Z q
s
, for each q, and
s
- Z by Lemma 4.1, the first term on the right hand side of (10) can be bounded from above
by
Z q
s
In order to bound the second term on the right hand side of (10), first observe that for each k 2 N
Thus, the inequality for the geometric and the arithmetic mean yields:
Notice that (12) bounds the cost for the possible delay of jobs in J(k) caused by jobs in J(h) in terms
of the lower bounds on optimal schedules for J(h) and J(k) in (9). In particular, if then the
cost for the delay is small compared to the sum of the values of optimal schedules for J(h) and J(k). For
the cases 2, however, this cost may be too large to be neglected. This is the point
where we will make use of the random choice of q.
To be more precise, we divide the sum over all pairs h ! k in the second term of (10) into three
partial sums . The first partial sum \Sigma 1 takes all pairs with account and can
thus be bounded by
by (12)
The second partial sum \Sigma 2 takes all pairs with account and can be bounded using the
same arguments
by (12)
Finally, the third partial sum \Sigma 3 takes all pairs with k - h account. In this case we replace the
\Psi by 1 (i. e., we do not make use of the random choice of q) and get
h-k\Gamma3
h-k\Gamma3
Putting the results together, the expected value of the computed schedule is bounded by
The second inequality follows from the choice of ffi and a short calculation.
4.3 The deterministic approximation scheme
Up to now we have presented a randomized approximation scheme, i. e., we can efficiently compute schedules
whose expected values are arbitrarily close to the optimum. However, it might be more desirable to
have a deterministic approximation scheme which computes schedules with a firm performance guarantee
in all cases. Therefore we discuss the derandomization of the randomized approximation scheme.
Since the random variable q can only attain a constant number of different values, we can afford to
derandomize the partition step of the algorithm by trying all possible assignments of values to q. In
the following discussion we keep q fixed. The derandomization of the concatenation step is slightly more
complicated. We use the method of conditional probabilities, i. e., we consider the random decisions one
after another and always choose the most promising alternative assuming that all remaining decisions
will be made randomly.
Thus, starting with the partial schedule for J q
0 , we iteratively append the remaining partial schedules
for J q
to the current schedule. In each iteration t we use a locally optimal permutation of
the machines, which is given in the following way. For i = 1, ..., m, denote by M_i the load or completion
time of machine i in the current schedule of the jobs in J^q_0 ∪ ... ∪ J^q_{t-1} . Renumber the machines such that
M_1 <= M_2 <= ... <= M_m , and let W_i denote the sum of the weights of jobs in J^q_t that are processed
on machine i in the corresponding partial schedule. Renumber the machines in the partial schedule such
that W_1 >= W_2 >= ... >= W_m . It is an easy observation, which can be proved by a simple exchange
argument, that appending the partial schedule machine-wise to the current schedule according to the
given numbering of machines minimizes the cost caused by the delay of jobs in J q
t . In particular, this
cost is smaller than the expected cost for the randomized variant of the algorithm.
Moreover, the permutation of the machines chosen in iteration s does not influence the expected delay of
jobs considered in later iterations (since the expected delay is simply the average machine load).
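The locally optimal machine matching used in each iteration of the derandomized concatenation can be sketched as follows: machines of the current schedule sorted by nondecreasing load are paired with the machines of the next partial schedule sorted by nonincreasing total weight. The matching rule is a reconstruction of the description above, and the data structures and names are illustrative.

```python
def match_and_append(current, partial):
    """current: list of (load_i, jobs_i) for the schedule built so far;
    partial: list of per-machine job lists of the next subset's schedule.
    Least-loaded machines receive the heaviest-weight machine groups."""
    m = len(current)
    by_load = sorted(range(m), key=lambda i: current[i][0])
    by_weight = sorted(range(m), key=lambda i: -sum(w for _, w in partial[i]))
    for i_cur, i_par in zip(by_load, by_weight):
        load, jobs = current[i_cur]
        for p, w in partial[i_par]:
            load += p
            jobs.append((p, w))
        current[i_cur] = (load, jobs)
    return current
```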
As a result of the above discussion, the value of the schedule computed by the deterministic algorithm
is bounded from above by the expected value of the schedule computed by its randomized variant given
in Subsection 4.1. Thus, as a consequence of Theorem 4.3 we can state the following main result of this
paper.
Theorem 4.4. There exists a polynomial time approximation scheme for the problem P | | Σ w_j C_j .
Acknowledgements
The authors would like to thank the organizers of the Dagstuhl-Seminar 98301 on
'Graph Algorithms and Applications' during which the result presented in this paper has been achieved.
--R
Approximation schemes for scheduling on parallel machines.
Personal communication.
Theory of Scheduling.
Bounds for the optimal scheduling of n jobs on m processors.
Computers and Intractability: A Guide to the Theory of NP- Completeness
Rinnooy Kan.
Scheduling to minimize average completion time: Off-line and on-line approximation algorithms
Worst case bound of an LRF schedule for the mean weighted flow-time problem
Approximability and nonapproximability results for minimizing total flow time on a single machine.
Approximating total flow time on parallel machines.
Minimizing average completion time in the presence of release dates.
Algorithms for scheduling independent tasks.
Various optimizers for single-stage production
Semidefinite relaxations for parallel machine scheduling.
When does a dynamic programming formulation guarantee the existence of an FPTAS?
--TR
--CTR
Nicole Megow , Marc Uetz , Tjark Vredeveld, Models and Algorithms for Stochastic Online Scheduling, Mathematics of Operations Research, v.31 n.3, p.513-525, August 2006
Mark Scharbrodt , Thomas Schickinger , Angelika Steger, A new average case analysis for completion time scheduling, Journal of the ACM (JACM), v.53 n.1, p.121-146, January 2006
Martin Skutella, Convex quadratic and semidefinite programming relaxations in scheduling, Journal of the ACM (JACM), v.48 n.2, p.206-242, March 2001 | approximation scheme;combinatorial optimization;worst-case ratio;scheduling theory;approximation algorithm |
351414 | Machine-adaptable dynamic binary translation. | Dynamic binary translation is the process of translating and optimizing executable code for one machine to another at runtime, while the program is executing on the target machine. Dynamic translation techniques have normally been limited to two particular machines; a competitor's machine and the hardware manufacturer's machine. This research provides for a more general framework for dynamic translations, by providing a framework based on specifications of machines that can be reused or adapted to new hardware architectures. In this way, developers of such techniques can isolate design issues from machine descriptions and reuse many components and analyses. We describe our dynamic translation framework and provide some initial results obtained by using this system. | INTRODUCTION
Binary translation is a migration technique that allows software
to run on other machines achieving near native code
performance. Binary translation grew out of emulation
techniques in the late 1980s in order to provide for a migration
path from legacy CISC machines to the newer RISC machines.
Such techniques were developed by hardware manufacturers
interested in marketing their new RISC platforms. From mid
1990, binary translation techniques have been used to translate
competitors' applications to the desired hardware platform. In
the near future, we can expect to see such techniques being used
to optimize programs within a family of computers, for example,
by optimizing Sparc architecture binaries to UltraSparc
architecture binaries.
UQBT, the University of Queensland Binary Translator, has
developed techniques, specification languages and a complete
framework for performing static translations of code [14,19]. In
static binary translation, the code is translated off-line, before the
program is run, by creating a new program that uses the machine
instructions of the target machine. However, static translation
has its limitations. Due to the nature of the von Neumann
machine, where code and data are represented in the same way,
it is not always possible to discover all the code of a program
statically. For example, the target(s) of indirect transfers of
control such as jumps on registers are sometimes hard to analyse
statically. Therefore, a fall-back mechanism is commonly used
with a statically translated program, in the form of an interpreter.
The interpreter processes any untranslated code at runtime and
returns to translated code once a suitable path is found.
The limitations of static binary translation are overcome with
dynamic translation, at the expense of performance. In a
dynamic binary translator, code gets translated "on the fly", at
runtime, while the user perceives ordinary execution of the
program on the target machine. As opposed to emulation,
dynamic translation generates native code and performs on-demand
optimizations of the code. Hot spots in the code are
optimized at runtime to increase the performance of execution of
such code. Further, some optimizations that are not possible
statically are possible dynamically.
In this paper we describe the design of a machine-adaptable
dynamic binary translator based on the static UQBT framework -
UQDBT. A tool is said to be machine-adaptable when it can be
"configured" to handle different source and/or target machines.
In this way, a machine-adaptable dynamic binary translator is
capable of being configured for different source and target
machines through the specification of properties of these
machines and their instruction sets. In other words, the
translator is not bound to two particular machines (as per
existing translators) but is capable of supporting a variety of
source and target machines.
UQDBT differs from other dynamic translators in that it provides
a clean separation of concerns, by allowing machine-dependent
information to be specified, as well as performing machine-independent
analyses to support machine-adaptability. In this
way, UQDBT can support a variety of CISC and RISC machines
at low cost. To support a new machine, the specifications for
that machine need to be written and most of the UQDBT
framework can be reused. New machine-specific modules may
need to be added if a particular feature of a machine is not
supported by the UQDBT framework and such feature is not
generic across different architectures.
This paper is structured in the following way. Section 2
discusses the static and dynamic frameworks for binary
translation. Section 3 outlines the research problems of machine
adaptable binary translation and what is addressed in UQDBT.
Section 4 provides a case study of translation in the framework
through an example program. Section 5 shows preliminary
results in the use of the framework. Section 6 discusses effects
of changing the granularity of translation and conclusions are
given in Section 7. The work reported herein is work in progress.
1.1 Related work
In an attempt to improve on existing emulation techniques,
companies in the late 1980s began using binary translation to
achieve native code performance. Perhaps the most well known
binary translators are Digital's VEST and mx[1], which translate
VAX and MIPS machine instructions to 64-bit Alpha
instructions. Both of these translators and others, like Apple's
MAE[2] and Digital's Freeport Express[3] have a runtime
environment that reproduces the old machine's operating
environments. The runtime environment offers a fallback
interpreter for processing old machine code that was not
discovered at translation time, for example, due to indirect
transfers of control.
In recent years, we have seen a transition to hybrid translators,
which are proving to be extremely successful. The process of
mixing translation with emulation and runtime profiling brought
about some of the leading performers in the hybrid translation
scene - Digital's FX!32[4], Executor by Ardi[5] and Sun's
Wabi[6]. FX!32 emulates the program initially and statically
translates it in the background, using information gathered
during profiling. Embra[7], a machine simulator, is built using
dynamic translation techniques that were developed in Shade; a
fast instruction-set simulator for execution profiling [17]. Le[8]
investigates out-of-order execution techniques in dynamic binary
translators, though their results are based on an interpreter-based
implementation. Many of the optimization techniques used in
dynamic translators have been derived from dynamic compilers
such as SELF[9] and tcc[10]. Runtime optimizations in such
compilers can provide 0.9x - 2x the performance of statically
compiled programs. Such techniques have also been used in
Just-in-time (JIT) compilers for Java. JITs from Sun[11],
and others dynamically generate native machine code at
runtime.
To date, none of the current binary translators can generate code
for more than one source and target machine pair. The machine-dependent
aspects of the translation are hard coded into the
translator, making it hard to reuse the translator's code for
another set of machines. Our research differs from previous
research in that machine-dependent issues are separated from
machine-independent translation concerns, hence providing a
way of specifying different machines (source and target
machines) and supporting those specifications through reusable
components, which implement the machine-independent
analyses. This paper shows that this process is feasible and
therefore enhances the reuse of code for the creation of dynamic
binary translators. However, the machine-adaptability of the
translator comes at the cost of performance, which is discussed in
Section 5.
2. BINARY TRANSLATION
FRAMEWORKS
Binary translation is a process of low-level re-engineering; that
is, decoding to a higher level of abstraction, followed by
encoding to a lower level of abstraction. Figure 1 gives a block-
view of the UQBT static translation framework [14,19]. The re-engineering
process is divided into the initial reverse engineering
phase on the left-hand side and the forward engineering phase on
the right-hand side. The reverse engineering steps recover the
semantic meaning of the machine instructions by a three-step
process of decoding the binary file, decoding the machine
instructions of the code segment, and mapping such instructions
to their semantic meaning in the form of register transfer lists
(RTLs). The high-level analysis process lifts the level of
representation of the code to a machine-independent form,
performs binary translation specific optimizations on the code,
and then brings down the level of abstraction to RTLs for the
target machine. This is followed by the forward engineering
process of optimizing the code, encoding the instructions into
machine code and storing the code and data of the program in a
binary file. The forward engineering process is standard
optimizing compiler code generation technology.
RTL is a simple, low-level register-transfer representation of the
effects of machine instructions. A single instruction corresponds
to a register-transfer list, which in UQBT is a sequential
composition of effects. Each effect assigns an expression to a
location. All side effects are explicit at the top level; expressions
are evaluated without side effects, using purely functional RTL
operators. An RTL language is a collection of locations and
operators. For a machine M, the sub-language M-RTL is defined
as those RTLs that represent instructions of machine M in a
single RTL.
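To make the structure concrete, the sketch below models an RTL as a list of effects, each assigning a side-effect-free expression to a location; the class names and fields are invented for illustration and do not correspond to the actual UQBT data structures.

#include <memory>
#include <string>
#include <vector>

// Minimal model of a register transfer list.  A location is something that
// can be assigned to (a register or a memory cell); an expression is a
// side-effect-free tree built from locations, constants and purely functional
// RTL operators; an effect assigns an expression to a location at a given
// bit width; an RTL is the sequential composition of the effects of one
// machine instruction.
struct Location {
    enum class Kind { Register, Memory } kind;
    std::string name;   // e.g. "r[8]" or "m[r[14] + 4]"
};

struct Expr {
    std::string op;                            // operator name or terminal value
    std::vector<std::shared_ptr<Expr>> args;   // empty for terminals
};

struct Effect {
    int width;                    // e.g. 32 for a *32* transfer
    Location target;
    std::shared_ptr<Expr> value;
};

struct RTL {
    unsigned long sourceAddress;  // address of the decoded instruction
    std::vector<Effect> effects;  // executed in sequence
};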
As previously mentioned, the problems with static binary
translation are the inability to find all the code that belongs to a
program and the limitation of optimizations to static ones,
without taking advantage of dynamic optimization techniques.
Figure 1. Static binary translation framework

One of the hardest problems to solve during the decoding of the
machine instructions is the separation of code from data - any
binary-manipulation tool faces the same problem. Unfortunately,
this problem is not solvable in general as both code and data are
represented in the same way in von Neumann machines. This
makes static translation incomplete and hence a runtime support
environment is needed in the form of an interpreter, for example.
2.1 Dynamic binary translation framework
In dynamic binary translation, the actual translation process takes
place on an "as needed" basis, whereas static binary translation
attempts to translate the entire program at once. Figure 2
illustrates a typical framework for a dynamic translator that uses
a basic block as the unit of translation (i.e. its granularity). The
left-hand side is similar to that of a static translator, but the
processing of code is done at a different level of granularity
(typically, one basic block at a time). The right-hand side is a
little different to that of static translation. The first time a basic
block is translated, assembly code for the target machine is
emitted and encoded to binary form. This binary form is run
directly on the target machine's memory as well as being kept in
a cache. A mapping of the source and target addresses of the
entire program for that basic block is stored in a map. If a basic
block is executed several times, when the number of executions
reaches a threshold, optimizations on the code are performed
dynamically to generate better code for that hot spot. Different
levels of optimization are possible depending on the number of
times the code is executed.
Optimized code then replaces the cached version of that basic
block's code. The processing of basic blocks is driven by a
switch manager. The switch manager determines whether a new
translation needs to be performed by determining whether there
is an entry corresponding to a source machine address in the
map. If an entry exists, the corresponding target machine
address is retrieved and its translation is fetched from the cache.
If a match is not found, the switch manager directs the decoding
of another basic block at the required source address.
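A minimal sketch of such a switch manager is shown below, assuming a hash map from source basic-block addresses to translated entry points; the type names, the map and the translate function are hypothetical stand-ins for the address mapping, translation cache and translator of Figure 2, not actual UQDBT interfaces.

#include <cstdint>
#include <unordered_map>

using SourceAddr = std::uint32_t;
using TargetCode = void (*)();   // entry point of a translated basic block

// Address map: source basic-block address -> entry point in the translation cache.
static std::unordered_map<SourceAddr, TargetCode> addressMap;

// Placeholder for the decode / lift / encode pipeline that translates one
// basic block and stores the generated native code in the translation cache.
TargetCode translateBasicBlock(SourceAddr src) {
    (void)src;
    return [] {};   // captureless lambda converts to a plain function pointer
}

// The switch manager runs between basic blocks: it reuses an existing
// translation when the source address is already in the map, and otherwise
// directs the translation of a new basic block at the required address.
void switchManager(SourceAddr next) {
    auto it = addressMap.find(next);
    TargetCode code = (it != addressMap.end()) ? it->second
                                               : (addressMap[next] = translateBasicBlock(next));
    code();   // continue execution in native code
}

In the framework described here, a translated block ends by branching back into this dispatcher whenever its successor is not yet known, which is why cached registers are flushed at basic-block boundaries.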
2.2 Machine-adaptable dynamic binary
translation framework
Figure 3 extends Figure 2 to enable a dynamic translator to easily
adapt to different source and target machines. This effort is
achieved by a clean separation of concerns between machine-dependent
information and machine-independent analyses.
Through the use of specifications, a developer is able to
concentrate on writing descriptions of properties of machines
instead of having to (re)write the tool itself. The use of
specifications to support machine-dependent information can also
generate parts of the system automatically and provide a skeleton
for the user to work on.
Figure 2. Dynamic binary translation framework
Figure 3. Machine-adaptable dynamic binary translation framework

As seen in Figure 3, the decoding of the binary file to source
machine RTLs (Ms-RTLs) requires the description of the binary-
file format of the program, and the syntax and semantics of the
machine instructions for a particular processor. We have
experimented with three different languages, reusing the SLED
language and developing our own BFF and SSL languages:
. BFF: the binary-file format language supports the
description of a binary-file's structure [15]. Current formats
supported are DOS EXE, Solaris ELF and to a certain
extent, Windows PE. SRL, a simple resourceable loader,
supports the automatic generation of code to decode files
specified using the BFF language.
. SLED: the specification language for encoding and
decoding supports the description of the syntax of machine
instructions; i.e., its binary to assembly mnemonic
representation [18]. SLED is supported by the New Jersey
machine-code toolkit [13]. The toolkit provides partial
support for automatically generating an instruction decoder
for a particular SLED specification. Current machines
specified in this form include the Pentium, SPARC, MIPS
and Alpha.
. SSL: the semantic specification language allows for the
description of the semantics of machine instructions. SSL is
supported by SRD [16]. SRD is the semantic mapper
component, which supports the parsing of SSL files and the
storing of such information in the form of a dictionary,
which can be instantiated dynamically. The output of this
stage is Ms-RTLs.
Ms-RTLs are converted to machine-independent RTLs (I-RTLs)
through analyses, which remove machine dependent concepts of
the source machine. This process identifies source machine's
control transfers and maps it to the more general forms in I-RTL.
For example: the following SPARC Ms-RTL for a call instruction
is associated as a high-level call instruction in I-RTL. Other
forms of transfers of control that exist in I-RTL are jumps,
returns, conditional and unconditional branches. I-RTL supports
register transfers, stack pushes and pops, high-level control
transfers, and condition code functions. Some of these higher-level
instructions allow for abstraction from the underlying
machine. I-RTLs are converted into Mt-assembly instructions by
mapping the functionality of such register transfers to the
instructions available on the target machine, assisted by the SSL
specifications. The instruction encoding process is supported by
the SLED specification language, which maps assembly
instructions into binary form. This code is then stored in the
translator's cache for later reference.
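The per-basic-block flow just described can be summarised as a pipeline of stages. The sketch below mirrors that flow with invented types and trivial placeholder bodies; it illustrates the shape of the translation step rather than the actual UQDBT interfaces.

#include <cstdint>
#include <vector>

struct MsRTL {};   // machine-dependent register transfers of the source machine
struct IRTL  {};   // machine-independent register transfers
struct MtAsm {};   // assembly instructions for the target machine

using Bytes = std::vector<std::uint8_t>;

// Decode one source basic block into Ms-RTLs (SLED/SSL driven in UQDBT).
std::vector<MsRTL> decodeBasicBlock(const Bytes&, std::uint32_t) { return {}; }

// Lift Ms-RTLs to I-RTLs with inexpensive analyses (control transfers, etc.).
std::vector<IRTL> liftToIRTL(const std::vector<MsRTL>&) { return {}; }

// Select target instructions for the I-RTLs, assisted by the target SSL spec.
std::vector<MtAsm> selectTarget(const std::vector<IRTL>&) { return {}; }

// Encode the target assembly to binary form (SLED driven), ready for the cache.
Bytes encode(const std::vector<MtAsm>&) { return {}; }

// One translation step: the bytes of a source basic block in, native code out.
Bytes translateOneBlock(const Bytes& image, std::uint32_t sourceAddress) {
    return encode(selectTarget(liftToIRTL(decodeBasicBlock(image, sourceAddress))));
}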
As an example, Figure 4 shows the various instruction
transformations during the translation of a Pentium machine
instruction to a SPARC machine instruction. The reverse-engineering
stage decodes the Pentium binary code (0000 0010
1101 1000) to produce Pentium assembly code, which is then
lifted to Pentium-RTLs and finally abstracted to I-RTL by
replacing machine-dependent registers with virtual registers.
The forward-engineering phase encodes the I-RTL to SPARC-
RTL, SPARC assembly instructions and finally SPARC binary
code.
2.3 Specification requirements in dynamic
binary translation
Dynamic translation cannot afford time-consuming analyses to
lift the level of representation to a stage that resembles a high-level
language, as per UQBT [14]. In UQBT, static analyses
recover procedure call signatures including parameters and
return values, thereby allowing the generated code to use native
calling and parameter conventions on the target machine. If such
analyses were used in dynamic translation, high performance
degradation would be experienced during the translation.
The alternative to costly analyses to remove properties of the
underlying source machine is to go halfway to a high-level
representation. We support inexpensive analyses to recover a
basic form of high-level instructions (such as conditional
branches and calls without parameters) and we emulate (rather
than abstract away from) conventions used by the hardware and
operating system in the source machine (i.e. without using the
native conventions on the target machine). Both these steps are
possible through the specification of features of the underlying
hardware. For example, we emulate the SPARC architecture
register windowing mechanism on a Pentium machine by
specifying how this mechanism works. On a SPARC machine,
we emulate the Pentium stack parameter passing convention.
However, we do not emulate the SPARC processor delayed
transfers of control as we support higher level branching
instructions. Clearly, this compromise has a performance impact
on the translated code, but it provides a fast way of translating
code, which can then be optimized at runtime if it becomes a
hotspot in the program.
In order to support the translation of Ms-RTLs to I-RTLs, the
SSL language has been extended from machine instruction level
semantics to include hardware semantics as well. For example,
for the SPARC architecture, the effects of changing register
windows and how each register in the current window is
accessed were specified, and for the Pentium architecture,
properties about the stack movement were specified. This
information is currently not used by UQBT; only by UQDBT.
UQBT relies on costly analysis to abstract higher-level
information, without depending on very low-level details of the
underlying hardware.
addl %ebx, %eax
add %o1, %o2, %o1
Figure 4. Pentium to SPARC example
3. RESEARCH PROBLEMS
Unlike other dynamic binary translators that are written with a
fixed set of source and destination machines in mind, UQDBT is
designed to handle a wide range of CISC and RISC machine
architectures. While some translators can directly map source
machine-specific idioms to the target machine, such translators
are bound to work only under that source/target pair. To extend
those translators to support different machines, extensive
rewriting of the code is needed, as the direct idiom mapping
between machines is different. The goal of UQDBT is to provide
a framework that can be modified and extended with ease to
support additional source and target machines without the need
to rewrite a new translator from scratch. The process of finding a
generalization of all existing (and future) machines is non-trivial
and cannot be fully predicted. UQDBT uses UQBT's approach
of specifying properties of machine instruction sets that are
widely available in today's machines and allowing the user to
extend the specification language to support new features of
(future) machines to reuse the rest of the translation framework.
As with UQBT, we use multi-platform operating systems to
concentrate on the more fundamental issues of instruction
translations.
UQDBT's goal was to address the following types of research
problems in dynamic machine-adaptable binary translation:
1. What is the best way of supporting the machine-dependent
to machine-independent RTL translation? The main criteria in
the translation is efficiency, hence expensive analyses are not an
option. Further, the translation needs to be supported by the
underlying specification language, in order to generate Ms-RTLs
that contain enough information about the underlying Ms
machine.
2. How much state of the source machine is needed for
dynamic translation and what effects does this have on
specification of such properties of the machine?
3. What is the best way of automating the transformation of I-
RTLs down to Mt-assembly code? Can a code selector be
automatically generated from a target machine specification?
4. Is it possible to efficiently use specifications that contain
information about operating system conventions, such as calling
and parameter conventions used by the OS to communicate with
the program? For example, in order to use Pentium's stack
parameter convention in code that was translated from a SPARC
architecture binary (which passes parameters on registers),
analysis to determine the parameters needs first to be performed.
3.1 Implementation of UQDBT
We have been experimenting with the right level of description
required in order to support dynamic translation based on
specifications. In our experience, too low-level or high-level a
description of the underlying machine is unsuitable. We view
UQBT's semantic specifications of machines as a high-level
description, as they only describe the machine instruction
semantics but do not specify the underlying hardware that
supports the control transfer instructions (e.g. the register
windows and delayed instructions on the SPARC architecture).
In UQBT, such detailed level of information is not needed
because of the specification and use of calling conventions and
control transfer instruction. Further, other semantic description
languages are used to describe all the low-level details of the
underlying machine; such languages are suitable for emulation
purposes but contain too much information for dynamic
translation.
The first problem above has been addressed by specifying how
the hardware works in relation to control transfer instructions;
this provides for a fast translation of Ms-assembly instructions
into information-rich Ms-RTLs (the extended ones), avoiding the
need to recover information at runtime. The following are the
types of information that have been described for SPARC and
Pentium processors:
. The effects of the SPARC register windowing mechanism
. Stack properties
. Memory alignment
. Parameters and return locations.
The SPARC machine allocates a new set of working registers
each time a SAVE instruction is called. In other words, it
effectively provides an infinite number of registers for program
use. The effect of the SPARC register windows is captured by
extending SSL to specify how each of the registers is
accessed and how the register windows change during each
save and restore instruction, hence providing a different set
of working registers. This provides accurate simulation on target
machines that only have a limited amount of usable registers.
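As a rough illustration of what such a specification has to capture, the sketch below models the windowing behaviour on a flat register file: on save the in registers of the new window alias the out registers of the caller, and restore undoes the mapping. The window count, the class layout and the omission of overflow/underflow traps are simplifications, not the SSL specification itself.

#include <array>
#include <cassert>
#include <cstdint>

// Simplified model of SPARC register windows on a flat register file: each
// window sees 8 in, 8 local and 8 out registers, the globals are shared, and
// the in registers of the window entered by save alias the out registers of
// the caller's window.
class RegisterWindows {
public:
    static constexpr int kWindows = 8;   // illustrative value

    std::uint32_t& global(int g) { assert(g >= 0 && g < 8); return globals_[g]; }

    // Registers r8..r31 of the current window: %o0..%o7, %l0..%l7, %i0..%i7.
    std::uint32_t& windowed(int r) {
        assert(r >= 8 && r < 32);
        return file_[(cwp_ * 16 + (r - 8)) % (kWindows * 16)];
    }

    void save()    { cwp_ = (cwp_ + kWindows - 1) % kWindows; }   // enter a new window
    void restore() { cwp_ = (cwp_ + 1) % kWindows; }              // back to the caller's window

private:
    int cwp_ = 0;                                     // current window pointer
    std::array<std::uint32_t, 8> globals_{};
    std::array<std::uint32_t, kWindows * 16> file_{};
};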
The effects of the stack pointer are different on different types of
machines. On Pentium machines, the stack pointer can change
indefinitely within a given procedure. On RISC machines, the
stack pointer is normally constrained to a pre-allocated stack
frame's fixed size that includes enough space for all register
spills of that procedure. Specifying how the stack changes in the
original machine suggests ways for the code generator to
generate stack manipulation instruction on the target machine.
For example, simulating stack pushing and popping on a SPARC
machine.
Memory alignment places constraints on how the machine state
at a particular point in the program should be. In SPARC, the
frame pointer and stack pointer need to be double word aligned.
Thus, the code generator needs to enforce such conditions before
entry to or exit from a call.
Differences in machine calling conventions, namely how the
parameters are passed and where return values are stored, play a
crucial part on how the code generator constructs the right setup
when calling native library functions. SPARC generally passes
parameters in registers while Pentium pushes them on the stack.
This information is needed for both source and target machines
to identify the transformation of parameters and return values.
Differences in endianness between source and target machines
requires byte swapping to be performed when loading and storing
data. Byte swapping is an expensive process. Although the
Pentium can do byte swapping quite easily, it takes several
SPARC V8 instructions for a 32-bit swap. This is an expensive
process and should be avoided if possible. In particular, when
running a Pentium binary on a SPARC, every push and pop
instruction (which appears quite often in Pentium programs) will
require byte swapping. Heuristics are used in UQDBT to avoid
byte swapping on pushing and popping to the stack.
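For reference, a 32-bit byte swap of the kind required when crossing endianness can be written as below; this is a generic routine, not the instruction sequence UQDBT emits.

#include <cstdint>

// Reverse the byte order of a 32-bit word, e.g. 0x12345678 -> 0x78563412.
// A translator targeting a machine of the opposite endianness has to apply
// this (or an equivalent instruction sequence) around loads and stores of
// data shared with the simulated source machine.
std::uint32_t byteSwap32(std::uint32_t x) {
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}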
The second problem above is related to the first one. The
amount of source machine state that is carried across depends on
the effectiveness of translation in the first problem. For areas
that are not easily specified or unspecified, they are carried
across and are apparent within the machine-independent RTL. In
UQDBT, control transfer instructions will contain a tag
indicating how to process its delayed slot instruction (in
architectures that support delayed slots).
The third problem above is current work in progress. The goal is
not only to automatically construct the code generator, but also
determine the best performance heuristics for selecting target
machine instruction when encountering similar patterns. Some
patterns may never be matched or may be nearly impossible to
match. For example, trying to pattern match a SPARC save
instruction with some Pentium-RTLs.
Our experiences with the fourth problem above suggest that
performance can be gained by using native OS conventions.
UQDBT currently simulates the calling convention for Pentium
programs on a SPARC machine, i.e. parameters are passed on
the stack instead of in registers. To remove this simulated effect
and convert it to use native conventions, one needs to know:
. How much improvement does it offer over direct machine
simulation?
. At what level should this conversion occur?
. Is it worth while doing such analysis in a dynamic binary
translation environment?
4. CASE STUDY
In this section we show an example of a small Pentium program
converted by UQDBTps (the "ps" postfix indicates translations
from Pentium to SPARC architectures) to run on a SPARC
machine. Both programs are for the Solaris operating system.
The main differences of the two test machines are:
1. SPARC is a RISC architecture, whereas Pentium is CISC.
2. SPARC is big-endian, while Pentium is little-endian.
3. SPARC passes parameters in registers (and sometimes on
the stack as well), while Pentium normally passes them on
the stack.
4.1 Basic block translations and address
mappings
Figure 5 is the disassembly of a "Hello World" binary program
compiled for the Pentium machine running Solaris. The first
column is the source address seen by the Pentium processor. The
second and third columns are the actual Pentium binaries and their
corresponding assembly representation.

8048919: 8b ec movl %esp,%ebp
8048920: e8 9b fe ff ff call 0xfffffe9b
8048925:
804892a: eb 00 jmp 0x0
804892c: c9 leave
804892d: c3 ret
Figure 5. "Hello World" x86 disassembly

8048918: PUSH r[29]
804891b: PUSH 134517752
8048925: *32* r[tmp1] := r[28]
804892a: JUMP 0x804892c
Figure 6. Intermediate representation (I-RTLs) of the "Hello World" program

0x43676c8: save %sp, -132, %sp
0x43676cc: add %sp, -4, %sp
0x43676d0: add %i7, 8, %l0
0x43676d4: st %l0, [
0x43676d8: mov %sp, %l0
0x43676dc: subcc %l0, 4, %l0
0x43676e0: mov %l0, %sp
0x43676e4: mov %fp, %l0
0x43676e8: st %l0, [
0x43676f0: mov %l0, %fp
0x43676f4: mov %sp, %l0
0x43676f8: subcc %l0, 4, %l0
0x43676fc: mov %l0, %sp
0x4367704: add %l0, 0x3f8, %l0 ! 0x80493f8
0x4367708: st %l0, [
ld
0x4367714: mov %sp, %l1
0x4367718: sethi %hi(0xfffffc00), %l2
0x4367720: and %sp, %l2, %sp
0x4367724: and %fp, %l2, %fp
0x4367728: sethi %hi(0xef663400), %g6
0x4367730: nop
0x4367734: mov %l1, %sp
0x4367738: mov %l0, %fp
0x4367740: add %g5, 0x125, %g5
0x4367744: sethi %hi(0x41bfc00), %g6
0x4367748: call %g6
0x436774c: nop
Figure 7. Generated Sparc assembly for the 1st BB of Figure 6

There are 3 basic blocks (BBs) in this program:
. first BB - 4 instructions (0x8048918 to 0x8048920)
. second BB - 3 instructions (0x8048925 to 0x804892a)
. third BB - 2 instructions (0x804892c to 0x804892d)
Figure 6 shows the intermediate representation (I-RTLs) for
these BBs. Note that the translation is done incrementally, i.e.
each BB is decoded separately at runtime.
UQDBTps works in the source machine's address space and
translates a basic block at a time. Both the data and text from
the source Pentium program are mapped to the actual machine's
source address space even though it is actually running on a
SPARC machine. For example, a Pentium program with data
and text sections located at 0x8040000 and 0x8048000 will be
mapped exactly at these addresses even though a typical SPARC
program expects the text and data at addresses 0x10000 and
0x20000. UQDBTps also simulates the Pentium machine's
environment in the SPARC generated code, i.e. the pushing and
popping of temporaries and parameters from the Pentium
machine is preserved in the generated code. UQDBTps tries to
generate code as quickly as it possibly can with little or no
optimization.
4.2 Pentium stack simulation
Figure 7 is the SPARC code generated for the first BB of Figure
6. The first four instructions simulate the Pentium main
prologue by setting up the stack and storing the return address
(obtained from the value of %i7 + 8). 132 bytes of stack space
are reserved in the initial save %sp, -132, %sp. This
space is used by the SPARC processor to store parameters, return
structures, local variables and register spills. Hence the actual
simulated Pentium stack pointer %esp starts at %sp+0x84 (see
Figure 8), while Pentium's %ebp is mapped to the SPARC
register %fp. Pushing is handled by subtracting the size of the
value pushed from %sp and storing the result in [%sp+0x84].
Popping removes from [%sp+0x84] and increments %sp by the
appropriate size.
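The effect of this convention can be modelled as follows; the code is only a model of the description above (with arbitrary memory size and initial stack pointer), not generated translator code.

#include <cstdint>
#include <cstring>
#include <vector>

// Toy model of the simulated Pentium stack described above: the SPARC %sp is
// moved down by the size of each pushed value, and the pushed value is stored
// at the fixed offset 0x84 (132) above the new %sp, so that %sp + 0x84 always
// points at the top of the simulated Pentium stack.
struct SimulatedStack {
    std::vector<std::uint8_t> memory = std::vector<std::uint8_t>(1 << 16);
    std::uint32_t sp = (1 << 16) - 0x100;   // stands in for the SPARC %sp

    void push32(std::uint32_t value) {
        sp -= 4;                                       // subcc %l0, 4, %l0 ; mov %l0, %sp
        std::memcpy(&memory[sp + 0x84], &value, 4);    // st value, [%sp + 0x84]
    }

    std::uint32_t pop32() {
        std::uint32_t value;
        std::memcpy(&value, &memory[sp + 0x84], 4);    // ld [%sp + 0x84], value
        sp += 4;                                       // addcc %l0, 4, %l0 ; mov %l0, %sp
        return value;
    }
};

Because the offset 0x84 equals the 132 bytes reserved by the initial save, the simulated stack top stays addressable at a fixed displacement from %sp as %sp itself moves.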
4.3 Function calls and stack alignments
In Pentium, actual parameters to function calls are passed on the
stack while SPARC parameters are passed in registers. The
printf format string to "Hello World" is at address
$0x80493f8 and is pushed by the instruction 804891b (see
Figure 5). To successfully call the native SPARC printf
function, this address must be stored in register %o0 (instruction
0x436770c in Figure 7). The equivalent printf in SPARC
is at 0xef6635b8 and a call to this function is made
(instruction 0x436772c). Calls to library functions such as
printf are assumed by UQDBTps to exist on the source as
well as the target machine. This assumption is not restrictive as
long as there is a mapping from the source library function to an
equivalent function on the target machine; i.e. libraries can be
reproduced on the target machine by translation or rewriting to
produce such a mapping. Translators such as FX!32 make the
same assumption.
SPARC machines expect %sp and %fp to be aligned to double
word (64 bits) boundaries. Therefore, before calling a native
library function, %sp and %fp need to be 8-byte
aligned. The current values are restored after the function call
returns (instructions 0x4367734 and 0x4367738).
At the end of a basic block, control is passed back to UQDBTps'
switch manager, with the indication of the next basic block
address to be processed in %g5 (0x8048925 in the above
case). It is the role of the switch manager to decide whether to
start the translation indicated in %g5 or fetch an already
translated BB from the translation cache. In the above case
(where the next BB starts at 0x8048925), the address is not in
the translation map, hence the translation starts at that new
address. Figure 9 shows the generated SPARC assembly for the
next BB for the Pentium program.
Figure 8. Sparc stack frame
0x435ff58: sethi %hi(0x4485c00), %l0
0x435ff64: st %l1, [ %l0
0x435ff68: mov %sp, %l0
0x435ff6c: addcc %l0, 4, %l0
0x435ff70: mov %l0, %sp
0x435ff74: sethi %hi(0x42b7000), %l0
0x435ff78: add %l0, 0x3d4, %l0 ! 0x42b73d4
0x435ff7c: rd %ccr, %l1
0x435ff80: st %l1, [ %l0
0x435ff84: sethi %hi(0x4486000), %l0
0x435ff88: add %l0, 0x18, %l0 ! 0x4486018
ld [ %l1 ], %l1
0x435ff98: sethi %hi(0x4486000), %l2
ld [ %l2 ], %l2
0x435ffa4: xorcc %l1, %l2, %l1
0x435ffa8: st %l1, [ %l0
0x435ffb4: rd %ccr, %l1
0x435ffb8: st %l1, [ %l0
0x435ffc4: sethi %hi(0x41bfc00), %g6
0x435ffc8: call %g6
0x435ffcc: nop
Figure 9. Generated Sparc assembly for the 2nd BB of Figure 6
4.4 Register mapping and condition codes
During the translation, all Pentium registers are mapped to
virtual registers (i.e. memory locations). To access a virtual
register on a SPARC machine, a sethi and an add instruction
are used. For example, instructions 0x435ff84 and
0x435ff88 in Figure 9 are used to access the virtual register
representing the x86 register %eax.
While most instructions on SPARC do not affect condition codes
(flags) unless explicitly indicated by the instruction, almost all
Pentium instructions affect the flags. Each Pentium instruction
that affects the status of the flags is simulated using the
equivalent condition code version of the same instruction on the
SPARC machine (instructions 0x435ff6c and 0x435ffa4).
The condition codes are read (instruction 0x435ff7c) and
saved to the virtual flag register (instruction 0x435ff80) after
these instructions, to preserve its current value, which can be
retrieved later if required.
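A simplified model of this register mapping is sketched below; the slot layout, the flag bits chosen and the helper name are illustrative only and do not reflect UQDBT's actual encoding.

#include <cstdint>

// Toy model of the register mapping: every Pentium register lives in a memory
// slot (a "virtual register"), which the generated SPARC code reaches with a
// sethi/add pair followed by a load or store, and flag-setting instructions
// copy the condition codes into a dedicated virtual flag slot.
struct VirtualRegisterFile {
    std::uint32_t gpr[8] = {};   // virtual %eax, %ecx, %edx, %ebx, %esp, %ebp, %esi, %edi
    std::uint32_t flags = 0;     // virtual flag register

    // Model of one flag-setting addition: compute the result, then save the
    // flags (here only carry, zero and sign; overflow and others are omitted).
    std::uint32_t addAndSetFlags(std::uint32_t a, std::uint32_t b) {
        std::uint32_t r = a + b;
        std::uint32_t cf = (r < a) ? 1u : 0u;        // carry out of bit 31
        std::uint32_t zf = (r == 0) ? 1u : 0u;       // zero flag
        std::uint32_t sf = (r >> 31) & 1u;           // sign flag
        flags = (cf << 0) | (zf << 6) | (sf << 7);   // x86-style bit positions
        return r;
    }
};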
A closer look at the above example shows that the generated
code is not very efficient. Simple optimizations such as forward
substitutions and dead code elimination can greatly reduce the
size of the generated code. Such optimizations can yield better
code but will take longer to generate - a trade off between code
quality and speed. The code generator back-end of UQDBTps is
very fast despite the poor quality of the generated code. We are
currently implementing on-demand optimizations, to the hotspots
in the program, as well as performing register allocation.
5. PRELIMINARY RESULTS
UQDBT is based on the UQBT framework; as such, its front-end
is reused from UQBT, changing the granularity of decoding from
the procedure level to the basic block level. The front-end uses
the extended SSL specifications and generates Ms-RTLs. The
machine instruction encoding routines of the back-end are
automatically generated from SLED specifications using the
toolkit.
This section shows some preliminary results obtained by two
dynamic translators instantiated from the UQDBT framework;
UQDBTps (Pentium to SPARC) and UQDBTss (SPARC to
SPARC). It then looks at the types of optimizations that need to
be introduced in order to improve the performance of frequently
executed code, and it gives the reader an idea of effort gone into
the development of the framework and the amount of reuse
expected.
5.1 Performance
Micro-benchmark results were obtained using a Pentium MMX
machine and an UltraSparc II 250 MHz machine, both
running the Solaris operating system. The results reported
herein are those of the translation overhead and do not currently
make use of dynamic optimizations, only register caching within
basic blocks. Clearly, the performance of generated binaries
from UQDBT without optimization is inferior to direct native
compilation. A typical 1:10 ratio (i.e. 1 source machine
instruction to 10 target machine instructions) is expected in a
typical emulator/interpreter without caching. UQDBTps gives
figures close to this ratio. For example, for the 9 Pentium
instructions of Figure 5, 95 SPARC instructions were generated.
While this ratio is similar to that of emulation, the speed up
gained from UQDBT comes from reusing already translated BBs
from the translation cache when the same piece of code is
executed again.
In its present form, UQDBT is still in its early development and
hence we provide preliminary results for UQDBTps, a Pentium
to SPARC translator, and UQDBTss, a SPARC to SPARC
translator. It is undoubtedly true that there is little practical use
for SPARC to SPARC translation unless runtime optimizations
can significantly speed up translated programs. The inclusion of
this translation is to show the effect of machine-adaptability in
UQDBT. Further, the translation from SPARC binaries to I-RTL
removes any machine dependencies and thus, during I-RTL to
SPARC code generation, the UQDBTss is unaware of the fact
that the source machine is SPARC. This is also true for
UQDBTps. Since little analysis is done to the decoded
instructions and processing is concentrated on decoding and code
generation requested by the switch manager, it better reflects the
performance impact on the use of on-demand techniques prior to
introducing optimizations.
The test programs shown in the tables are:
. Sieve 3000 (prints the first 3,000 prime numbers),
. Fibonacci of 40, and
. Mbanner (prints the banner for the "ELF" string
500,000 times).
Sieve mainly contains register to register manipulation, while
Fibonacci has a lot of recursive calls and Mbanner has a lot of
stack operations and accesses to an array of data.
Tables 1 and 2 show the times of translation and execution of
programs using UQDBTps and UQDBTss, compared to natively
gcc O0 compiled programs. The source programs were also O0
compiled. Column 2 shows the preprocessing time that is
needed before the actual translation takes place. Note that
UQDBTps takes longer to start than UQDBTss. This is because
Pentium has a larger instruction set (hence a larger SSL
specification file) which takes longer to process. It is also caused
by different page alignment sizes between the Pentium and
SPARC; as a result, extra steps are taken to ensure that both text
and data sections are loaded correctly on the SPARC machine.
Column 3 shows the total time spent decoding the source
instructions, transforming them to I-RTLs and generating the
final SPARC code. Column 4 shows the execution time in the
generated SPARC code without using register caching, i.e. every
register access was done through virtual registers. Column 5
shows the execution time of the generated SPARC code with
register caching, which yields between 15 to 50 percent
performance gain. Column 6 is the natively compiled gcc version
of the same program on SPARC. Comparing columns 5 and 6
gives the relative performance of the translators. The figures
suggest 2 to 6 times slowdown when running programs using
UQDBTps and UQDBTss. The slow performance of the
translated Fibonacci program under UQDBTss is caused by the
effects of the register windowing mechanism in SPARC, which
are carried forth to the I-RTLs. Since the I-RTLs are unaware of
the fact that the source and target machines are the same, this
causes the entire register windowing system to be simulated in
the generated code. Given that on-demand optimizations have
not been performed yet, the quality of the generated code is
comparable to O0 optimization level of a traditional compiler.
Tables 3 and 4 show the efficiency of the translators relative to
the size of the original program. Column 2 is the size of the
program's text area. Note that not all bytes necessarily represent
instructions and that not all code is necessarily reachable or
executed at runtime. Column 3 shows the actual bytes decoded
by the translator at runtime. This number varies from Column 2
since only valid paths at runtime are translated, and sometimes
re-translation is needed when a jump into the middle of a BB is
made. Columns 4 and 5 show the number of bytes of code
generated by the translator without/with register caching.
Register caching has been done at a basic block level; cached
registers are copied back to their memory locations at the end of
each basic block. Comparing column 5 with column 3 gives the
relative ratio of bytes generated versus bytes decoded. The
above figures suggest that on average, each byte from the source
machine translates to around 7 to 10 bytes of target SPARC code.
The last column is the ratio of machine cycles to bytes of source
code. It gives a rough indication of the performance of the
translation. On an UltraSparc II 250 MHz, the translators require
about 180,000 machine cycles per byte of input source, which is
about 10 times more cycles used than in a traditional O0
compiler.
5.2 Optimizations - future work
Most programs spend 90% of the time in a small section of the
code. It is these hotspots that are worthwhile for a dynamic
binary translator to spend time optimizing. UQDBT currently
does not perform any optimizations. The next revision of
UQDBT will contain optimizations that are triggered by
counters. Counters are inserted in basic blocks to indicate the
number of times a particular basic block is executed at runtime.
When a certain threshold is reached (indicating that the program
spends significant time in a piece of code), the optimizer will be
invoked in an attempt to produce efficient code. Four levels of
optimization will be provided by UQDBT progressively when
certain thresholds are reached:
1. Register liveness analysis, forward substitution,
constant propagation - improves the quality of the
generated code and reduces the number of instructions
executed.
2. Register allocation - a more rigorous process for
removing access to virtual registers and replacement
with allocation to hardware registers on the target
machine, assisted by liveness information, rather than
just caching registers at a basic block level.
3. Code movement - moving and joining frequently
executed BBs closer together, thus reducing transition
costs (calls and jumps).
4. Customization - create specialized versions of the BBs
that are found to have a fixed range of runtime values
within the BB, e.g. on repeated entry to a BB, a
register or variable contains the same value 90% of the
time.
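A sketch of the counter mechanism intended to drive these levels is given below; the thresholds and the optimizer entry point are placeholders rather than the values UQDBT will use.

#include <cstdint>
#include <unordered_map>

// Execution counter per translated basic block.  The generated code bumps the
// counter on entry; when a threshold is crossed, the block is handed to the
// next optimization level and its cached translation is replaced.
struct BlockProfile {
    std::uint64_t executions = 0;
    int level = 0;   // 0 = plain fast translation, 1..4 = the levels listed above
};

static std::unordered_map<std::uint32_t, BlockProfile> profiles;

// Placeholder for the optimizer entry point (liveness/forward substitution,
// register allocation, code movement, customization).
void reoptimize(std::uint32_t srcAddr, int level) {
    (void)srcAddr;
    (void)level;
}

void onBlockEntry(std::uint32_t srcAddr) {
    static const std::uint64_t thresholds[] = {50, 500, 5000, 50000};   // illustrative values
    BlockProfile& p = profiles[srcAddr];
    ++p.executions;
    if (p.level < 4 && p.executions >= thresholds[p.level]) {
        ++p.level;
        reoptimize(srcAddr, p.level);   // regenerate and replace the cached code
    }
}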
5.3 Effort
In order to give the reader an idea of the effort that has gone into
the development of UQDBT and the effort of reusing the system,
we quantify such effort as follows.
UQDBT has been the effort of 1 person over a period of 1.5
years, experimenting with the amount of specification required at
the semantic level for different machines. This effort was
performed by a person who was already familiar with the UQBT
framework, having worked on SSL in the past.
Table 1: UQDBTps - Pentium to SPARC translation (seconds)
Test program | Pre-processing | Translation time | Execution time w/o reg caching | Execution time w/ simple reg caching | Native gcc compiled
Fibonacci | 0.52 | 0.07 | 162.23 | 139.35 | 41.18
mbanner | 0.50 | 0.34 | 191.00 | 126.28 | 22.85

Table 2: UQDBTss - SPARC to SPARC translation (seconds)
Test program | Pre-processing | Translation time | Execution time w/o reg caching | Execution time w/ simple reg caching | Native gcc compiled
Fibonacci | 0.22 | 0.12 | 256.05 | 198.97 | 41.18
mbanner | 0.21 | 0.50 | 204.09 | 97.09 | 22.85

Table 3: UQDBTps - Pentium to SPARC translation
Test program | Original program size | Source bytes decoded | Target bytes generated w/o reg caching | Target bytes generated w/ reg caching | 1,000 cycles / source byte
Fibonacci | 102 | 94 | 1188 | 1104 | 186
mbanner | 483 | 467 | 4548 | 3816 | 182

Table 4: UQDBTss - SPARC to SPARC translation
Test program | Original program size | Source bytes decoded | Target bytes generated w/o reg caching | Target bytes generated w/ reg caching | 1,000 cycles / source byte
Fibonacci | 220 | 176 | 1764 | 1468 | 170
mbanner | 748 | 760 | 7112 | 5032 | 164

UQDBT's current implementation size is 18,500 lines of
source code in C++, 3,300 lines of partially-generated code, and
3,500 lines (1,000 for SPARC and 2,500 for Pentium) of
specification files. A user of the UQDBT framework would be
able to reuse most of this source code and would need to write
syntax and semantics specification files for new machines (or
reuse existing ones). These figures are not final at this stage, as
most dynamic optimizations have not been implemented yet. It
nevertheless gives an indication of the amount of reuse of code in
the system.
6. DISCUSSION
The preliminary results of UQDBT point at the tradeoffs of
machine-adaptability. In return for writing less code to support
two particular machines, a performance penalty in the generated
code is seen at this stage. A binary translation writer would be
expected to write specifications for new machines, which are in
the order of a few thousand lines of code, and reuse a good part
of 18,500 lines of code, reaping the benefits of reuse and time
efficiency. However, at this stage, UQDBT generates code that
performs at about the same speed as emulated code, and therefore a
user sees a 10x performance degradation in their translated
programs. The introduction of register caching in the generated
code has brought down this factor to 6. We expect that the
introduction of on-demand optimizations on hotspots of the
program will improve the performance of the generated code,
bringing down the performance factor to 2x-3x.
One of the main questions we have dealt with throughout
experiments in this area has been how much should be specified
and how much should be supported by hand. The level of detail
in a specification can make a translator faster or slower. If the
full details of a machine are specified, the specification is
suitable for generating an emulator that supports 100% that
machine. However, if we can provide a means for eliminating
part of that emulation process, then a different type of
specification is needed. This is what we have tried to achieve
through our semantic specifications and the use of two
intermediate languages. The RTL language describes low-level
and machine specific aspects of a machine, and UQDBT finds
support in the specifications to perform simple analyses to lift the
level of the representation to I-RTL. The aim is to perform
simple transformations of the code that are not expensive on time
and that are generic enough to be suitable for our intermediate
representations. This is why I-RTL is different to HRTL, the
high-level intermediate representation used by the static UQBT
framework. In HRTL, expensive analyses recover parameters to
procedures and return values, hence allowing the code generator
to use native calling conventions on the target machine. In I-
RTL, the code generator makes use of the specification of a stack
for example, in order to pass parameters on the stack, without
ever determining which locations are parameters to procedure
calls. However, some notion of parameters is needed in order to
interface correctly to native library functions, and to pass
parameters in the right locations.
It has also been our experience that some modules may be better
off written by hand, without specifying the complete semantics of
features of a machine that are too unique. For example, one
could consider implementing a SPARC-specific module for
supporting the register windowing semantics, so that better
register allocation is performed in this case. At present, we have
specified the register windowing mechanism and generated code
that puts all these registers in virtual (memory) locations.
Through the use of register caching, some of the memory
locations are mirrored to hardware registers of the target
machine, improving somewhat the performance of the program.
However, there is still a large overhead in the copying of the
registers to virtual locations at each call and return. This can be
reduced with dead code elimination, but perhaps hand written
code would have achieved better code.
Another aspect to take into consideration is the granularity of
translation. In UQDBT, the granularity unit for processing is a
basic block (BB) at a time. Just after code generation, a link is
made to the switch manager at the exit of each BB and flushing
of cached registers to virtual registers is performed. This is
needed to keep data accurately stored and consistent across
transitions from one BB to another. Transitions from one BB to
the next will go via the switch manager if the next BB has not
been translated yet. Using BB as the unit of translation restricts
the effectiveness of register allocation. Since BBs are relatively
small, it is difficult to determine register liveness information,
as data is not collected across BB boundaries. If the unit of
granularity is changed, this could yield better code in some cases
while worse code in others. For example, the translation unit
might be changed from a BB to a procedure at a time. This
would allow the code generator to reduce the amount of flushing
of cached registers and hence reduce the number of instructions
that need to be executed at runtime. It would improve the
effectiveness of allocating registers during code generation since
more liveness information could be collected. But using a larger
unit of translation such as a procedure may involve decoding
paths that may not be ever taken at runtime, thus generating code
that is not executed. It is not obvious what the granularity unit
should be since some types of programs will benefit by using a
particular granularity unit while others may suffer. Programs with
a lot of small procedures will benefit if the unit of translation is a
procedure, but suffer if the program has a lot of conditional
branches.
7. CONCLUSION
UQDBT is a machine-adaptable dynamic binary translator
framework that is capable of being configured for different
source and target machines through specifications of properties
of those machines. The UQDBT framework can be modified and
extended with ease to support additional source and target
machine architectures without the need to write a new translator
from scratch.
Our case study shows that the translation process between two
different architectures is both complex and challenging using
machine-adaptable dynamic translation techniques.
Nevertheless, preliminary results suggest that on-demand
processing in a dynamic system can be implemented efficiently.
Despite that, some research problems remain in
building a fully machine-adaptable dynamic translation
framework. UQDBT appears to be a promising model to provide
a generic dynamic binary translation framework.
8.
ACKNOWLEDGMENTS
The authors wish to thank Mike Van Emmerik for his helpful
discussions in implementation and testing strategies and the
members of the Kanban group at Sun Microsystems, Inc., whose
system motivated some of this work. This work is part of
the University of Queensland Binary Translation (UQBT)
project. More information can be obtained about the project by
visiting the following URL:
http://www.csee.uq.edu.au/csm/uqbt.html.
9.
--R
Binary translation.
Macintosh application environment.
http://www.
Digital FX!
How to Efficiently Run Mac Programs on PCs.
Embra: Fast and Flexible Machine Simulation.
SELF: The power of simplicity.
Kaashoek. tcc: A System for Fast
Michał
The New Jersey Machine-Code Toolkit
The Design of a Resourceable and Retargetable Binary Translator.
Specifying the semantics of machine instructions.
A Fast Instruction-Set Simulator for Execution Profiling
Specifying representation of machine instructions.
Preliminary Experiences with the Use of the UQBT Binary Translation Framework.
--TR
Self: The power of simplicity
Binary translation
Shade: a fast instruction-set simulator for execution profiling
Embra
Specifying representations of machine instructions
Fast, effective code generation in a just-in-time Java compiler
An out-of-order execution technique for runtime binary translators
SRL - A Simple Retargetable Loader
The Design of a Resourceable and Retargetable Binary Translator
Specifying the Semantics of Machine Instructions
--CTR
Naveen Kumar , Bruce R. Childers , Daniel Williams , Jack W. Davidson , Mary Lou Soffa, Compile-time planning for overhead reduction in software dynamic translators, International Journal of Parallel Programming, v.33 n.2, p.103-114, June 2005
Lian Li , Jingling Xue, Trace-based leakage energy optimisations at link time, Journal of Systems Architecture: the EUROMICRO Journal, v.53 n.1, p.1-20, January, 2007
David Ung , Cristina Cifuentes, Dynamic binary translation using run-time feedbacks, Science of Computer Programming, v.60 n.2, p.189-204, April 2006
Lian Li , Jingling Xue, A trace-based binary compilation framework for energy-aware computing, ACM SIGPLAN Notices, v.39 n.7, July 2004
Giuseppe Desoli , Nikolay Mateev , Evelyn Duesterwald , Paolo Faraboschi , Joseph A. Fisher, DELI: a new run-time control point, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Jason D. Hiser , Daniel Williams , Wei Hu , Jack W. Davidson , Jason Mars , Bruce R. Childers, Evaluating Indirect Branch Handling Mechanisms in Software Dynamic Translation Systems, Proceedings of the International Symposium on Code Generation and Optimization, p.61-73, March 11-14, 2007
Gregory T. Sullivan , Derek L. Bruening , Iris Baron , Timothy Garnett , Saman Amarasinghe, Dynamic native optimization of interpreters, Proceedings of the workshop on Interpreters, virtual machines and emulators, p.50-57, June 12-12, 2003, San Diego, California
Bruening , Timothy Garnett , Saman Amarasinghe, An infrastructure for adaptive dynamic optimization, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
John Aycock, A brief history of just-in-time, ACM Computing Surveys (CSUR), v.35 n.2, p.97-113, June | dynamic execution;binary translation;interpretation;emulation;dynamic compilation |
351523 | On the Eigenvalues of the Volume Integral Operator of Electromagnetic Scattering. | The volume integral equation of electromagnetic scattering can be used to compute the scattering by inhomogeneous or anisotropic scatterers. In this paper we compute the spectrum of the scattering integral operator for a sphere and the eigenvalues of the coefficient matrices that arise from the discretization of the integral equation. For the case of a spherical scatterer, the eigenvalues lie mostly on a line in the complex plane, with some eigenvalues lying below the line. We show how the spectrum of the integral operator can be related to the well-posedness of a modified scattering problem. The eigenvalues lying below the line segment arise from resonances in the analytical series solution of scattering by a sphere. The eigenvalues on the line are due to the branch cut of the square root in the definition of the refractive index. We try to use this information to predict the performance of iterative methods. For a normal matrix the initial guess and the eigenvalues of the coefficient matrix determine the rate of convergence of iterative solvers. We show that when the scatterer is a small sphere, the convergence rate for the nonnormal coefficient matrices can be estimated but this estimate is no longer valid for large spheres. | Introduction
The convergence of iterative solvers for systems of linear equations is closely related
to the eigenvalue distribution of the coefficient matrix. To be able to predict the convergence of
iterative solvers one therefore needs to look at how the matrix and its eigenvalues arise from a discretization
of a physical problem.
In this article we examine the eigenvalues of the coefficient matrix arising from a discretization of
a volume integral equation of electromagnetic scattering and the behavior of iterative solvers for this
problem. In the early stages of this research project we noticed that in the case of a spherical scatterer,
most of the eigenvalues of the matrix lie on a line in the complex plane. The line segment can also be
seen for other types of scatterers.
The coefficient matrix arises from a discretization of a physical problem. Thus, the eigenvalues of
the coefficient matrix should resemble the spectrum of the corresponding integral operator in a function
space. The spectrum of an operator is the infinite-dimensional counterpart of the eigenvalues of a matrix.
In the case of a spherical scatterer, we show that it is possible to relate the spectrum of the scattering
integral operator to the behavior of the analytic solution of scattering by a sphere. The main result is
that the eigenvalues of the coefficient matrix are related either to resonances in the analytic solution or
to the branch cut of the complex square root used in the definition of the refractive index.
An analysis of the eigenvalues of the surface integral operator in electromagnetic scattering has been
carried out by Hsiao and Kleinman [12], who studied the mapping properties of integral operators in an
appropriate Sobolev space. Colton and Kress have also studied these weakly singular surface integral
operators [3]. In our case, we look at the spectrum of a strongly singular volume integral operator. We
have not studied the mapping properties of this operator.
The paper is organized as follows: In §2 we give the volume integral equation formalism and discuss
its discretization. In §3 we compute the eigenvalues of the coefficient matrix. Section 4 shows how the
spectrum of the integral operator can be determined with the help of the analytic solution. Numerical
experiments are presented that show how well the eigenvalues of the coefficient matrix can approximate
the spectrum of the integral operator. In § 5 we show how the eigenvalue distribution can be used in the prediction of the convergence of iterative solvers. In § 6 we conclude and point out some areas of future
research.
2. Volume integral equation for electromagnetic scattering. The volume integral equation
of electromagnetic scattering is employed especially for inhomogeneous and anisotropic scatterers. For
homogeneous scatterers it also offers some advantages over the surface integral equation, namely a simple
description of the scatterer with the help of cubic cells and the use of the fast Fourier transform in the
computation of the matrix-vector products. On the other hand, the volume integral equation uses many
more unknowns than the surface integral formulation for a given computational volume.
We will study the scattering of monochromatic electromagnetic radiation of frequency f, wavelength λ, and wave number k = 2π/λ. In our presentation we assume that the electric field is time-harmonic and thus its time-dependence is of the form exp(−iωt). The scattering material is described by its complex refractive index defined by m = √ε̃, where the complex permittivity ε̃ is given by ε̃ = ε + iσ/ω; here ε is the (real) permittivity and σ is the conductivity. We have assumed that the magnetic permeability of the material is the same as that of vacuum. The volume integral equation is typically used for dielectric or weakly conducting objects.
The volume integral equation of electromagnetic scattering is given by [9, 15, 16]

E(r) = E_inc(r) + k² ∫_V (m²(r′) − 1) G(r, r′) E(r′) dr′,   (2.1)

where V is the volume of space the scatterer occupies, E(r) is the electric field inside the object, E_inc(r) is the incident field and G is the dyadic Green's function given by

G(r, r′) = (I + ∇∇/k²) g(r, r′),   (2.2)

where

g(r, r′) = exp(ik|r − r′|) / (4π|r − r′|).   (2.3)

The Green's function also has an explicit representation: with ρ = |r − r′| and ρ̂ = (r − r′)/ρ,

G(r, r′) = (e^{ikρ}/(4πρ)) [ (1 + i/(kρ) − 1/(kρ)²) I − (1 + 3i/(kρ) − 3/(kρ)²) ρ̂ρ̂ ].   (2.4)
The scattering integral equation can be discretized in various ways. The simplest discretization uses
cubic cells and assumes that the electric field is constant inside each cube. By requiring that the integral
equation (2.1) be satisfied at the centers r i of the N cubes (point-matching or collocation technique) and
by using simple one-point integration, we end up with the equation [14, 15, 22]

E(r_i) = E_inc(r_i) + k² (m² − 1) [ V_c Σ_{j≠i} T_ij E(r_j) + M E(r_i) ],   i = 1, …, N,   (2.5)

where V_c = b³ is the volume of the computational cube, M is given by

M = (2/(3k²)) [ (1 − ika) e^{ika} − 1 ],   a = b (3/(4π))^{1/3},

b is the length of the side of the computational cube, and T_ij = G(r_i, r_j) is the dyadic Green's function (2.2) evaluated at the cube centers. The factor M arises from the analytical integration of the self-term, using a sphere whose volume is equal to the volume of the cube [15].
Note that all the physical dimensions of the problem are of the form kr. This means that the scattering problem depends only on the ratio of the size of the object to the wavelength. In the rest of the paper we will give the dimensions of the scattering object in the form kr.
The system of linear equations (2.5) has a dense complex symmetric coefficient matrix. Various ways of solving the linear system with iterative solvers have been studied by Rahola [17, 23, 22], who chose the complex symmetric version of QMR [7] as the iterative solver. The matrix-vector products can be computed with the help of the fast Fourier transform if the cells sit on a regular lattice. For the FFT, the computational grid must be enlarged with ghost cells to a cube [10].
3. Eigenvalues of the coefficient matrix. In this section we show some examples of the eigenvalues of the coefficient matrices. All the eigenvalues were computed with the eigenvalue routines in the library [1].
Figure 3.1 shows the eigenvalues of a coefficient matrix for a sphere with refractive index m = 1.4 + 0.05i; 136 and 455 computational cells, respectively, are used to discretize the sphere. Note how most of the eigenvalues lie on a line in the complex plane. When the discretization is
refined, the line segment becomes "denser" but there remains a small number of eigenvalues off the line.
An interesting feature of these coefficient matrices is that when the discretization is refined, the
number of iterations needed for iterative solvers to converge remains constant even when the linear
system is not preconditioned [21, 22]. This is of course the optimal situation for iterative solvers. In
practical calculations, when the number of computational points is increased, the size of computational
cells is kept constant and the physical size of the object is increased. In this case the number of iterations
naturally grows with the size of the problem.
4. Spectrum of the integral operator. In this section we will explain what is meant by the
spectrum of a linear operator, how to compute points in the spectrum of the scattering integral operator
and how they correspond to the eigenvalues of the coefficient matrix of the discretized problem.
The spectrum of a linear operator T is the set of points z in the complex plane for which the operator z1 − T does not have an inverse that is a bounded linear operator defined everywhere. Here 1 stands for the identity operator.
A matrix is the prototype of a finite-dimensional linear operator. The spectrum of a matrix is exactly the set of its eigenvalues, that is, the point spectrum. In the rest of this paper, we reserve the word 'spectrum' only for the infinite-dimensional integral operator and talk only about eigenvalues of matrices. For an eigenvalue λ, there exists an eigenvector x such that Ax = λx; thus the matrix λI − A is singular and not invertible.
For an arbitrary linear operator, there is no general way of obtaining its spectrum. Each case must
be analyzed separately with different analytical tools. We will consider the case of the scattering integral
equation (2.1). We will write it in the form

(1 − K) E = E_inc,   (4.1)

where K is the integral operator defined by

(K E)(r) = k² ∫_V (m²(r′) − 1) G(r, r′) E(r′) dr′.   (4.2)
Fig. 3.1. Eigenvalues of the coefficient matrix for a spherical scatterer with refractive index m = 1.4 + 0.05i. The sphere is discretized with 136 computational cells (upper) and 455 computational cells (lower).
We do not know of any direct way of computing the spectrum of the scattering integral operator
K. However, with the following observation, we can relate the spectrum to the well-posedness of some
related scattering problems.
To find the spectrum of the operator K we need to find the complex points z for which the operator z1 − K does not have a well-defined inverse. For a homogeneous scatterer the integral operator (4.2) has the form

(K E)(r) = k² (m² − 1) ∫_V G(r, r′) E(r′) dr′.   (4.3)

The crucial observation is the following: the operator (z1 − K) is a scaled version of the operator 1 − (1/z) K, which is equivalent to the scattering integral equation (2.1) for the same homogeneous object but with a different refractive index m′ defined by

m′² − 1 = (m² − 1)/z.   (4.4)
Thus, to find the points z in the spectrum for a scatterer with a given shape and refractive index m, it suffices to find all possible refractive indices m′ for which the scattering problem is not well defined. After this, the points in the spectrum are recovered from

z = (m² − 1)/(m′² − 1).   (4.5)

In all our figures, we actually plot the points in the spectrum of the operator 1 − K, since these correspond to the eigenvalues of the matrix.
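As a small numerical illustration, the following Python sketch applies this mapping; it assumes the relations reconstructed above, in which a refractive index m′ for which the scattering problem fails is sent to the plotted spectrum point 1 − (m² − 1)/(m′² − 1).

import numpy as np

def spectrum_points(m, m_prime):
    # Map "bad" refractive indices m_prime to points in the spectrum of the
    # operator 1 - K for a scatterer with refractive index m.
    m_prime = np.asarray(m_prime, dtype=complex)
    return 1.0 - (m**2 - 1.0) / (m_prime**2 - 1.0)

# The branch-cut part of the spectrum, m_prime = i*y with y real, traces a
# curve from the point m**2 (y -> 0) towards 1 (y -> infinity).
m = 1.4 + 0.05j
y = np.linspace(1e-3, 50.0, 200)
print(spectrum_points(m, 1j * y)[[0, -1]])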
4.1. Scattering by a sphere. In this section we recall the analytic solution of scattering by a sphere. This solution will be used to find the points in the spectrum of the integral operator. The solution uses the spherical vector wave functions M_lm and N_lm [2, 5] that are defined by

M_lm(r) = ∇ × (r f_l(kr) Y_l^m(θ, φ)),   N_lm(r) = (1/k) ∇ × M_lm(r),

where Y_l^m(θ, φ) are the spherical harmonics and f_l(kr) stands for the spherical Bessel or Hankel function. The functions M_lm(r) and N_lm(r) are both solutions to the vector Helmholtz equation ∇ × ∇ × F(r) − k² F(r) = 0.
The scattering by a sphere can be computed analytically using the so-called Mie theory [2, 25]. The
incoming electric field E_inc, the electric field inside the object E_1 and the scattered electric field E_s are all expanded in terms of the spherical vector wave functions:

E_inc(r) = Σ_{l=1}^∞ Σ_{m=−l}^{l} [ α_l^m M^{(1)}_{lm}(kr) + β_l^m N^{(1)}_{lm}(kr) ],

E_1(r) = Σ_{l=1}^∞ Σ_{m=−l}^{l} [ i_l^m M^{(1)}_{lm}(k_1 r) + j_l^m N^{(1)}_{lm}(k_1 r) ],

E_s(r) = Σ_{l=1}^∞ Σ_{m=−l}^{l} [ s_l^m M^{(3)}_{lm}(kr) + t_l^m N^{(3)}_{lm}(kr) ].

In these formulas k is the free-space wave number and k_1 = mk is the wave number inside the scatterer. The notation M^{(1)}_{lm} refers to the spherical vector wave functions that have a radial dependency given by the spherical Bessel function j_l(kr), while for M^{(3)}_{lm} the radial dependency is given by the spherical Hankel function h^{(1)}_l(kr). The magnetic field can easily be computed from these expansions.
One then requires that the tangential components of the electric and magnetic field are continuous across the boundary of the sphere at radius kr = x. Now we can integrate the expansions and the vector spherical harmonics over the full solid angle, and the orthogonality of the vector wave functions shows that the single mode M^{(1)}_{lm}(r) in the incoming field excites the corresponding modes M^{(1)}_{lm}(r) and M^{(3)}_{lm}(r) in the internal and scattered fields, respectively. This means that the coefficients of the internal and scattered fields are related to the coefficients α_l^m and β_l^m of the incoming field by i_l^m = c_l α_l^m and j_l^m = d_l β_l^m for the internal field, and similarly for the scattered field through a_l and b_l. At the radius kr = x, the boundary conditions and the expansions for the electric and magnetic fields give four linear equations for the coefficients a_l, b_l, c_l, and d_l. The Mie
coefficients giving the field inside the object are [2]

c_l = ( j_l(x) [x h^{(1)}_l(x)]′ − h^{(1)}_l(x) [x j_l(x)]′ ) / ( j_l(mx) [x h^{(1)}_l(x)]′ − h^{(1)}_l(x) [mx j_l(mx)]′ ),   (4.11)

d_l = ( m j_l(x) [x h^{(1)}_l(x)]′ − m h^{(1)}_l(x) [x j_l(x)]′ ) / ( m² j_l(mx) [x h^{(1)}_l(x)]′ − h^{(1)}_l(x) [mx j_l(mx)]′ ),   (4.12)

where the primes denote differentiation with respect to the arguments in brackets.
4.2. Resonances in the Mie series. To find points in the spectrum of the scattering integral
operator, we now need to find those values of the refractive index m for which the Mie solution is not
well-behaved. One such possibility is that the denominator in Equation (4.11) or (4.12) becomes zero.
The case when a denominator of a Mie coefficient becomes zero or close to zero is called the Mie
resonance. It means that a single spherical harmonic mode is greatly amplified and efficiently suppresses
all the other modes. The resonances in Mie scattering have been studied quite extensively, see, e.g.,
[4, 8, 13, 26]. In these studies both the refractive index m of the scatterer and the size of the scatterer,
which is measured by the size parameter kr = 2πr/λ, can take complex values.
The imaginary part of the refractive index determines how the object absorbs electromagnetic energy.
A zero imaginary part means no absorption, a positive imaginary part is used for absorbing materials,
while a negative imaginary part signifies a somewhat unphysical case when the objects are generating
electromagnetic energy.
If the size parameter is real, the denominators of Mie coefficients can become zero only if the refractive
index has a negative imaginary part. However, in some cases the resonance appears with a refractive
index that has a very small negative imaginary part. In this case the proximity of the resonance is visible
in the scattering calculations also if we set the imaginary part of the resonant refractive index to zero.
Likewise, for an object with a refractive index with zero or positive imaginary part, the resonance can
appear only if the size parameter becomes complex. Sometimes the resonant size parameter has a very
small imaginary part, making the resonance visible at the corresponding real size parameter.
In this study we have fixed the size parameter and try to find all the values of the complex refractive index that satisfy the resonance conditions exactly. We must stress that this is purely a mathematical
trick to find the points in the spectrum of the integral equation and thus the refractive indices have no
physical meaning here.
To find the resonance points, we find the approximate locations of the zeros of the denominators of the Mie coefficients c_l and d_l for all orders l by visual inspection of the absolute value of the denominators when m
varies in the complex plane. After this, we feed these values as initial guesses to the root-finding routine
DZANLY in the IMSL library that uses Muller's method [20].
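In Python, the same polishing step can be sketched with mpmath's complex root finder (the paper itself used the IMSL routine DZANLY with Muller's method); here mie_denominator is a hypothetical user-supplied function returning the denominator of c_l or d_l for a given complex refractive index.

import mpmath

def find_resonant_index(mie_denominator, l, x, m_guess):
    # Refine a visually located zero of a Mie-coefficient denominator.
    # mie_denominator(m, l, x) is assumed to return the complex denominator
    # of c_l or d_l for refractive index m and size parameter x = kr.
    f = lambda m: mie_denominator(m, l, x)
    return mpmath.findroot(f, mpmath.mpc(m_guess))

# Usage, given some implementation of mie_denominator:
# m_res = find_resonant_index(mie_denominator, l=3, x=1.0, m_guess=2.5 - 0.1j)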
Figure 4.1 shows some of the resonant complex refractive indices m for a sphere of radius kr = 1. Note how the resonance positions gather along the positive real axis and at around −i. We did not compute the resonant refractive indices with large positive real parts, because all these will be mapped to very small z-values in Equation (4.5). If the refractive index m′ is resonant, so is −m′, but these will be mapped to the same z-point.
The preceding analysis indicates also that the coefficient matrix should become singular when we try
to compute scattering by a sphere with a resonant refractive index. For a resonant refractive index, the
mapping (4.5) indicates that 1 belongs to the spectrum of the integral operator K and thus there is a
zero eigenvalue in the coefficient matrix. In addition to this, the matrix should become badly conditioned
in the vicinity of such refractive indices and thus the convergence of iterative solvers should slow down.
This has indeed been observed in another study [11].
4.3. Branch cut of the square root. Now we turn our attention to the line segment in the
eigenvalue plots of the matrix and ask in what other way besides a resonance can the Mie solution fail.
To this end we will study the definition of the refractive index in the whole complex plane.
Recall that the refractive index is defined by m = √ε̃, where ε̃ is the complex permittivity. The square root y of a complex number z is defined as a solution of the equation y² = z. But this equation has two solutions. We will define the square root as being the solution that lies in the right half-plane, i.e., with non-negative real part. If the complex number z is given by z = r e^{iφ}, where −π < φ ≤ π, then the square root can be uniquely defined as √z = √r e^{iφ/2}. This is the main branch corresponding to positive square roots for positive real numbers.
There is a discontinuity associated with this definition of the square root. When z approaches the negative real axis from above, √z approaches the positive imaginary axis. However, when z approaches the negative real axis from below, √z approaches the negative imaginary axis. Thus there is a discontinuity in √z as z crosses the negative real axis. The negative real axis is called the branch cut of the complex square root.

Fig. 4.1. Some of the complex refractive indices m giving rise to a resonance in the Mie series for a sphere of radius kr = 1.
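The jump across the branch cut is easy to verify numerically; NumPy's principal square root uses the same branch:

import numpy as np

z_above = -4.0 + 1e-12j   # just above the negative real axis
z_below = -4.0 - 1e-12j   # just below the negative real axis
print(np.sqrt(z_above))   # approximately +2j
print(np.sqrt(z_below))   # approximately -2j: the value jumps across the cut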
We recall that in the Mie series solution, inside the scatterer the radial dependency of the solution is given by the spherical Bessel function j_l(k_1 r), where the wave number inside the object is given by k_1 = mk. Thus in the scattering problem, when the complex permittivity ε̃ sits on the negative real axis, the refractive index being purely imaginary, there is an ambiguity in the Mie solution due to the branch cut. Using the mapping (4.5), these refractive indices imply that the corresponding points z belong to the spectrum of the integral operator and that these points form a line segment in the complex plane.
The analysis of the branch cut of the refractive index and the mapping (4.4) are valid for any
homogeneous scatterer, not just the sphere. Thus it is clear that the position and size of the line segment
in the spectrum only depends on the refractive index, but not on the shape or size of the object. On the
other hand, the part of the spectrum corresponding to the resonances does depend on the shape and size
of the scatterer.
4.4. Numerical experiments. In Figure 4.2 we show the eigenvalues of the coefficient matrix for
two different discretizations of a sphere with kr = 1. We also show the points in the spectrum of the integral operator found with the help of the resonances of the Mie series and the branch cut of the square root. If m′ is a refractive index for which the scattering problem is not well defined, the corresponding points in the spectrum of the operator 1 − K are recovered from

λ = 1 − (m² − 1)/(m′² − 1).   (4.13)

Most of the eigenvalues lying outside the line can be explained by the resonance points, and when the discretization is refined, the agreement gets better. The endpoint of the line segment corresponding to the branch cut, i.e., scattering with purely imaginary refractive indices, is also shown. These refractive indices are of the type m′ = iy, where y is real. When y approaches infinity, the points in the spectrum approach 1. When y goes to zero, the points in the spectrum approach the point m², which is marked in the plots with a large circle.
Fig. 4.2. The eigenvalues of the coefficient matrix (small black dots) and some points in the spectrum of the integral operator due to the resonances (small circles). The points in the spectrum of the integral operator that correspond to scattering by objects with purely imaginary refractive indices extend from the point one to the point marked with a large circle. The eigenvalues were computed from a discretization with 280 computational cells (upper) and 1000 computational cells (lower). The radius of the sphere is given by kr = 1.

In Figure 4.3 we show the same information as in Figure 4.2 but with kr = 3 (480 computational cells) and kr = 5 (1064 computational cells), respectively. These figures show that when the size of the sphere is increased,
the number of eigenvalues off the line also increases.
Figure 4.4 shows the eigenvalues for an anisotropic spherical scatterer with three different refractive indices in the directions of the coordinate axes. Three line segments are visible in the eigenvalue plots. The segments start from 1, and they correspond to scattering with purely imaginary refractive indices in the mapping (4.13), where the refractive index m is replaced by any of the three indices.
The fact that the scattering integral operator has a continuous spectrum implies that the operator
cannot be compact. However, in a related situation, there is a theorem by Colton and Kress [3, Section 9]
that states that the scattering integral operator is a compact operator if the refractive index is a smooth
function of r.
In the case of scattering by a homogeneous sphere, the refractive index is not smooth but instead has
a discontinuity at the surface of the scatterer.

Fig. 4.3. Same as Figure 4.2 but with size kr = 3 and 480 computational cells (upper), and with size kr = 5 and 1064 computational cells (lower).

Corresponding to the case analyzed by Colton and Kress,
we also computed the eigenvalues of the coefficient matrices of the integral equation when the refractive
index varies smoothly. In this case, the line segment was still visible in the eigenvalue plots, but the
eigenvalues no longer resided uniformly along the line but converged towards 1. This is the behavior one
would expect of the discretization of the identity operator plus a compact operator.
5. Convergence estimates for iterative solvers. In this section we study the possibility of
estimating the speed of convergence of iterative solvers from the knowledge of the eigenvalues of the
coefficient matrix. In general, the convergence of iterative methods is dictated by the distribution of the
eigenvalues, the conditioning of the eigenvalues and the right-hand side vector.
We shall now give a basic convergence result for Krylov-subspace methods, i.e., iterative methods based only on the information given by successive matrix-vector products with the matrix A. We shall denote the linear system by Ax = b, the current iterate by x_n and the residual by r_n = b − A x_n, with x_0 and r_0 being the initial guess and initial residual, respectively.
Fig. 4.4. The eigenvalues of an anisotropic sphere with different refractive indices in the directions of the coordinate axes. The sphere is discretized with 1064 computational cells. For each of the three indices, the endpoint of the line segment corresponding to purely imaginary refractive indices in the mapping (4.13) is shown as a large circle.

Iterative Krylov-subspace methods produce iterates x_n such that the corresponding residual r_n is given by r_n = p_n(A) r_0, where p_n is a polynomial of degree n with p_n(0) = 1. The task of iterative methods is to construct polynomials p_n such that the norm of the residual decreases rapidly when n increases.
Suppose that the coefficient matrix A is diagonalizable and thus has a complete set of eigenvectors v_i, i = 1, …, n. Denote by V the matrix whose columns are the v_i and by Λ the diagonal matrix of the corresponding eigenvalues λ_i. The matrix A can be decomposed as A = V Λ V^{−1}. Given this decomposition we can estimate the norm of the matrix polynomial p_n(A) in terms of the polynomial evaluated at the eigenvalues:

‖p_n(A)‖ ≤ κ(V) max_i |p_n(λ_i)|,

where the condition number of the eigenvalue matrix is given by κ(V) = ‖V‖ ‖V^{−1}‖.
In our experiments we will use two iterative methods: the generalized minimal residual method
(GMRES) [24] without restarts and the complex symmetric version of the quasi-minimal residual method
(QMR) [7]. GMRES minimizes the residual of the current iterate among all the possible iterates in
a Krylov subspace, a subspace generated by successive multiplications by the initial residual with the
coefficient matrix. QMR has only an approximate minimization property and thus converges slower
than GMRES, but its iterates are much cheaper to compute. QMR is the iterative solver used in our
production code.
For the iterative method GMRES, we have the following theorem [18]:

‖r_n‖ / ‖r_0‖ ≤ κ(V) min_{p_n, p_n(0)=1} max_i |p_n(λ_i)|.

In other words, the iterative method GMRES finds a polynomial such that the maximum value of the polynomial at the eigenvalues is minimized. In the rest of this paper, we assume that the eigenvectors are well-conditioned and thus κ(V) is close to unity.
The question of the optimal convergence speed in iterative methods has been studied by, e.g., Nevanlinna [19], who studied the iteration of the problem x = Kx + g. We study the so-called linear phase of the convergence of iterations. The fastest possible linear convergence of the residual is given by ‖r_n‖ ≈ C η^n, where η is the optimal reduction factor. Nevanlinna shows by potential-theoretic arguments that this is given by η = 1/|φ(1)|, where φ is the conformal map from the outside of the spectrum of K to the outside of the unit disk. Here we have assumed that the spectrum of K is a simply connected region in the complex plane.
Now we will try to estimate the convergence speed of iterative solvers assuming the spectrum is a line
segment in the complex plane, thus neglecting the few eigenvalues off the line. We are working with the
spectrum of the scattering integral operator K defined in (4.2), not the operator 1 − K that corresponds to the coefficient matrix. The line segment in the spectrum of K starts from zero, has a negative imaginary part, makes an angle of α with the negative real axis, and has a length of d. From the analysis in the preceding section it follows that d = |m² − 1| and α = arg(m² − 1).
The conformal map that maps the outside of the line segment to the outside of the unit disk consists of three parts: φ₁ rotates the line segment onto the negative real axis (i.e., to the segment [−d, 0]), φ₂ maps this segment to the segment [−1, 1], and finally φ₃ maps the outside of this segment to the outside of the unit disk. These maps are explicitly given by

φ₁(z) = e^{−iα} z,   φ₂(z) = 1 + 2z/d,   φ₃(z) = z + √(z² − 1).

Thus the optimal reduction factor is given by

η = 1 / |φ₃(c)|,   where   c = φ₂(φ₁(1)) = 1 + (2/d) e^{−iα}.
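The estimate is easy to evaluate; the sketch below follows the maps as reconstructed above, so the choices d = |m² − 1|, α = arg(m² − 1) and the orientation of φ₁ are assumptions of this reconstruction rather than statements taken verbatim from the original text.

import numpy as np

def optimal_reduction_factor(m):
    # Line-segment spectrum of K from 0 to 1 - m**2: length d, angle alpha
    # below the negative real axis.
    d = abs(m**2 - 1.0)
    alpha = np.angle(m**2 - 1.0)
    # Image of the constraint point 1 under phi2(phi1(.)).
    c = 1.0 + 2.0 * np.exp(-1j * alpha) / d
    phi = c + np.sqrt(c**2 - 1.0)
    if abs(phi) < 1.0:          # pick the branch that maps to |phi| >= 1
        phi = 1.0 / phi
    return 1.0 / abs(phi)

print(optimal_reduction_factor(1.4 + 0.05j))   # about 0.17 per iteration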
Figure 5.1 shows the convergence behavior of QMR and GMRES together with the estimated optimal convergence speed for a sphere with refractive index m = 1.4 + 0.05i and with size kr = 1 (upper image) and kr = 3 (lower image). It can be seen that the convergence estimate and the actual behavior
of iterative methods are very close in the case kr = 1. Also, the convergence of QMR is very close to
GMRES. In the case of kr = 3 we notice that the observed convergence of iterative methods is much
slower than the predicted rate. This is due to the increased number of eigenvalues that lie off the line, as
can be seen from Figure 4.3. The convergence could be better estimated by drawing a polygon around
the spectrum and using the Schwarz-Christoffel theorem [6] to compute the values of a conformal map
from this polygon to the outside of the unit disk, but this approach has not been pursued further.
6. Conclusions. We have studied the eigenvalues of the coefficient matrix arising from a discretization
of the volume integral equation of electromagnetic scattering. The eigenvalues of a coefficient matrix
for a spherical scatterer consist of a line segment plus some isolated points. We have studied how these
eigenvalues and the spectrum of the scattering integral operator are related. To find the spectrum of the
integral operator, we show that it is sufficient to find all the refractive indices for which the scattering
problem is not well defined. These indices are then mapped to the points in the spectrum of the operator.
We have shown that the isolated eigenvalues of the coefficient matrix correspond to exact resonances
in the analytical solution of scattering by a sphere. The line segment in the eigenvalue plots corresponds
to scattering by purely imaginary refractive indices, which is related to the branch cut of the square root
in the definition of the complex refractive index and the wave number inside the scatterer.
The knowledge of the eigenvalues of the integral operator can help us to understand the convergence
of iterative solution methods for the discretized scattering problem.

Fig. 5.1. Convergence given by the optimal reduction factor (solid line), convergence of GMRES (dashed line), and of QMR (dash-dotted line) for a sphere with kr = 1 (upper) and kr = 3 (lower) and with refractive index m = 1.4 + 0.05i.

For example, we have observed that
when the same object is discretized with increasing resolution, the number of unpreconditioned iterations
is practically constant. The convergence of iterative solvers depends on the eigenvalue distribution of the
coefficient matrix. Successively finer discretizations of the scattering integral equation produce coefficient
matrices which have approximately the same eigenvalue distribution and thus the same convergence
properties.
In contrast, when partial differential equations are discretized with increasingly finer meshes, the
convergence of unpreconditioned iterative solvers typically gets worse. This situation can arise if zero
belongs to the spectrum of the partial differential operator. When such a problem is discretized with
finer and finer meshes, some of the eigenvalues of the coefficient matrices move closer and closer to zero,
giving rise to poor convergence of iterative solvers. This problem can sometimes be remedied with the
help of preconditioners.
Finally, we have tried to predict the convergence speed of iterative solvers based on the knowledge
of the eigenvalues. In doing so, we have only used the location and length of the line segment in the
eigenvalue plots, thus neglecting the isolated eigenvalues that lie outside the line. This strategy gives
convergence estimates that are very close to the observed convergence for small scatterers. Once the
size of the sphere is increased, the number of eigenvalues lying off the line is increased and thus the
convergence estimate quickly becomes useless.
The analysis presented in this paper could be applied to other scattering geometries, such as spheroids,
for which analytical solutions exist. It would also be interesting to compute the eigenvalues arising from
more refined discretization schemes of the volume integral operator. The convergence analysis of iterative
solvers presented here could be augmented to account for the eigenvalues lying off the line segment in the
complex plane. However, for a given physical problem it is quite tedious to compute all the resonance
locations and thus this approach will probably not give a practical convergence analysis tool.
Acknowledgment. I would like to thank Francis Collino for fruitful discussions on the subject.
--R
Absorption and Scattering of Light by Small Particles
Inverse Acoustic and Electromagnetic Scattering Theory
Resonant spectra of dielectric spheres
Translational addition theorems for spherical vector wave functions
Algorithm 756: A Matlab toolbox for Schwarz-Christoffel mapping
Conjugate gradient-type methods for linear systems with complex symmetric coefficient matrices
Electromagnetic resonances of free dielectric spheres
Scattering by irregular inhomogeneous particles via the digitized Green's function algorithm
Application of fast-Fourier-transform techniques to the discrete- dipole approximation
Accuracy of internal fields in VIEF simulations of light scattering
Mathematical foundations for error estimation in numerical solutions of integral equations in electromagnetics
Strong and weak forms of the method of moments and the coupled dipole method for scattering of time-harmonic electromagnetic fields
On two numerical techniques for light scattering by dielectric agglomerated structures
Light scattering by porous dust particles in the discrete-dipole approximation
How fast are nonsymmetric matrix iterations?
Convergence of Iterations for Linear Equations
Numerical Recipes in Fortran - The Art of Scientific Computing
Solution of dense systems of linear equations in electromagnetic scattering calculations.
GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems
Resonances and poles of weakly absorbing spheres
--TR
--CTR
Matthys M. Botha, Solving the volume integral equations of electromagnetic scattering, Journal of Computational Physics, v.218 n.1, p.141-158, 10 October 2006 | electromagnetic scattering;spectrum of linear operators;eigenvalues of matrices;iterative methods;integral equations |
351531 | A Priori Sparsity Patterns for Parallel Sparse Approximate Inverse Preconditioners. | Parallel algorithms for computing sparse approximations to the inverse of a sparse matrix either use a prescribed sparsity pattern for the approximate inverse or attempt to generate a good pattern as part of the algorithm. This paper demonstrates that, for PDE problems, the patterns of powers of sparsified matrices (PSMs) can be used a priori as effective approximate inverse patterns, and that the additional effort of adaptive sparsity pattern calculations may not be required. PSM patterns are related to various other approximate inverse sparsity patterns through matrix graph theory and heuristics concerning the PDE's Green's function. A parallel implementation shows that PSM-patterned approximate inverses are significantly faster to construct than approximate inverses constructed adaptively, while often giving preconditioners of comparable quality. | Introduction
A sparse approximate inverse approximates the inverse of a (usually sparse) matrix A by a sparse matrix M. This can be accomplished, for example in the least-squares method, by minimizing the matrix residual norm

‖AM − I‖_F   (1.1)
with the constraint that M is sparse. In general, the degrees of freedom of this
problem are the nonzero values in M as well as their locations. A minimization
that considers all these variables simultaneously, however, is very complex, and thus
a simple approach is to prescribe the set of nonzeros, or the sparsity pattern S, of M before performing the minimization. The objective function (1.1) can then be decoupled as the sum of squares of the 2-norms of the n individual columns, i.e.,

‖AM − I‖²_F = Σ_{j=1}^{n} ‖A m_j − e_j‖²_2,   (1.2)

in which e_j and m_j are the jth columns of the identity matrix and of the matrix M,
respectively. Each least-squares matrix is small, having a number of columns equal
to the number of nonzeros in its corresponding m j . If A is nonsingular, then the
least-squares matrices have full rank.
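For a prescribed pattern, each column problem in (1.2) is a small dense least-squares solve; the following is a minimal SciPy sketch (for clarity it forms the full tall-and-thin block, while an efficient code would restrict to the rows that are nonzero in the selected columns).

import numpy as np
import scipy.sparse as sp

def approx_inverse_column(A, j, pattern_j):
    # Solve min || A[:, pattern_j] m_hat - e_j ||_2 for the values of column j
    # of M, with the nonzero locations prescribed by pattern_j.
    A = sp.csc_matrix(A)
    Asub = A[:, pattern_j].toarray()       # n-by-|pattern_j| block
    e_j = np.zeros(A.shape[0])
    e_j[j] = 1.0
    m_hat, *_ = np.linalg.lstsq(Asub, e_j, rcond=None)
    return pattern_j, m_hat                # indices and values of column j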
Thus the approximate inverse can be constructed by solving n least-squares problems
in parallel. However, sparse approximate inverses are attractive for parallel preconditioning primarily because the preconditioning operation is a sparse matrix by
vector product. The cost of constructing the approximate inverse for a large matrix
is usually so high, especially with the adaptive pattern selection strategies described
below, that they are only competitive if they are constructed in parallel.
For diagonally dominant A, the entries in A 1 decay rapidly away from the diagonal
[17], and a banded pattern for M will produce a good approximate inverse.
(The form for the right approximate inverse is used here, which is notationally slightly clearer. If the matrix is distributed over parallel processors by rows, a left approximate inverse can be computed row-wise.)
In a general setting, without application-specific information, it is not clear how best to choose a sparsity pattern for M. Algorithms have been developed that first compute an approximate inverse with an initial pattern S; then S is updated and a new minimization problem is solved either exactly or inexactly. This process is repeated until a threshold on the residual norm has been satisfied, or a maximum number of nonzeros has been reached [12, 15, 22]. We refer to these as adaptive procedures.
One such procedure is to use an iterative method, such as minimal residual, starting with a sparse initial guess [12] to approximately minimize (1.2), i.e., to find sparse approximate solutions to

A m_j = e_j,   j = 1, …, n.   (1.3)

Suppose a sparse initial guess for m_j is used. The first few iterates will be sparse. To maintain sparsity, a strategy to drop small elements is usually used, either in the search direction or the iterates. No prescribed pattern S is necessary since the sparsity pattern emerges automatically. For this method to be efficient, sparse-sparse operations must be used: the product of a sparse matrix by a sparse vector with p nonzeros only involves p columns of the sparse matrix.
Another adaptive procedure, called SPAI [15, 22], uses a numerical test to determine which nonzero locations should be added to the current sparsity pattern. For the jth column, the numerical test for adding a nonzero in location k has the form

(r^T A e_k)² / ‖A e_k‖²_2 ≥ tolerance,   (1.4)

where r = e_j − A m_j is the residual for a given sparsity pattern of column j, and m_j is the current approximation. The test (1.4) is a lower bound on the improvement in the square of the residual norm when the pattern of m_j is augmented. Entry k is added if the tolerance is satisfied, or if the left-hand side of (1.4) is large compared to its values for other k. The cost of performing this test is a sparse dot product between r and column k of A for each location to test. Interprocessor communication is needed to test a k corresponding to a column not on a local processor.
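A minimal sketch of the test value for one candidate location k, given the current residual r = e_j − A m_j as a dense vector:

import scipy.sparse as sp

def spai_test_value(A, r, k):
    # Left-hand side of (1.4): (r^T A e_k)^2 / ||A e_k||_2^2.
    a_k = sp.csc_matrix(A).getcol(k)       # sparse column k of A
    rTa = (a_k.T @ r)[0]                   # sparse-times-dense dot product
    aTa = a_k.multiply(a_k).sum()
    return rTa**2 / aTa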
These adaptive algorithms utilize the additional degrees of freedom in the minimization afforded by the locations of the nonzeros in M and have allowed much more general problems to be solved than before. Adaptive methods, however, tend to be very expensive. Thus, this paper focuses on the problem of selecting S in a preprocessing step so that a sparse approximate inverse can be computed immediately by minimizing (1.1). Section 2 first examines the sparsity patterns that are produced by both non-adaptive and adaptive schemes. Section 3 tests the idea of using the patterns of powers of sparsified matrices (PSMs) as a priori sparsity patterns for sparse approximate inverses. Numerical tests are presented in Section 4, with comparisons to both sequential and parallel versions of some current methods. Finally, Section 5 draws some conclusions.
2. Graph interpretations of approximate inverse sparsity patterns.
2.1. Use of the pattern of A and variants. The structure of a sparse matrix
A of order n is the directed graph G(A), whose vertices are the integers 1, …, n and whose edges (i → j) correspond to nonzero off-diagonal entries in A. (This notation usually implies matrices with all nonzero diagonal entries.) A subset of G(A) is a directed graph with the same vertices, but with a subset of the edges in G(A). The graph G(A) is a representation of the sparsity pattern of A, and when it is clear from the context, we will not distinguish between them.
The structure of a vector x of order n is the subset of {1, …, n} that corresponds to the nonzero entries in x. When there is an associated matrix of order n, we often refer to the structure of x as a subset of the vertices of the associated matrix. Notice then, that the structure of column j of a matrix A is the set of vertices in G(A) that have edges pointing to vertex j, plus vertex j itself. The structure of row j is vertex j plus the set of vertices pointed to by vertex j.
The inverse of a matrix shows how each unknown in a linear system depends
on the other unknowns. The structure of the matrix A shows only the immediate
dependencies. This suggests that in the structure of A^{−1} there is an edge (i → j) whenever there is a directed path from vertex i to vertex j in G(A) [21] (if A is nonsingular, and ignoring coincidental cancellation). This structure is called the transitive closure of G(A), and is denoted G*(A). For an irreducible matrix, this result
says that the inverse is a full matrix, but it does suggest the possibility of truncating
the transitive closure process to approximate the inverse by a sparse matrix.
A heuristic that is often employed is that vertices closer to vertex j along directed
paths are more important, and should be retained in an approximate inverse sparsity
pattern. This idea is supported by the decay in the elements observed by Tang [30]
in the discrete Green's function for many problems.
These sparsity patterns were first used by Benson and Frederickson [4] in the symmetric case, who also defined matrices with these patterns to be q-local matrices.
Given a graph G(A) of a structurally symmetric matrix A with a full diagonal, the
structure of the jth column of a q-local matrix consists of vertex j and its qth level
nearest-neighbors in G(A). A 0-local matrix is a diagonal matrix, while a 1-local
matrix has the same sparsity pattern as A.
The sparsity pattern of A is the most common a priori pattern used to approximate
A^{−1}. It gives good results for many problems, but can usually be improved, or fails for many other problems. One improvement is to use higher levels of q. Unfortunately, the storage for these preconditioners grows very quickly when q is increased, and is even impractical in many cases [20].
Huckle [24] proposed similar patterns which may be more effective when A is nonsymmetric. These include the patterns corresponding to the graphs G((A^T A)^k A^T) and G((A^T + A)^k) for small k. The density of the former, in particular, grows very quickly with increasing k. Primarily, these patterns are useful
as envelope patterns from which the adaptive SPAI algorithm can select its pattern.
This gives an upper bound on the interprocessor communication required by a parallel
implementation [23].
Cosgrove and Díaz [14] proposed augmenting the pattern of A without going to the full 2-local matrix. They suggested adding nonzeros to m_j in a way that
minimizes the number of new rows introduced into the jth least-squares matrix (in
expression (1.2)). The augmented structure is determined only from the structure
of A. Kolotilina and Yeremin [28] proposed similar heuristics for augmenting the
sparsity pattern for factorized sparse approximate inverses.
Sparsification. Instead of augmenting the pattern of A, it is also possible to diminish the pattern of A when A is relatively full. This can be accomplished by sparsification (dropping small elements in the matrix A) and using the resulting pattern.
This was introduced by Kolotilina [26] for computing sparse approximate inverses for
dense matrices (see also [10, 32]), and Kaporin [25] for sparse matrices and factorized
approximate inverses.
Sparsification can be combined with the use of higher level neighbors. Tang [30] showed that sparsifying a matrix prior to applying the adaptive SPAI algorithm is effective for anisotropic problems. The observation is that the storage and therefore operation count required for preconditioners produced this way are much smaller. This technique can generate patterns that are those of powers of sparsified matrices. The idea of explicitly combining sparsification with the use of higher level neighbors was used by Alleon et al. [1], who attribute the technique to Cosnuau [16]. For approximating the inverse of dense matrices in electromagnetics, however, their tests showed that higher levels were not warranted. Tang and Wan [31] also used a sparsification before applying a q-local matrix pattern, for q > 1, for approximate inverses used as multigrid smoothers. They showed that the sparsification does not cause a deterioration in convergence rate for their problems. Both the work by Alleon et al. and Tang and Wan represent the first uses of PSM patterns.
Instead of applying the sparsification to A, it is also appropriate in some cases to apply the sparsification to the sparse approximate inverse after it has been computed [27, 31]. This is useful to reduce the cost of using the approximate inverse when it is relatively full.
2.2. Insights from adaptive schemes. Adaptive schemes can generate patterns that are very different from the pattern of A; for example, the generated patterns can be much sparser than A. Nevertheless, the patterns produced by adaptive schemes can be interpreted using the graph of A.
Consider first the approximate minimization method described in Section 1. The following algorithm finds a sparse approximate solution to A m = e_j using minimal residual iterations. A dropping strategy for elements in the search direction r is encapsulated by the function "drop" in step 4, which may depend on the current pattern of m.
Algorithm 2.1. Sparse approximate solution to A m = e_j
1. m := sparse initial guess
2. r := e_j − A m
3. Loop until ‖r‖ < tol or reached max. iterations
4.    d := drop(r)
5.    q := A d
6.    α := (r, q)/(q, q)
7.    m := m + α d
8.    r := r − α q
9. EndLoop
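A direct serial transcription of Algorithm 2.1 in Python is given below; the drop rule (keep the lfil largest entries of r) and the parameter values are illustrative choices, and dense work vectors are used for clarity even though an efficient code would keep m, d, and r sparse.

import numpy as np
import scipy.sparse as sp

def mr_column(A, j, lfil=5, tol=1e-2, maxiter=10):
    # Sparse approximate solution of A m = e_j by minimal residual iterations
    # with dropping (Algorithm 2.1); assumes a nonzero diagonal entry A[j, j].
    A = sp.csc_matrix(A)
    n = A.shape[0]
    m = np.zeros(n)
    m[j] = 1.0 / A[j, j]                   # a common sparse initial guess
    e = np.zeros(n)
    e[j] = 1.0
    r = e - A @ m
    for _ in range(maxiter):
        if np.linalg.norm(r) < tol:
            break
        d = np.zeros(n)                    # drop: keep the lfil largest entries
        keep = np.argsort(np.abs(r))[-lfil:]
        d[keep] = r[keep]
        q = A @ d
        alpha = np.dot(r, q) / np.dot(q, q)
        m += alpha * d
        r -= alpha * q
    return m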
The elements in r not already in m are candidates for new elements in m. The
vector r is generated essentially by the product Am and thus the structure of r is the
set of vertices that have edges pointing to vertices in the structure of m. If the initial
guess for m consists of a single nonzero element at location j, then the structure of m grows outward from vertex j in G(A) with each iteration of the above algorithm.
If the search direction in the iterative method is A^T r instead of r, then the kth entry in the search direction is r^T A e_k. A dropping strategy based on the size of these entries is similar to one based on the test (1.4), which attempts to minimize the updated residual norm. In this method, the candidates for new elements are the first
and second level neighbors of the vertices in m (for nonsymmetric A, the directions
of the edges are important).
This is exactly the graph interpretation for the SPAI algorithm (see Huckle [24]).
When computing column j of M , vertices far from vertex j will not enter into the
pattern, at least initially. Algebraically, this means that the nonzero locations of r
and A e_k do not intersect, and the value of the test is zero. An efficient implementation
of SPAI uses these graph ideas to narrow down the indices k that need to be checked.
An early parallel implementation of SPAI [18] tested only the first level neighbors of a vertex, rather than both the first and second levels. This is a good approximation
in many cases. This implementation also assumed A is structurally symmetric, so
that one-sided interprocessor communication is not necessary. A more recent parallel
implementation of SPAI [2, 3] implements the algorithm exactly. This code implements
one-sided communication with the Message Passing Interface (MPI), and uses
dynamic load balancing in case some processors finish computing their rows earlier
than others.
3. Patterns of powers of sparsified matrices.
3.1. Graph interpretation. In Section 2, we observed that prescribed sparsity
patterns or patterns generated by the adaptive methods are generally subsets of the
pattern of low powers of A (given that A has a full diagonal) and typically increase in
accuracy with higher powers. Clearly all these methods are related to the Neumann
series or characteristic polynomial for A [24].
The structure of column j of the approximate inverse of A is a subset of the
vertices in the level sets (with directed edges) about vertex j in G(A). Good vertices
to choose are those in the level sets in the neighborhood of vertex j, but the algorithms
differ in how these vertices are selected.
For convection-dominated and anisotropic problems, upstream vertices or vertices in the preferred directions will have a greater influence than others on column j of the inverse. Figure 3.1 shows the discrete Green's function for a point on a PDE with convection. The nonsymmetry of the function shows that upstream nonzeros in a row or column of the exact inverse are greater in magnitude than others. Without additional physical information such as the direction of flow, however, it is often possible to use sparsification to identify the preferred or upstream directions.
Fig. 3.1. Green's function for a point on a PDE with convection.
We examined the sparsity patterns produced by the adaptive algorithms and tried
to determine if they could have been generated by simpler graph algorithms. For some
simple examples, it turned out that the structures produced are exactly or very close
to the transitive closures of a subset of G(A), i.e., of the structure of a sparsified matrix. In Figure 3.2 we show the structures of several matrices: (a) ORSIRR2 from the Harwell-Boeing collection, (b) A′, a sparsification of the original matrix, (c) the transitive closure G*(A′), and (d) the structure produced by the SPAI algorithm. This latter figure was selected from [2], which shows it as an example of an effective sparse
approximate inverse pattern for this problem. (There are, however, some bothersome
features of this example: the approximate inverse is four independent diagonal blocks.)
Note that we can approximate the adaptively generated pattern (d) very well by the
pattern (c) generated using the transitive closure.
(a) ORSIRR2
(b) Sparsified ORSIRR2
(c) Transitive closure of (b)
(d) Pattern from SPAI
Fig. 3.2. The adaptively generated pattern (d) can be approximated by the transitive closure (c).
3.2. How to sparsify. The simplest method to sparsify a matrix is to retain
only those entries in a matrix greater than a global threshold, thresh. In the example of Figure 3.2, a global threshold was used. It was important, however, to make sure that
the diagonal elements were retained, otherwise a structurally singular matrix would
have resulted in this case. In general, the diagonal should always be retained.
One strategy for choosing a threshold is to choose one that retains, for example,
one-third of the original nonzeros in a matrix. Fewer nonzeros should be retained
if powers of this sparsified matrix have numbers of nonzeros that grow too quickly.
This is how thresholds were chosen for the small problems tested in Section 4. The
number of levels used may be increased until a preconditioner reaches a target number
of nonzeros. The best choices for these parameters will be problem-dependent. For
special problems, this strategy may not be effective, for example, when a matrix
contains only a few unique values.
When a matrix is to be sparsified using a global threshold, how the matrix is scaled becomes important. It is often the case that a matrix contains many different types of equations and variables that are not scaled the same way. For example, consider a 3-by-3 matrix which has its first row and column scaled by a large number, Z. If a threshold Z is chosen, and if the diagonal of the matrix is retained, the third row of the sparsified matrix has become independent of the other rows. We thus apply the thresholding to a matrix that has been symmetrically scaled so that it has all ones on its diagonal. A threshold less than or equal to 1 will guarantee that the diagonal is retained. The scaling also makes it easier to choose a threshold. This method of scaling is not foolproof, but does avoid some simple problems.
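A sketch of this procedure in SciPy: symmetrically scale by the diagonal (in magnitude), apply a global threshold, and always keep the diagonal, so that any thresh less than or equal to 1 retains it.

import numpy as np
import scipy.sparse as sp

def sparsify_threshold(A, thresh):
    # Sparsified, diagonally scaled matrix whose pattern is used for M.
    A = sp.csr_matrix(A)
    d = np.abs(A.diagonal())
    d[d == 0.0] = 1.0                      # guard against zero diagonal entries
    D = sp.diags(1.0 / np.sqrt(d))
    B = (D @ A @ D).tocoo()                # unit-magnitude diagonal
    keep = (np.abs(B.data) >= thresh) | (B.row == B.col)
    return sp.csr_matrix((B.data[keep], (B.row[keep], B.col[keep])),
                         shape=A.shape)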
In the graph of the sparsified matrix A′, each vertex should have some connections to other vertices. This can be accomplished by sparsifying the matrix A such that one retains at least a fixed number of edges to or from each vertex, for example, the ones corresponding to the largest matrix values. Given a parameter fill, this can be implemented for column j by selecting the diagonal (jth) element plus the fill − 1 largest off-diagonal elements in column j of the original matrix. This guarantees that there are fill − 1 vertices with edges into vertex j. Applied row-wise, this guarantees fill − 1 vertices with edges emanating from vertex j. Again, we choose explicitly to keep the diagonal of the matrix; thus each column (or row) has at least fill nonzeros. Choosing fill may be simpler and more meaningful than choosing a threshold on the matrix values. Different values of fill may be used for different vertices, depending on the vertex's initial degree (number of incident edges).
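The fill-based variant can be sketched column by column: keep the diagonal plus the fill − 1 largest off-diagonal entries of each column.

import numpy as np
import scipy.sparse as sp

def sparsify_fill(A, fill):
    # Keep the diagonal plus the (fill - 1) largest off-diagonal entries in
    # each column; different fill values per column could be used instead.
    A = sp.csc_matrix(A)
    rows, cols, vals = [], [], []
    for j in range(A.shape[1]):
        col = A.getcol(j).tocoo()
        off = col.row != j
        order = np.argsort(np.abs(col.data[off]))[::-1][:fill - 1]
        keep_r = np.concatenate(([j], col.row[off][order]))
        keep_v = np.concatenate(([A[j, j]], col.data[off][order]))
        rows.append(keep_r)
        cols.append(np.full(len(keep_r), j))
        vals.append(keep_v)
    return sp.csc_matrix((np.concatenate(vals),
                          (np.concatenate(rows), np.concatenate(cols))),
                         shape=A.shape)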
Let S_0 denote the structure of a matrix A′ that has been sparsified from A. Let S_i denote the structure (with the same set of vertices) that has an edge (j → k) whenever there is a path of distance i or less from j to k in S_0. The structure S_{i−1} is a subset of S_i. In matrix form, S_i is the structure of (A′)^i, ignoring coincidental cancellation. These are called level set expansions of a sparsified matrix or patterns of powers of a sparsified matrix (PSM patterns). Heuristic 3 tested by Alleon et al. [1] is equivalent to S_i using a variable fill at each level to perform sparsification.
We mention that it is also possible to perform sparsification on S_i after every level set expansion. We denote this variant by S̄_i. For this variant, values need to be computed, and we propose the following, which stresses the larger elements in A. If "drop" denotes a sparsification process, then we can define A_1 = drop(A) and A_i = drop(A A_{i−1}), and let S̄_i be the structure of A_i; note that S̄_{i−1} is not generally a subset of S̄_i. More complicated strategies are possible; the thresholds can be different for each level i. Note that determining S̄_i is much more difficult than determining S_i since values need to be computed.
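Given the sparsified matrix A′, the pattern S_i is just the structure of its ith power; a minimal SciPy sketch that carries only structural information:

import scipy.sparse as sp

def psm_pattern(A_sparsified, level):
    # Structure of (A')**level: an entry (j, k) is present whenever there is a
    # path of length at most `level` from j to k in the graph of A'
    # (A' is assumed to retain a full diagonal).
    S = sp.csr_matrix(A_sparsified).copy()
    S.data[:] = 1.0                        # keep only the structure
    P = S.copy()
    for _ in range(level - 1):
        P = P @ S                          # structural matrix-matrix product
        P.data[:] = 1.0
    return P

# Example, combined with the thresholding sketch above (illustrative value):
# S2 = psm_pattern(sparsify_threshold(A, 0.1), 2)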
3.3. Factorized forms of the approximate inverse. Sparse approximate
inverses for the Cholesky or LU factors of A are often used. The analogue of the
least-squares method (minimization of (1.1)) here is the factorized sparse approximate
inverse (FSAI) technique of Kolotilina and Yeremin [28], implemented in parallel by
Field [19]. If the normal equations method is used to solve the least-squares systems,
the Cholesky or LU factors are not required to compute the approximate inverse. This
means, however, that the adaptive pattern selection schemes cannot be used, since
the matrix whose inverse is being approximated is not available. A priori sparsity
patterns must be used instead.
Given A = LU, a factorized sparse approximate inverse approximates U^{−1} and L^{−1} by sparse matrices G and H, respectively, so that GH ≈ U^{−1} L^{−1} = A^{−1}. The patterns for G and H should be chosen such that the pattern of GH is close in some sense to good patterns for approximating A^{−1}. Supposing that S is a good pattern for A^{−1}, then the upper and lower triangular parts of S are good patterns for G and H, since the pattern of GH includes the pattern S. These patterns will be
tested in Section 4.
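In SciPy, the corresponding G and H patterns are simply the upper and lower triangular parts of a chosen pattern S for the approximate inverse, for example a PSM pattern:

import scipy.sparse as sp

def factorized_patterns(S):
    # Split a pattern S for the approximate inverse into patterns for
    # G (approximating U^{-1}, upper triangular) and H (approximating L^{-1}).
    G_pattern = sp.triu(S, k=0, format='csr')
    H_pattern = sp.tril(S, k=0, format='csr')
    return G_pattern, H_pattern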
It may also be possible to use the patterns of the powers of the exact or approximate
L and U if they are known. These L and U factors are not discretizations of
PDE's, but their inverses are often banded with elements decaying rapidly away from
the main diagonal. This technique may be appropriate if approximate L and U are
available, for example from a very sparse incomplete LU factorization.
As opposed to the inverses of irreducible matrices, the inverses of Cholesky or
LU factors are often sparse. An ordering should thus be applied to A that gives
factors whose inverses can be well approximated by sparse matrices. Experimentally,
fewer nonzeros in the exact inverse factors translate into lower construction cost and
better performance for factorized approximate inverses computed by an incomplete
biconjugation process [7, 8]. The transitive closure can be used to compute the number
of nonzeros in the exact inverse of a Cholesky factor, based on the height of all
the nodes in the elimination tree. This has led to reordering strategies that approximately
minimize the height of the elimination tree and thus the number of nonzeros in
the inverse factors, and allows some prediction of how well these approximate inverses
might perform on a given problem [7, 8].
3.4. Approximate inverse of a Schur complement. To determine a good
pattern for a Schur complement matrix, we notice that if B is a leading principal submatrix of A and S is the Schur complement of B in A, then the (2,2) block of A^{−1}, in the partitioning induced by B, is S^{−1}. Thus a good sparsity pattern for S^{−1} can be determined from a good sparsity pattern for A^{−1}; it is simply the (2,2) block of a good sparsity pattern for A^{−1}. B should be of small order compared to the global matrix or else the method will be overly costly. In a code, it may be possible to compute the approximate inverse of A and extract the approximation to S^{−1}, or compute a partial approximate inverse, i.e., those rows or columns of the approximate inverse that correspond to S^{−1} [11]. Again, S should be of almost the same order as
A.
3.5. Parallel computation. Computation of S_i is equivalent to structural sparse matrix-matrix products of sparsified matrices. The computation can also be viewed as n level set expansions, one for each row or column, which can be performed in parallel. For vertices that are near other vertices on a different processor, some communication will be necessary. Communication can be reduced by partitioning the graph of the sparsified matrix among the processors such that the number of edge-cuts is reduced.
Unfortunately, in general, one-sided communication is required to compute sparse
approximate inverses. Processors need to request rows from other processors, and a
processor cannot predict which rows it will need to send. One-sided communication
may be implemented in MPI by having each processor occasionally probe for messages
from other processors. The latency between probes is a critical performance factor
here. In a multithreaded environment, it is possible to dedicate some threads on
a local processing node to servicing requests for rows (server threads), while the
remaining threads compute each row and make requests for rows when necessary
(worker threads).
Consider a matrix A and an approximate inverse M to be computed that are
partitioned the same way by rows across several processors. Algorithm 3.1 describes
one organization of the parallel computation. Each processor computes a level set
expansion for all of its rows before continuing on to the next level. At each
level, the requests and replies to and from a processor are coalesced, allowing fewer
and larger messages to be used. Like in [3], external rows of A are cached on a
processor in case they are needed to compute other rows. There is no communication
during the numerical phase when the values of M are being computed. This algorithm
was implemented using occasional probing for one-sided communication.
Algorithm 3.1. Parallel level set expansions for computing S_i
Communicate rows
1.  Initialize the set of vertices V to empty
2.  Sparsify all the rows on the local processor
3.  Merge the structures of all the locally sparsified rows into V
4.  For each additional level
5.     For nonlocal k ∈ V, request and receive row k
6.     Sparsify received rows
7.     Merge structures of new sparsified rows into V
8.  EndFor
9.  For nonlocal k ∈ V, request and receive row k
Compute structure of each row
10. For each local row j
11.    Initialize V_j to a single entry in location j
12.    For each level
13.       For k ∈ V_j, merge the sparsified structure of row k into V_j
14.    EndFor
15. EndFor
Compute values of each row
16. For each local row j, find m_j minimizing ‖e_j^T − m_j^T A‖_2, where m_j^T has the pattern V_j
We also implemented a second parallel code, which has the following features:
multithreaded, to take advantage of multiple processors per shared-memory
node on symmetric multiprocessor computers
uses server and worker threads to more easily implement one-sided commu-
nication
uses a simpler algorithm than Algorithm 3.1: computes each row and performs
all the associated communications before continuing on to the next row; when
multiple threads are used, this avoids worker threads needing to synchronize
and coordinate which rows to request from other nodes; smaller messages are
used, but communication is also spread over the entire execution time of the
algorithm
scalable with its use of memory, but is thus also slower than the first version, which used direct-address tables (traded memory for faster computation)
Timings for this second code will be reported in the next section. Some limited timings for the first code will also be shown. We are also working on a factorized
implementation for symmetric matrices, which will guarantee that the preconditioner
is also symmetric. This implementation makes a simple change in step 16 of Algorithm
3.1, and does not require one-sided communication when the full matrix is stored.
4. Numerical tests.
4.1. Preconditioning quality. First we test the quality of sparsity patterns
generated by powers of sparsified matrices on small problems from the Harwell-Boeing
collection. In particular, we chose problems that were tested with SPAI [22] in order
to make comparisons. We performed tests in exactly the same conditions: we solve the same linear systems using GMRES(20) to a relative residual tolerance of 10^{-8} with a zero initial guess. We report the number of GMRES steps needed for convergence, or indicate no convergence using the symbol †. Right preconditioning was used.
In Tables 4.1 to 4.5, we show test results for S^i for both unfactored (column 2)
and factored (column 3) forms of the approximate inverse. We compare the results to
the least-squares (LS) method using the pattern of the original matrix A, and FSAI,
the least-squares method for the (nonsymmetric) factored form [28], again using the
pattern of the original matrix A. For the unfactored form, we also display the result
of the SPAI method reported in [22], using their choice of parameters. Adaptive
methods for factored forms are also available [5, 6, 29] but were not tested here.
Global thresholds (shown for each table) on a scaled matrix were used to perform
these sparsifications. In the tables, we also show the number of nonzeros nnz in the
unfactored preconditioners (the entry for LS/FSAI is the number of nonzeros in A).
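A minimal sketch of the sparsification step, assuming symmetric diagonal scaling and a global drop threshold as described above; the exact scaling used in the experiments is not specified here, so the code is illustrative only.

import numpy as np
import scipy.sparse as sp

def sparsify(A, threshold):
    # Drop entries of the diagonally scaled matrix smaller than the threshold,
    # always keeping the diagonal.
    d = np.sqrt(np.abs(A.diagonal()))
    d[d == 0.0] = 1.0
    Dinv = sp.diags(1.0 / d)
    B = (Dinv @ A @ Dinv).tocoo()
    keep = (np.abs(B.data) >= threshold) | (B.row == B.col)
    return sp.csr_matrix((B.data[keep], (B.row[keep], B.col[keep])), shape=A.shape)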
The results show that preconditioners of almost the same quality as the adaptive
SPAI can be achieved using the S i patterns. In some cases, even better preconditioners
can result, sometimes with even less storage (Table 4.1). The results also show that
using the pattern of A does not generally give as good a preconditioner for these
problems.
SHERMAN2 is a relatively hard problem for sparse approximate inverses. The result reported in [22] shows that SPAI could reduce the residual norm by 10^{-5} with a preconditioner with 26327 nonzeros in 7 steps. The results are similar with S^i patterns, but the full residual norm reduction can be achieved with an approximate inverse that is denser. Note that in this case, the sparsification
Table 4.1. Iteration counts for ORSIRR2.
Pattern unfactored factored nnz
LS/FSAI 335 383 5970
SPAI 84 5318
Table 4.2. Iteration counts for SHERMAN1.
Pattern unfactored factored nnz
LS/FSAI 145 456 3750
SPAI
threshold was applied to the original matrix rather than to the diagonally scaled
matrix, although the matrix has values over 27 orders of magnitude; the diagonal
scaling is not foolproof. Factorized approximate inverses were not effective for this
problem with these patterns.
SAYLR4 is a relatively hard problem for GMRES. Grote and Huckle [22] report
that SPAI could not solve the problem with GMRES, but could with BiCGSTAB.
This is also true for S^i patterns; the results in Table 4.5 are with BiCGSTAB.
There are, of course, many problems that are difficult for PSM-patterned approximate
inverses. These include NNC666 and GRE1107 from the Harwell-Boeing
collection, and FIDAP problems from Navier-Stokes simulations. These problems,
however, can be solved using adaptive methods [12]. These problems pose difficulties
for PSM patterns because the Green's function heuristic is invalid; the problems either
are not PDE problems, or have been modified (e.g., the FIDAP problems used a
penalty formulation).
In Tables 4.6 and 4.7, we show test results for S^i for the unfactored form of the approximate inverse. S^0 and the LS patterns were the same for these problems. Since S^{i+1} is not generally a superset of S^i, there is no guarantee that S^{i+1} is a better pattern than S^i in terms of the norm of R = I - AM. To show this, we also display these matrix residual norms. The parameter fill (shown for each table) was used to sparsify these matrices after each level set expansion. Again, the results show that the a priori methods can approach the quality of the adaptive methods very closely.
4.2. Parallel timing results. The results above show that the preconditioning
quality for PDE problems is not significantly degraded by using the non-adaptive schemes based on powers of sparsified matrices. In this section we illustrate the main
advantage of these preconditioners: their very low construction costs compared to the
adaptive schemes.
In this and Section 4.3 we show results using the multithreaded version of our
code, ParaSAILS (parallel sparse approximate inverse, least squares). Like the parallel
version of SPAI [3] with which we make comparisons, ParaSAILS is implemented
Table 4.3. Iteration counts for SHERMAN2.
Pattern unfactored factored nnz
LS/FSAI † † 23094
SPAI † 26327
Table 4.4. Iteration counts for PORES3.
Pattern unfactored factored nnz
LS/FSAI † † 3474
SPAI 599 16745
as a preconditioner object in the ISIS++ solver library [13]. Both these codes generate
a sparse approximate inverse partitioned across processors by rows; thus left
preconditioning is used. In all the codes, the least-squares problems that arose were
solved using LAPACK routines for QR decomposition. For problems with relatively
full approximate inverses, solving these least-squares problems takes the majority of
the computing time.
Tests were run on multiple nodes of an IBM RS/6000 SP supercomputer at the
Lawrence Livermore National Laboratory. Each node contains four 332 MHz PowerPC
CPU's. Timings were performed using user-space mode, which is much more
efficient than internet-protocol mode. However, nonthreaded codes can only use one
processor per node in user-space mode. We tested SPAI with one processor per node,
and ParaSAILS with up to four processors per node. The iterative solver and matrix-vector
product codes were also nonthreaded, and used only one processor per node.
The first problem we tested is a finite element model of three concentric spherical shells with different material properties. The matrix has order 16881. The SPAI algorithm using the default parameters (target residual norm for each row less than 0.4) produced a much sparser preconditioner, with 171996 nonzeros, and solved the problem using GMRES(50). For comparison purposes, we chose parameters for ParaSAILS that gave a similar number of nonzeros in the preconditioner. In particular, S^3 with a fill parameter of 3 gave a preconditioner with 179550 nonzeros, and solved the
problem in 331 steps. Figure 4.1 shows the two resulting sparsity patterns. Table
4.8 reports, for various numbers of nodes (npes), the wall-clock times for the preconditioner
setup phase (Precon), the iterative solve phase (Solve), and the total time
(Total). The time for constructing the preconditioner in each code includes the time
for determining the sparsity pattern. Due to the relatively small size of this (and
the next) problem, only one worker and one server thread was used per node (i.e.,
two processors per node) in the ParaSAILS runs; one processor was used in the SPAI
runs.
For comparison, in Table 4.9 we show the results using the first (nonthreaded,
occasional MPI probes for one-sided communication) version of the code. This code
Table 4.5. Iteration counts for SAYLR4.
Pattern unfactored factored nnz
LS/FSAI † † 22316
SPAI 67 84800
Table 4.6. Iteration counts for SHERMAN3.
Pattern steps nnz ||R||_F
LS † 20033 17.3620
SPAI 264 48480
is much faster because it uses direct-address arrays to quickly merge the sparsity
patterns of rows; direct-address arrays have length the global size of the matrix and
are not scalable.
Fig. 4.1. Structure of the sparse approximate inverses for the concentric shells problem: (a) ParaSAILS, (b) SPAI.
We tested a second, larger problem, which models the work-hardening of metal by squeezing it to make it pancake-like. In this particular example, the pattern of a sparsified matrix (S^0) with a fill parameter of 3 led to a good preconditioner. ParaSAILS produced a preconditioner with 141848 nonzeros and solved the problem in 142 steps; SPAI produced a preconditioner with 120192 nonzeros
Table 4.7. Iteration counts for SHERMAN4.
Pattern steps nnz ||R||_F
LS 199 3786 6.2503
SPAI 86 9276
Table 4.8. Timings for concentric shells problem.
ParaSAILS SPAI
npes Precon Solve Total Precon Solve Total
and solved the problem in 139 steps. Results for varying numbers of processing nodes
are shown in Table 4.10. The results show that the non-adaptive ParaSAILS algorithm
implemented here is many times faster than the adaptive SPAI algorithm.
4.3. Implementation scalability. In this section, we experimentally investigate
the implementation scalability of constructing sparse approximate inverses with
ParaSAILS. Let T (n; p) be the time to construct an approximate inverse of order n
on a parallel computer using p processors. We define the scaled efficiency to be E(n, p) = T(n, 1)/T(pn, p). If E(n, p) = 1, then the implementation is perfectly scalable, i.e., one could double the size of the problem and the number of processors without increasing the execution time. However, as long as E(n, p) is bounded away from zero for a fixed n as p is increased, we say that the implementation is scalable.
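For concreteness, scaled efficiency can be computed directly from measured setup times; the timings in the example call below are made up for illustration.

def scaled_efficiency(times):
    # times[p] holds T(p*n, p) for a fixed per-processor problem size n.
    t1 = times[1]
    return {p: t1 / t for p, t in sorted(times.items())}

print(scaled_efficiency({1: 2.0, 8: 2.4, 64: 2.9, 500: 3.6}))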
We consider the 3-D constant coefficient PDE
   -c_x u_xx - c_y u_yy - c_z u_zz = f  in Ω,   u = 0 on ∂Ω,
discretized using standard finite differences on a uniform mesh, with the anisotropic coefficients chosen so that the coupling in the z direction is strongest. This problem has been used to test the scalability of multigrid solvers [9]. The problems are a constant size per compute node, from 10^3 to 60^3 local problem sizes. Node topologies of 1^3 to 5^3 were used. Thus the largest problem was over a cube with (60 x 5)^3 = 27 million unknowns. Each node used all four processors (4 worker threads, 1 server thread) for this problem in the preconditioner construction phase (i.e., 500 processors in the largest configuration). A threshold for ParaSAILS was chosen so that only the nonzeros along the strongest (z) direction are retained, and the S^3 pattern was used. Although this is a symmetric problem, the preconditioner is not symmetric, and we used GMRES(50) as the iterative solver with a zero initial guess. The convergence tolerance was 10^{-6}.
Table 4.9. Concentric shells problem: timings for preconditioner setup using direct addressing.
npes Precon
Table 4.10. Timings for work-hardening problem.
ParaSAILS SPAI
npes Precon Solve Total Precon Solve Total
Table 4.11 shows the results, including wall-clock times for constructing the preconditioner and the iterative solve phase, the number of iterations required for convergence, the average time for one iteration in the solve phase, and Ep and Es, the scaled efficiencies for constructing the preconditioner and for one step in the solve phase. Figures 4.2 and 4.3 graph the scaled efficiencies Ep and Es, respectively. The implementation seems scalable for all values of p that may be encountered. For comparison, in Table 4.12, we show results for SPAI using a 40^3 local problem size. Larger problems led to excessive preconditioner construction times. Again, one processor per node was used for SPAI.
5. Conclusions. This paper demonstrates the effectiveness of patterns of powers of sparsified matrices for sparse approximate inverses for PDE problems. As opposed to many existing methods for prescribing sparsity patterns, PSM patterns use both the values and structure of the original matrix, and very sparse patterns can be produced. PSM patterns allow simpler direct methods of constructing sparse approximate inverse preconditioners to be used, with comparable preconditioning quality to adaptive methods, but with significantly less computational cost. The numerical tests show that the additional effort of adaptive sparsity pattern calculations is not always required.
Acknowledgments. The author is indebted to Wei-Pai Tang, who was one of the first to use sparsification for computing approximate inverse sparsity patterns. John Gilbert was instrumental in directing attention to the transitive closure of a matrix, and motivating the possibility of finding good patterns a priori. Michele Benzi made helpful comments and directed the author to [24]. The author is also grateful for the ongoing support of Robert Clay, Andrew, Esmond Ng, Ivan Otero, Yousef Saad, and Alan Williams, and finally for the cogent comments of the anonymous referees.
Table 4.11. Timings, iteration counts, and efficiencies for ParaSAILS.
(For each local problem size, e.g., 50 x 50 x 50, the columns are: npes, N, Precon, Solve, Iter, Solve/Iter, Ep, Es.)
Table 4.12. Timings, iteration counts, and efficiencies for SPAI.
(40 x 40 x 40 local problem size; columns: npes, N, Precon, Solve, Iter, Solve/Iter, Ep, Es.)
Fig. 4.2. Implementation scalability of ParaSAILS preconditioner construction phase.
Fig. 4.3. Implementation scalability of one step of iterative solution.
--R
An MPI implementation of the SPAI preconditioner on the T3E
A portable MPI implementation of the SPAI preconditioner in ISIS
Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems
An ordering method for a factorized approximate inverse pre- conditioner
Semicoarsening multigrid on distributed memory machines
On a class of preconditioning methods for dense linear systems from boundary ele- ments
Approximate inverse techniques for block-partitioned matrices
Etude d'un pr
Decay rates for inverses of band matrices
Parallel implementation of a sparse approximate inverse preconditioner
Predicting structure in sparse matrix computations
Parallel preconditioning with sparse approximate inverses
A preconditioned conjugate gradient method for solving discrete analogs of
Explicit preconditioning of systems of linear algebraic equations with dense matrices
Factorized sparse approximate inverse preconditionings.
Factorized sparse approximate inverse precondition- ings I
Iterative Methods for Sparse Linear Systems
Towards an e
Sparse approximate inverse smoother for multi-grid
Preconditioning for boundary integral equations
--TR
--CTR
Kai Wang , Sang-Bae Kim , Jun Zhang , Kengo Nakajima , Hiroshi Okuda, Global and localized parallel preconditioning techniques for large scale solid Earth simulations, Future Generation Computer Systems, v.19 n.4, p.443-456, May
Robert D. Falgout , Jim E. Jones , Ulrike Meier Yang, Conceptual interfaces in hypre, Future Generation Computer Systems, v.22 n.1, p.239-251, January 2006
Robert D. Falgout , Jim E. Jones , Ulrike Meier Yang, Pursuing scalability for hypre's conceptual interfaces, ACM Transactions on Mathematical Software (TOMS), v.31 n.3, p.326-350, September 2005
Kai Wang , Jun Zhang , Chi Shen, Parallel Multilevel Sparse Approximate Inverse Preconditioners in Large Sparse Matrix Computations, Proceedings of the ACM/IEEE conference on Supercomputing, p.1, November 15-21,
Chi Shen , Jun Zhang, Parallel two level block ILU Preconditioning techniques for solving large sparse linear systems, Parallel Computing, v.28 n.10, p.1451-1475, October 2002
Dennis C. Smolarski, Diagonally-striped matrices and approximate inverse preconditioners, Journal of Computational and Applied Mathematics, v.186 n.2, p.416-431, 15 February 2006
Edmond Chow, Parallel Implementation and Practical Use of Sparse Approximate Inverse Preconditioners with a Priori Sparsity Patterns, International Journal of High Performance Computing Applications, v.15 n.1, p.56-74, February 2001
Oliver Bröker, Marcus J. Grote, Sparse approximate inverse smoothers for geometric and algebraic multigrid, Applied Numerical Mathematics, v.41 n.1, p.61-80, April 2002
Luca Bergamaschi , Giorgio Pini , Flavio Sartoretto, Computational experience with sequential and parallel, preconditioned Jacobi--Davidson for large, sparse symmetric matrices, Journal of Computational Physics, v.188
Michele Benzi, Miroslav Tůma, A parallel solver for large-scale Markov chains, Applied Numerical Mathematics, v.41 n.1, p.135-153, April 2002
Anwar Hussein , Ke Chen, Fast computational methods for locating fold points for the power flow equations, Journal of Computational and Applied Mathematics, v.164-165 n.1, p.419-430, 1 March 2004
P. K. Jimack, Domain decomposition preconditioning for parallel PDE software, Engineering computational technology, Civil-Comp press, Edinburgh, UK, 2002
E. Flórez, M. D. García, L. González, G. Montero, The effect of orderings on sparse approximate inverse preconditioners for non-symmetric problems, Advances in Engineering Software, v.33 n.7-10, p.611-619, 29 November 2002
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | preconditioned iterative methods;graph theory;sparse approximate inverses;parallel computing |
351534 | Mesh Independence of Matrix-Free Methods for Path Following. | In this paper we consider a matrix-free path following algorithm for nonlinear parameter-dependent compact fixed point problems. We show that if these problems are discretized so that certain collective compactness and strong convergence properties hold, then this algorithm can follow smooth folds and capture simple bifurcations in a mesh-independent way. | Introduction
The purpose of this paper is to extend the results in [4], [5], and [16] on
mesh-independent convergence of the GMRES, [21], iterative method for linear equations to a
class of matrix-free methods for the solution of parameter-dependent nonlinear equations of the form
(1.1)   G(u, λ) = 0
in a Banach space X.
We present an algorithm for numerical path following and detection of simple bifurcations together with conditions on a sequence of approximate problems,
(1.2)   G_h(u, λ) = 0
(with G_0 = G for consistency of notation), that imply that the performance of the algorithm is independent
of the level h of the discretization. Hence, for such problems, methods, and discretizations,
the difficulties raised in [23] and [24] will not arise.
In this section we set the notation and specify the kinds of singularities that we will consider. This discussion, and that of algorithms for path following in § 2.1 and detection of simple bifurcation and branch switching in § 3, do not depend on the discretization, and we use the notation G for both G_0 and G_h for h > 0. When we describe our assumptions on the discretization in § 4 and present an example in § 5, the distinction between G_0 and G_h becomes important and the notation
in those sections reflects that difference.
1.1. Notation and Simple Singularities. We let Γ = {(u, λ) ∈ X × IR : G(u, λ) = 0} denote the solution path in X × IR of pairs (u, λ) that satisfy (1.1); G_u denotes the Fréchet derivative with respect to u, and G_λ the derivative with respect to the scalar λ.
Version of May 19, 1998.
y Department of Applied Mathematics, National Chiao-Tung University, Hsin-Chu 30050, Taiwan.
(ferng@math.nctu.edu.tw). The research of this author was supported by National Science Council, Taiwan.
z North Carolina State University, Department of Mathematics and Center for Research in Scientific Computation,
Box 8205, Raleigh, N. C. 27695-8205 (Tim Kelley@ncsu.edu). The research of this author was supported by
National Science Foundation grant #DMS-9700569.
Throughout this paper we make
ASSUMPTION 1.1. G is continuously Fréchet differentiable in (u, λ). G_u is a Fredholm operator of index zero, and G_u(u, λ) − I ∈ COM(X). Here COM(X) denotes the space of compact operators on X.
For A ∈ L(X), the space of bounded operators on X, we let N(A) be the null space of A and R(A) the range of A. Assumption 1.1 implies that R(G_u(u, λ)) is closed in X and the dimension of N(G_u(u, λ)) is the co-dimension of R(G_u(u, λ)).
Following [14], we say that a point (u, λ) ∈ Γ
• is a regular point if G_u(u, λ) is nonsingular,
• is a simple singularity if 0 is an eigenvalue of G_u with algebraic and geometric multiplicity one,
• is a simple fold if it is a simple singularity and G_λ ∉ R(G_u),
• is a simple bifurcation point if it is a simple singularity and G_λ ∈ R(G_u).
2. Algorithms for Path Following and Branch Switching. In this section we describe the
iterative methods for path following, detection of bifurcation, and branch switching that we analyze
in the subsequent sections and discuss some alternative approaches. Our approach is matrix-free,
which means that we use no matrix storage or matrix factorizations at all. However, there are
many related methods to which our mesh-independence analysis is applicable. We refer the reader
to [13] and [7] for other methods for path following and branch switching and to [20], [25], and
[10] for iterative methods for detection of bifurcation.
2.1. Path Following and Arclength Continuation. For path following in the absence of singularities the standard approach is a predictor-corrector method. The methods differ in the manner in which the predicted point is computed. Assume that (u_0, λ_0) is a solution point and we want to compute the solution at a nearby λ_1.
If the Jacobian G_u(u_0, λ_0) is nonsingular, then the implicit function theorem insures the existence of a unique smooth arc of solutions (u(λ), λ) through (u_0, λ_0). Furthermore, with the smoothness assumption on G, it follows that u' = (d/dλ) u(λ) exists, and differentiation of (1.1) with respect to λ gives the equation for u':
(2.1)   G_u u' = −G_λ.
The Euler predictor uses u' directly and approximates
(2.2)   u(λ_1) ≈ u_0 + (λ_1 − λ_0) u'(λ_0).
While the results in this paper can be applied to the solution of (2.1) by GMRES, we choose to avoid the additional solve and approximate u' with a difference. The secant predictor is
(2.3)   u^0 = u_0 + (λ_1 − λ_0)(u_0 − u_{−1})/(λ_0 − λ_{−1}),
where (u_{−1}, λ_{−1}) is a previously computed solution pair with λ_{−1} ≠ λ_0.
The nonlinear iteration is an inexact Newton iteration [6] in which u_{k+1} = u_k + d_k, where d_k satisfies the inexact Newton condition
   ‖G(u_k, λ) + G_u(u_k, λ) d_k‖ ≤ η_k ‖G(u_k, λ)‖.
The inexact Newton condition can be viewed as a relative residual termination criterion for an iterative linear solver applied to the equation for the Newton step
   G_u(u_k, λ) d_k = −G(u_k, λ).
When GMRES is used as the linear iteration, as it is in this paper, the nonlinear solver is referred to as Newton-GMRES.
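A minimal matrix-free Newton-GMRES corrector sketch in Python/SciPy, assuming a user-supplied residual G(u, lam); the fixed forcing term eta and the difference increment are simplifications of the choices used later in the paper, and the GMRES keyword rtol is named tol in older SciPy releases.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(G, u, lam, tol=1e-10, eta=1e-2, maxit=20):
    for _ in range(maxit):
        r = G(u, lam)
        if np.linalg.norm(r) <= tol:
            break
        eps = 1e-7 * max(1.0, np.linalg.norm(u))        # difference increment
        Jv = lambda v: (G(u + eps * v, lam) - r) / eps  # forward-difference J*v
        J = LinearOperator((u.size, u.size), matvec=Jv, dtype=float)
        d, _ = gmres(J, -r, rtol=eta)                   # inexact Newton condition
        u = u + d
    return u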
The same procedure can be used if there are simple folds if the problem is expanded by using pseudo-arclength continuation [12], [13]. Here we introduce a new parameter s and solve
(2.5)   F(x, s) = (G(u, λ), N(u, λ, s))^T = 0,
where x = (u, λ) and N is a normalization equation. We will use the secant normalization (2.6) when two points on the path are available and the norm-based normalization (2.7) to begin the path following. Both normalizations are independent of the discretization. This independence plays a role in the analysis that follows.
It is known [14] that arclength continuation turns simple folds in (u, λ) into regular points in (x, s).
3. Singular Point Detection and Branch Switching. Although a continuation procedure incorporated
with a pseudo-arclength normalization can circumvent the computational difficulties
caused by turning points and usually jump over bifurcation points, it is usually desirable and important
to be able to detect and locate the singularities in both cases. In the case where direct
methods are used for the linear algebra, a sign change in the determinant of F_x can be used to detect simple bifurcation and a change in sign of dλ/ds to detect simple folds. We use a variation
of the approach in [10], which requires only the largest (in magnitude) solution of a generalized
eigenvalue problem.
Suppose that (x(s_a), s_a) and (x(s_b), s_b) are two regular points and the path following procedure is going from s_a to s_b. Let A(s) = F_x(x(s), s), so that A(s_a) and A(s_b) are nonsingular. Recall that if (x(s), s) is a bifurcation point on the solution path, then the null space N(A(s)) is one dimensional and there exists a nonzero vector v such that
(3.1)   A(s) v = 0.
Applying the Lagrange interpolation to A(s) on [s_a, s_b] gives
(3.2)   A(s) = [(s_b − s) A(s_a) + (s − s_a) A(s_b)]/(s_b − s_a) + E(s),
where the matrix E(s) denotes the perturbation matrix in the interpolation. Then instead of solving the usually nonlinear eigenvalue problem (3.1) we make a linear approximation,
   [(s_b − s) A(s_a) + (s − s_a) A(s_b)] v = 0,
which leads to a generalized eigenvalue problem
(3.3)   A(s_a) w = σ A(s_b) w,
with
(3.4)   σ = (s − s_a)/(s − s_b).
Therefore, an s value which causes the Jacobian F_x(x(s), s) to be singular can be approximated by
(3.5)   s̄ = (σ s_b − s_a)/(σ − 1).
Our approach differs from that in [10] in that the roles of s_a and s_b are interchanged. The method of [10] solves
(3.6)   A(s_b) w = θ A(s_a) w,
where
(3.7)   θ = (s − s_b)/(s − s_a).
In the approach of [10], F_x(x, s) is factored at s_a in order to compute dx/ds for the predictor; that
factorization is used again to precondition an Arnoldi iteration for the corrector, and one last time
in the formulation of the eigenvalue problem to detect singularities. Since we do not factor F x at
all, the roles of s a and s b can be interchanged. We found in our experiments that our approach,
using (3.3), gave a better approximation of the location of the singularity that one using (3.6).
Equation (3.4) implies that a bifurcation point s closest to the interval [s_a, s_b] corresponds to the largest eigenvalue in magnitude σ of (3.3). A negative σ signals that there is a bifurcation point between s_a and s_b, a small positive σ indicates that there is a bifurcation point close behind s_a, and a "large" positive σ means that a bifurcation point is approaching. When σ ≈ 1, it should be interpreted that no bifurcation point is nearby.
Since (x(s_b), s_b) is a regular point and A(s_b) is nonsingular, the generalized eigenvalue problem (3.3) can be solved via the equivalent linear eigenvalue problem
(3.8)   A(s_b)^{−1} A(s_a) w = σ w,
where only the largest eigenvalue and corresponding eigenvector are needed. We will solve (3.8) with the Arnoldi method and prove that if s_a and s_b are not too near a singular point then the eigenvalue problem is well conditioned in a mesh independent way.
A simple fold point can be predicted in a similar way with s replaced by λ and A(s) replaced by G_u(u(λ), λ).
At a simple bifurcation point two branches of solutions intersect nontangentially. We next describe how the information obtained from the solution of the eigenvalue problem can be used for branch switching. Suppose a bifurcation point x* on the primary branch is determined. The eigenvector w corresponding to the largest eigenvalue σ of the eigenvalue problem described above is an approximation for a null vector of F_x. On the other hand, the tangent vector ẋ = dx/ds, which is the solution of the linear system (3.9) obtained by differentiating F(x(s), s) = 0 with respect to s, is also a null vector of [G_u  G_λ]. Since w and ẋ are in general linearly independent, it is recommended in [10], where G_u is factored and dx/ds is computed using (3.9), that N([G_u  G_λ]) be approximated by span{ẋ, w}. In the matrix-free case considered in this paper, we approximate the tangent vector by a secant approximation δx formed from the two most recently computed points on the path. We approximate the tangent direction of the new branch by orthogonalizing w against δx:
(3.10)   w̄ = w − (⟨w, δx⟩/⟨δx, δx⟩) δx.
To obtain a regular point on the secondary branch, we solve the following augmented nonlinear system
(3.11)   G(u, λ) = 0,   ⟨w̄, x − x*⟩ = ε,
where x* is the computed bifurcation point and ε is a "switching factor". This switching factor is problem dependent and should be chosen large enough so that the solution of (3.11) does not fall back to the primary branch [14]. The nonlinear equation (3.11) can be solved by a Newton-type method with initial iterate x_0 = x* + ε w̄. After moving onto the secondary branch, the continuation procedure for path following on the secondary branch is identical to the one on the primary branch.
In [10] the linear systems for computing tangent vectors (3.9) are solved with a preconditioned
Arnoldi iteration and the eigenvalues of the Hessenberg matrix produced in the Arnoldi iteration
are used to predict singular points. Since a factorization of A(s a ) is used as the preconditioner
in [10], the prediction comes with very little cost. In our approach, Jacobians are not factored
and the tangent vector is not computed, instead, a secant vector is used as an approximation.
The eigenvalue problem for singularity prediction is performed separately using some iterative
algorithm. This leaves us the flexibility of choosing any robust iterative solver and the associated
preconditioner for the continuation procedure.
Other methods for singularity detection based on solution of eigenvalue problems have been
proposed in [25], [9], and [20].
4. Approximations and Mesh Independence. This is the only section in which the properties
of the discrete problems are explicitly addressed. We assume, as is standard in the integral
equations literature [1], [3], that the discretization used in (1.2) has been constructed, by interpolation
if necessary, so that G h has the same domain and range as G.
Recall [1] that a family of operators {T_α}_{α∈A} is collectively compact if the set
(4.1)   {T_α u : α ∈ A, u ∈ B}
is precompact in X. In (4.1) B is the unit ball in X. In the special case that the indexing set A is an interval [0, h_0], we will denote the index by h. We say that T_h converges to T strongly as h → 0, and write T_h → T, if T_h u → T u in the norm of X for all u ∈ X.
All of the results are based on the following results from [1] and some simple consequences.
THEOREM 4.1. Let {T_h} be a family of collectively compact operators on X that converge strongly to T ∈ COM(X). Assume that I − T is nonsingular. Then there is h_0 > 0 such that I − T_h is nonsingular for all h ≤ h_0 and (I − T_h)^{−1} converges strongly to (I − T)^{−1}.
A simple compactness argument implies uniform bounds for parameter dependent families of collectively compact strongly convergent operators.
COROLLARY 4.2. Let a < b be given and assume that {T_h(s) : h ∈ (0, h_0], s ∈ [a, b]} is a collectively compact family of operators such that T_h(s) → T(s) strongly as h → 0 for each fixed s. Assume that {T_h(s)} is uniformly Lipschitz continuous in s and that I − T(s) is nonsingular for all s ∈ [a, b]. Then there are M, h_0 > 0 such that
   ‖(I − T_h(s))^{−1}‖ ≤ M
for all h ≤ h_0 and s ∈ [a, b].
We assume the following.
ASSUMPTION 4.1.
1. G_h(u, λ) → G(u, λ) for all (u, λ) ∈ X × IR as h → 0.
2. G_h is Lipschitz continuously Fréchet differentiable and the Lipschitz constants of G_u^h and G_λ^h are independent of h.
3. For all (u, λ) ∈ X × IR,
(4.2)   G_u^h(u, λ) − I ∈ COM(X).
4. There are δ_0, h_0 > 0 such that the families of operators {G_u^h(u, λ) − I : h ≤ h_0, (u, λ) within δ_0 of the solution path} are collectively compact.
We augment G with a mesh-independent Lipschitz continuously differentiable arclength normalization N, defining
   F_h(x, s) = (G_h(u, λ), N(u, λ, s))^T = 0.
Examples of normalizations N that do not depend on h are (2.6) and (2.7).
Consistently with the notation in the previous sections, we let u_h(λ) and x_h(s) denote solutions to G_h(u, λ) = 0 and F_h(x, s) = 0.
4.1. The Corrector Equation and Simple Folds. At regular points, when λ is used as the continuation parameter, the equation for the Newton step is
   G_u^h(u, λ) d = −G_h(u, λ).
The theory in [1] asserts that if G_u(u, λ) is nonsingular, so is G_u^h(u, λ) for sufficiently small h, and G_u^h(u, λ)^{−1} is strongly convergent to G_u(u, λ)^{−1}. However, this convergence is not uniform and in
order to invoke the results of [1] and [5] we must take care to remain away from singular points.
One can treat simple folds as regular points by means of arclength continuation. If we solve the augmented system (2.5), and this system has only regular points, then ‖F_x(x(s), s)^{−1}‖ is bounded on finite segments of Γ. If, moreover, the normalization N(u, λ, s) does not depend on the discretization, as it will not if (2.6) or (2.7) are used, then our assumptions imply that the finite dimensional problems are as well conditioned as the infinite dimensional problems.
THEOREM 4.3. Let Assumptions 1.1 and 4.1 hold. Let [s_a, s_b] be such that Γ̄ = {(x(s), s) : s ∈ [s_a, s_b]} is a single smooth arc and F_x is nonsingular on Γ̄. Then there are δ_1, K, and h_0 such that F_x^h(x, s) is nonsingular and
(4.7)   ‖F_x^h(x, s)^{−1}‖ ≤ K
for all h ≤ h_0 and (x, s) ∈ N(δ_1), where N(δ) = {(x, s) : s ∈ [s_a, s_b], ‖x − x(s)‖ ≤ δ}.
Proof. Assumption 1.1 and the mesh independence of N imply that we may apply Corollary 4.2 to T_h(s) = I − F_x^h(x(s), s). Hence, there are M and h_0 such that for all h ≤ h_0 and s ∈ [s_a, s_b],
   ‖F_x^h(x(s), s)^{−1}‖ ≤ M.
Let L be the (h-independent) Lipschitz constant of F_x^h. Then if ‖x − x(s)‖ ≤ δ,
   ‖F_x^h(x, s) − F_x^h(x(s), s)‖ ≤ Lδ.
Moreover, if MLδ < 1 the Banach Lemma implies that F_x^h(x, s) is nonsingular and ‖F_x^h(x, s)^{−1}‖ ≤ M/(1 − MLδ). Setting δ_1 = 1/(2ML), the proof is complete with K = 2M.
The bound (4.7), part 2 of Assumption 4.1, and the Kantorovich theorem [11], [15], [17] imply convergence of x_h to x.
COROLLARY 4.4. Let Assumptions 1.1 and 4.1 hold. Let [s_a, s_b] be such that {(x(s), s) : s ∈ [s_a, s_b]} is a single smooth arc with at most simple fold singularities. Then, for h sufficiently small, there is a unique solution arc {(x_h(s), s) : s ∈ [s_a, s_b]} for F_h, and x_h(s) → x(s) uniformly in s.
So, for h sufficiently small, the secant predictor for the discrete problem converges to that for the continuous problem uniformly in s. Hence, if the steps in arclength {δs_n} are independent of h and the secant predictor is used, then the accuracy of the initial iterate to the corrector equation and, by Theorem 4.3, the condition of the linear equation for the Newton step are independent of h. Hence the performance of the nonlinear iteration is independent of h.
As for the GMRES iteration that computes the Newton step, the methods from [16], [4], and [5] may be extended to show that the GMRES iteration for the Newton step converges r-superlinearly in a manner that is independent of h, x, and s. All that one needs is a uniform clustering of the eigenvalues of F_x^h on N(δ) that is independent of x, s, and h. This follows directly from a theorem in [1]. In the results that follow, we count eigenvalues by multiplicity and order them by decreasing distance from 0.
THEOREM 4.5. Let {T_h} be a family of collectively compact operators on X that converge strongly to T ∈ COM(X). Let {λ_j^h}_{j=1}^∞ be the eigenvalues of T_h and {λ_j}_{j=1}^∞ be the eigenvalues of T. Let ρ > 0 and assume that no eigenvalue of T has magnitude ρ. Then there is h_0 > 0 such that if h ≤ h_0 then the eigenvalues of T_h of magnitude larger than ρ converge to those of T. Moreover, if λ_1 has algebraic and geometric multiplicity one, then so does λ_1^h. Moreover, there is a sequence of eigenfunctions {w_h} of T_h corresponding to the eigenvalue λ_1^h such that w_h → w, an eigenfunction of T corresponding to the eigenvalue λ_1.
With this in hand, the eigenvalue clustering result can be obtained in the same way that Theorem 4.3 was derived from Theorem 4.1. We begin with the analog of Corollary 4.2.
COROLLARY 4.6. Let the assumptions of Theorems 4.3 and 4.5 hold. Let {μ_j^h(s)} be the eigenvalues, counted by multiplicity, of I − F_x^h(x(s), s). Let ρ > 0 be given. There are h_0 and k such that if h ≤ h_0 then |μ_j^h(s)| ≤ ρ for all j > k and s ∈ [s_a, s_b].
THEOREM 4.7. Let the assumptions of Theorems 4.3 and 4.5 hold. Let {μ_j^h(x, s)} be the eigenvalues, counted by multiplicity, of I − F_x^h(x, s). Let ρ > 0 be given. There are h_0, δ_2, and k such that if h ≤ h_0 and (x, s) ∈ N(δ_2) then |μ_j^h(x, s)| ≤ ρ for all j > k.
Before we state our r-superlinear convergence theorem we must set some more notation. We write the equation for the Newton step for the corrector equation as
   F_x^h(x, s) d = −F_h(x, s).
We let d_k^h denote the kth GMRES iteration and r_k^h = F_h(x, s) + F_x^h(x, s) d_k^h the kth GMRES residual. As is standard in nonlinear equations, d_0^h = 0, so r_0^h = F_h(x, s). The proof of Theorem 4.8 is similar to that of the similar results in [4].
THEOREM 4.8. Let Assumptions 1.1 and 4.1 hold. Let [s_a, s_b] be such that {(x(s), s) : s ∈ [s_a, s_b]} is a single smooth arc and F_x is nonsingular on it. Then there are δ_3, h_0 > 0 and a continuous function M on (0, 1) such that for all ρ ∈ (0, 1), (x, s) ∈ N(δ_3), and h ≤ h_0,
   ‖r_k^h‖ ≤ M(ρ) ρ^k ‖r_0^h‖.
Theorem 4.8 states that any desired rate of linear convergence can be obtained in a mesh independent way and hence, for (x, s) fixed, the convergence is r-superlinear in a mesh independent way.
4.2. Simple Bifurcation. Now consider a path Γ̄_0 = {(x(s), s)} which has a single simple bifurcation at (x(s_c), s_c), and let Γ̄_1 be the branch of solutions that intersects Γ̄_0 at (x(s_c), s_c). We will denote solutions on Γ̄_1 by y and the arclength parameter on Γ̄_1 by t. We let (y(t_c), t_c) be the bifurcation point on Γ̄_1, so that y(t_c) = x(s_c). For any δ > 0, the results in § 4.1 hold for the pieces of the paths obtained by removing a δ-neighborhood of the bifurcation point; we define the corresponding pieces of Γ̄_0^h and Γ̄_1^h in a similar way. Since x_h → x by the results in the previous section, simple bifurcation from Γ̄_0^h can only arise for s in a small interval about s_c.
In this section we describe how the prediction of the bifurcation point, the conditioning of the
generalized eigenproblem, the solution of that eigenproblem, and the accurate tracking of the other
branch depend on h. Since perturbation of G, even in finite dimension, can change the structure
of Γ̄_0 from two intersecting arcs to two disconnected arcs [18], [19], we cannot show that the bifurcation will be preserved; however, if s_c is far enough away from s_a and s_b, the other path
will be detected even if it does not correspond to a bifurcation for the finite dimensional problem.
For h sufficiently small, F_x^h(x_h(s_a), s_a) and F_x^h(x_h(s_b), s_b) will be nonsingular and we can consider the eigenvalue problems
(4.11)   F_x^h(x_h(s_a), s_a) w = σ_h F_x^h(x_h(s_b), s_b) w,
where the largest eigenvalue in magnitude is sought. If (4.11) is solved via the Arnoldi method, the matrix-vector products of F_x^h(x_h(s_b), s_b)^{−1} F_x^h(x_h(s_a), s_a) with a vector will each require a product of F_x^h(x_h(s_b), s_b)^{−1} with a vector, i.e., a linear solve. If this solve is performed with GMRES, then Theorem 4.8 implies that the number of iterations required to approximate that matrix-vector product can be bounded independently of h, x, s.
Our assumptions imply, if s_b − s_a is sufficiently small, that σ = σ_0 is a simple eigenvalue with geometric multiplicity one. Since the operators in (4.11) differ from the identity by a collectively compact and strongly convergent sequence, we can apply Theorem 4.5 to conclude that for h sufficiently small, σ_h is a simple eigenvalue with geometric multiplicity one as well. Moreover, σ_h will be well separated from the next largest eigenvalue in magnitude, hence the conditioning of the eigenvalue problem is independent of h.
5. Numerical Results. In the example presented in this section we use the secant normalization equation (2.6), in which two previously computed solutions on the path appear. We used a fixed step in s for path following.
We apply the secant predictor (2.3) to generate the initial iterate. For the corrector, we choose a forward-difference Newton-GMRES algorithm [15]. The outer iteration that generates the sequence {x_k} terminates when the nonlinear residual norm satisfies a combined relative-absolute tolerance. To avoid oversolving on the linear equation for the Newton step z_k, the forcing terms η_k were adjusted with a method from [8]. We use the l_2 norm and scale the norm by a factor of 1/N for differential and integral equations so that the results are independent of the computational mesh.
The generalized eigenvalue problem (3.3) that characterizes the singularity prediction and branch switching need not be solved each time a new point on the path is computed (that would be too frequent). In the example considered in this section, we solved the eigenvalue problem to make a prediction of bifurcation after a step of Δ_eig in s had been taken. We used a simple version of the Arnoldi method [2], based on modified Gram-Schmidt orthogonalization, to solve (3.3) with reorthogonalization at each iteration. Since the Jacobian of the inflated system is usually neither symmetric nor positive definite, we treat the generalized eigenvalue problem as a linear eigenvalue problem and solve the linear system involved explicitly with preconditioned GMRES.
After j steps the Arnoldi method produces
   A Q_j = Q_j H_j + h_{j+1,j} q_{j+1} e_j^T,
where H_j is an upper Hessenberg matrix with the h_{ij}'s as its nonzero entries and Q_j is orthogonal with the q_j's as columns. If (θ, y) is an eigenpair of H_j and w = Q_j y, then
   ‖A w − θ w‖ = |h_{j+1,j}| |e_j^T y|
provides a computable error bound. The Ritz pair (θ, w) is used to approximate the eigenpair of the original problem. We terminated the Arnoldi iteration when this bound fell below 10^{-5}. The prediction λ̄ is accepted when λ̄ lies between λ_a and λ_b, or is sufficiently close to that interval. We found this strategy sufficient for the example here and were able to use (3.10) to move onto the new branch. Of course, the value of Δ_eig is problem dependent (as it is for other methods), as is the criterion for accepting a prediction.
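A hedged sketch of the prediction step: the dominant eigenvalue of A(s_b)^{-1} A(s_a) is computed with an Arnoldi iteration (here ARPACK via scipy.sparse.linalg.eigs), applying A(s_b)^{-1} matrix-free through GMRES, and the predicted parameter value follows from relations (3.3)-(3.5) as reconstructed above. The matrix arguments, tolerances, and function name are assumptions of this example, and the GMRES keyword rtol is named tol in older SciPy releases.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres, eigs

def predict_singularity(Aa, Ab, sa, sb):
    n = Aa.shape[0]
    def matvec(v):
        w, _ = gmres(Ab, Aa @ v, rtol=1e-8)    # inner solve with A(s_b)
        return w
    op = LinearOperator((n, n), matvec=matvec, dtype=float)
    vals, _ = eigs(op, k=1, which='LM')        # largest-magnitude Ritz value
    sigma = vals[0].real                       # assumed real, as in the text
    return (sigma * sb - sa) / (sigma - 1.0)   # invert sigma = (s-sa)/(s-sb)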
As an example we consider the following two-point boundary value problem [14], with the solution prescribed at the boundaries. A uniform mesh of spacing h is used to define the grid points {t_j}_{j=1}^N, and a second-order finite difference discretization leads to a discrete nonlinear system of order N.
FIG. 5.1. Solution u(1) versus λ.
Starting from the trivial solution, the algorithm traces the solution path, locates bifurcation points, switches branches, and captures the secondary solution curves. Indeed, there are in total two bifurcation points at λ ≈ ±81 and eight turning points at λ ≈ ±11, ±110, ±336 in the region of interest. The primary solution branch represents the symmetric solutions and the secondary solution branch represents nonsymmetric periodic solutions bifurcating from the primary branch. Figure 5.1 plots the solution u(1) versus λ, showing the folds and bifurcations on the curve.
In Table 5.1 we report the bifurcation predictions along the solution path going toward λ = −81. One can see that (3.3) is a more accurate predictor than (3.6). Tables 5.2 and 5.3 illustrate the mesh independence of the linear and nonlinear iterations at various points on the primary (5.2) and secondary (5.3) branches. The iteration statistics remain virtually unchanged as the mesh is refined.
Table 5.4 lists the total number of preconditioned GMRES iterations and Arnoldi iterations corresponding to three representative prediction intervals. In Table 5.4 we show how both the GMRES iterations needed to approximate the operator-vector products (or the products with G_u in the case where we predict a turning point) and the overall number of Arnoldi iterations are independent of the mesh. Each Arnoldi step requires about 12-14 GMRES iterations. This is similar to the numbers listed in Tables 5.2 and 5.3. The residual in the table is the Arnoldi residual when the iteration terminates.
Table 5.1. Prediction of the bifurcation point along the primary branch going toward λ = −81.
λ_a λ_b prediction (3.3) prediction (3.6)
6.0998e+00 1.0574e+01 1.2614e+02 -9.3188e+03
8.4208e+00 1.1361e+00 -2.2855e+01 -3.5346e+01
1.1361e+00 -7.1699e+00 -3.6599e+01 -4.7426e+01
Table 5.2. Total number of Newton and preconditioned GMRES iterations on the primary branch.
problem size λ value Newton P-GMRES
Table 5.3. Total number of Newton and preconditioned GMRES iterations at the switching point and on the secondary branch.
problem size λ value Newton P-GMRES
switching 7 21
switching 7 22
switching 6 20
Table 5.4. Total number of preconditioned GMRES and Arnoldi iterations required at different sections on the path when a prediction procedure is performed.
path N λ_a λ_b P-GMRES Arnoldi residual
turning 64 7.0814e+00 1.0632e+01 53 5 2.9490e-02
point 128 6.0998e+00 1.0574e+01 50 5 1.9156e-05
near-by 256 9.1379e+00 1.0883e+00
regular 64 -3.1304e+01 -4.0215e+01
point 128 -2.4500e+01 -3.3295e+01 69 5 3.3738e-05
bifurcation
point 128 -6.8734e+01 -7.7621e+01 56 4 1.3263e-06
detected 256 -7.4823e+01 -8.3605e+01 62 4 1.3251e-06
Acknowledgments. The authors wish to thank Gene Allgower for making us aware of reference [22], Klaus Böhmer for providing us with a copy, and Alastair Spence for many other pointers to the literature. We are also grateful to Dan Sorensen for his counsel on the solution of eigenvalue problems.
--R
Collectively Compact Operator Approximation Theory
The principle of minimized iterations in the solution of the matrix eigenvalue problem
A survey of numerical methods for Fredholm integral equations of the second kind
GMRES and the minimal polynomial
Convergence estimates for solution of integral equations with GMRES
Lecture Notes on Numerical Analysis of Bifurcation Problems.
Choosing the forcing terms in an inexact Newton method
WANG, Nonequivalence deflation for the solution of matrix latent value problems
A new algorithm for numerical path following applied to an example from hydrodynamic flow
Functional Analysis
Constructive methods for bifurcation and nonlinear eigenvalue problems
Lectures on Numerical Methods in Bifurcation Theory
Iterative methods for linear and nonlinear equations
GMRES and integral operators
Iterative Solution of Nonlinear Equations in Several Variables
Column buckling-an elementary example of bifurcation
Algorithms for the nonlinear eigenvalue problem
GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems
Anwendung von Krylov-Verfahren auf Verzweigungs- und Fortsetzungsprobleme
An adaption of Krylov subspace methods to path following problems
Numerical path following and eigenvalue criteria for branch switching
--TR | fold point;bifurcation;arnoldi method;singularity;matrix-free method;GMRES;path following;collective compactness;mesh independence |
351540 | Robustness and Scalability of Algebraic Multigrid. | Algebraic multigrid (AMG) is currently undergoing a resurgence in popularity, due in part to the dramatic increase in the need to solve physical problems posed on very large, unstructured grids. While AMG has proved its usefulness on various problem types, it is not commonly understood how wide a range of applicability the method has. In this study, we demonstrate that range of applicability, while describing some of the recent advances in AMG technology. Moreover, in light of the imperatives of modern computer environments, we also examine AMG in terms of algorithmic scalability. Finally, we show some of the situations in which standard AMG does not work well and indicate the current directions taken by AMG researchers to alleviate these difficulties. | Introduction
Algebraic multigrid (AMG) was first introduced in the early
1980's [11, 8, 10, 12], and immediately attracted substantial interest [32, 28, 30, 29].
Research continued at a modest pace into the late 1980's and early 1990's [18, 14, 21,
25, 20, 26, 22]. Recently, however, there has been a major resurgence of interest in the
field, for "classical" AMG as defined in [29], as well as for a host of other algebraic-
type multilevel methods [3, 16, 34, 6, 2, 4, 5, 15, 33, 17, 35, 36, 37]. Largely, this
resurgence in AMG research is due to the need to solve increasingly larger systems,
with hundreds of millions or billions of unknowns, on unstructured grids. The size
of these problems dictates the use of large-scale parallel processing, which in turn
demands algorithms that scale well as problem size increases. Two different types of
scalability are important. Implementation scalability requires that a single iteration
be scalable on a parallel computer. Less commonly discussed is algorithmic scalability,
which requires that the computational work per iteration be a linear function of the
problem size and that the convergence factor per iteration be bounded below 1 with
bound independent of problem size. This type of scalability is a property of the
algorithm, independent of parallelism, but is a necessary condition before a scalable
implementation can be attained.
Multigrid methods are well known to be scalable (both types) for elliptic problems
on regular grids. However, many modern problems involve extremely complex
geometries, making structured geometric grids extremely difficult, if not impossible,
to use. Application code designers are turning in increasing numbers to very large
unstructured grids, and AMG is seen by many as one of the most promising methods
for solving the large-scale problems that arise in this context.
This study has four components. First, we examine the performance of "classical"
AMG on a variety of problems having regular structure, with the intent of determining
its robustness. Second, we examine the performance of AMG on the same suite of
problems, but now with unstructured grids and/or irregular domains. Third, we
study the algorithmic scalability of AMG by examining its performance on several of
* Center for Applied Scientific Computing (CASC), Lawrence Livermore National Laboratory, Livermore, CA. Email: {cleary, rfalgout, vhenson, jjones}@llnl.gov
† Department of Applied Mathematics, University of Colorado, Boulder, CO. Email: {tmanteuf, stevem}@boulder.colorado.edu
‡ USS Florida (SSBN-728), Naval Submarine Base, Silverdale, WA. Email: JerryTrish@aol.com
§ Front Range Scientific, Boulder, CO. Email: jruge@sobolev.Colorado.EDU
the problems using grids of increasing sizes. Finally, we introduce a new method for
computing interpolation weights, and we show that in certain troublesome cases it
can significantly improve AMG performance.
Our study differs from previous reports on the performance of AMG (e.g., [29, 30])
primarily by our examination of algorithmic scalability, our emphasis on unstructured
grids, and the introduction of a new algorithm for computing interpolation weights.
In Section 2, a description of some details of the AMG algorithm is given to provide
an understanding of the results and later discussion. In Section 3, we present results
of AMG applied to a range of symmetric scalar problems, using finite element discretizations
on structured and unstructured 2D and 3D meshes. AMG is also tested
on nonsymmetric problems, on both structured and unstructured meshes, and the results
are presented in Section 4. A version of AMG designed for systems of equations
is tested, with the focus on problems in elasticity. Results are discussed in Section
5. In Section 6, we introduce and report on tests of a new method for computing
interpolation weights. We conclude with some remarks in Section 7.
2. The Scalar AMG Algorithm. We begin by outlining the basic principles
and techniques that comprise AMG. Detailed explanations may be found in [29].
Consider a problem of the form
(1)   Au = f,
where A is an n × n matrix with entries a_ij. For convenience, the indices are identified with grid points, so that u_i denotes the value of u at point i, and the grid is denoted by Ω = {1, 2, ..., n}. In any multigrid method, the central idea is that error
e not eliminated by relaxation must be removed by coarse-grid correction. Applied to
elliptic problems, for example, simple relaxations (Jacobi, Gauss-Seidel) reduce high
frequency error components e-ciently, but are very slow at removing smooth compo-
nents. However, the smooth error that remains after relaxation can be approximated
accurately on a coarser grid. This is done by solving the residual equation
on a coarser grid, then interpolating the error back to the ne grid and using it to
correct the ne-grid approximation. The coarse-grid problem itself is solved by a recursive
application of this method. One iteration of this process, proceeding through
all levels, is known as a multigrid cycle. In geometric multigrid, standard uniform
coarsening and linear interpolation are often used, so the main design task is to choose
a relaxation scheme that reduces errors the coarsening process cannot approximate.
One purpose of AMG is to free the solver from dependence on geometry, so AMG
instead fixes relaxation (normally Gauss-Seidel), and its main task is to determine a
coarsening process that approximates error that this relaxation cannot reduce.
An underlying assumption in AMG is that smooth error is characterized by small residuals, that is, Ae ≈ 0, which is the basis for choosing coarse grids and defining interpolation weights. For simplicity of discussion here, we assume that A is a symmetric positive-definite M-matrix, with a_ii > 0, a_ij ≤ 0 for j ≠ i, and Σ_j a_ij ≥ 0. This assumption is made for convenience; AMG will frequently work well on matrices that are not M-matrices. To define any multigrid method, several components are required. Using superscripts to indicate level number, where 1 denotes the finest level so that A^1 = A and Ω^1 = Ω, the components that AMG needs are as follows:
1. "Grids" Ω^1 ⊃ Ω^2 ⊃ ... ⊃ Ω^M.
2. Grid operators A^1, A^2, ..., A^M.
3. Grid transfer operators:
   Interpolation I_{k+1}^k, k = 1, 2, ..., M − 1,
   Restriction I_k^{k+1}, k = 1, 2, ..., M − 1.
4. Relaxation scheme for each level.
Once these components are defined, the recursively defined cycle is as follows:
Algorithm: MV^k(u^k, f^k).
If k = M, solve A^M u^M = f^M. Otherwise:
   Relax ν_1 times on A^k u^k = f^k.
   Perform coarse grid correction:
      Set u^{k+1} = 0, f^{k+1} = I_k^{k+1}(f^k − A^k u^k).
      "Solve" on level k + 1 with MV^{k+1}(u^{k+1}, f^{k+1}).
      Correct the solution by u^k ← u^k + I_{k+1}^k u^{k+1}.
   Relax ν_2 times on A^k u^k = f^k.
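A compact sketch of this cycle for an already built hierarchy, written in Python with scipy.sparse; the lists A, P (interpolation), and R (restriction) are assumed to come from a separate setup phase, and nu1 = nu2 = 1 mirrors the (1,1) cycles used later in the paper.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gauss_seidel(A, u, f, sweeps=1):
    L = sp.tril(A, format='csr')     # lower triangle including the diagonal
    for _ in range(sweeps):
        u += spla.spsolve_triangular(L, f - A @ u, lower=True)
    return u

def vcycle(k, u, f, A, P, R, nu1=1, nu2=1):
    if k == len(A) - 1:              # coarsest level: direct solve
        return spla.spsolve(A[k].tocsc(), f)
    u = gauss_seidel(A[k], u, f, nu1)
    r = R[k] @ (f - A[k] @ u)        # restrict the residual
    e = vcycle(k + 1, np.zeros_like(r), r, A, P, R, nu1, nu2)
    u = u + P[k] @ e                 # coarse-grid correction
    return gauss_seidel(A[k], u, f, nu2)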
For this cycle to work efficiently, relaxation and coarse-grid correction must work together to effectively reduce all error components. This gives two principles that guide the choice of the components:
P1: Error components not efficiently reduced by relaxation must be well approximated by the range of interpolation.
P2: The coarse-grid problem must provide a good approximation to fine-grid error in the range of interpolation.
Each of these affects a different set of components: given a relaxation scheme, P1 determines the coarse grids and interpolation, while P2 affects restriction and the coarse grid operators. In order to satisfy P1, AMG takes an algebraic approach: relaxation is fixed, and the coarse grid and interpolation are automatically chosen so that the range of the interpolation operator accurately approximates slowly diminishing error components (which may not always appear to be "smooth" in the usual sense). P2 is satisfied by defining restriction and the coarse-grid operator by the Galerkin formulation:
(2)   I_k^{k+1} = (I_{k+1}^k)^T   and   A^{k+1} = I_k^{k+1} A^k I_{k+1}^k.
When A is symmetric positive definite, this ensures that the correction from the exact solution of the coarse-grid problem is the best approximation in the range of interpolation [23], where "best" is meant in the A-norm: ‖v‖_A ≡ ⟨Av, v⟩^{1/2}.
The choice of components in AMG is done in a separate preprocessing step:
1. Set k = 1.
2. Partition Ω^k into disjoint sets C^k and F^k.
   (a) Set Ω^{k+1} = C^k.
   (b) Define interpolation I_{k+1}^k.
3. Set I_k^{k+1} = (I_{k+1}^k)^T and A^{k+1} = I_k^{k+1} A^k I_{k+1}^k.
4. If Ω^{k+1} is small enough, set M = k + 1; otherwise, set k = k + 1 and go to step 2.
Step 2 is the core of the AMG setup process. Since the focus is on coarsening a particular level k, such superscripts are omitted here and c and f are substituted for k + 1 and k where necessary to avoid confusion. The goal of the setup phase is to choose the set C of coarse-grid points and, for each fine-grid point i ∈ F, a small set C_i ⊂ C of interpolating points. Interpolation is then of the form:
(3)   (I_c^f u^c)_i = u_i^c if i ∈ C,   (I_c^f u^c)_i = Σ_{j∈C_i} w_ij u_j^c if i ∈ F.
2.1. Defining Interpolation Weights. To define the interpolation weights w_ij, recall that slow convergence is equivalent to small residuals, Ae ≈ 0. Thus, we focus on errors satisfying
(4)   a_ii e_i ≈ − Σ_{j≠i} a_ij e_j.
Now, for any a_ij that is relatively small, we could substitute e_i for e_j in (4) and this approximate relation would still hold. This motivates the definition of the set of dependencies of a point i, denoted by S_i, which consists of the set of points j for which a_ij is large in some sense. Hence, i depends on such j because, to satisfy the ith equation, the value of u_i is affected more by the value of u_j than by other variables. The definition used in AMG is
(5)   S_i = { j ≠ i : −a_ij ≥ θ max_{k≠i} (−a_ik) },
with θ typically set to be 0.25. We also define the set S_i^T = { j : i ∈ S_j }, that is, the set of points j that depend on point i, and we say that S_i^T is the set of influences of point i. Note: our terminology here differs from the classical use in [29], which refers to i as being strongly connected to or strongly dependent on j if j ∈ S_i, and which uses no specific terminology for S_i^T.
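A short sketch of the dependence test (5) for a CSR matrix, assuming the M-matrix sign convention used in the text; the function name and the default theta = 0.25 follow the discussion above.

import numpy as np

def dependencies(A_csr, theta=0.25):
    # S[i] is the set of points j that point i depends on, per definition (5).
    S = []
    for i in range(A_csr.shape[0]):
        lo, hi = A_csr.indptr[i], A_csr.indptr[i + 1]
        cols, vals = A_csr.indices[lo:hi], A_csr.data[lo:hi]
        off = cols != i
        if not np.any(off):
            S.append(set())
            continue
        cutoff = theta * np.max(-vals[off])
        S.append({int(j) for j, a in zip(cols[off], vals[off]) if -a >= cutoff})
    return S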
A basic premise of AMG is that relaxation smoothes the error in the direction of influence. Hence, we may select C_i = S_i ∩ C as the set of interpolation points for i, and adhere to the following criterion while choosing C and F:
P3: For each i ∈ F, each j ∈ S_i is either in C or S_j ∩ C_i ≠ ∅.
That is, if i is a fine point, then the points influencing i must either be coarse points or must themselves depend on the coarse points used to interpolate u_i. This allows
approximations necessary to define interpolation. For i ∈ F, let D_i^s = S_i ∩ F denote the strongly influencing F-points and D_i^w the remaining (weak) neighbors of i, so that (4) can be rewritten as:
(6)   a_ii e_i ≈ − Σ_{j∈C_i} a_ij e_j − Σ_{j∈D_i^s} a_ij e_j − Σ_{j∈D_i^w} a_ij e_j.
AMG interpolation is defined by making the following approximation in (6):
(7)   e_j ≈ (Σ_{k∈C_i} a_jk e_k)/(Σ_{k∈C_i} a_jk) for j ∈ D_i^s,   e_j ≈ e_i otherwise.
Substituting this into (6) and solving for e_i gives the desired interpolation weights for point i:
(8)   w_ij = − ( a_ij + Σ_{m∈D_i^s} a_im a_mj / Σ_{k∈C_i} a_mk ) / ( a_ii + Σ_{n∈D_i^w} a_in ).
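As a worked illustration (not part of the original text), consider the 1D Poisson stencil with a_ii = 2 and a_{i,i±1} = −1, with the even-numbered points taken as C-points, so that for an F-point i we have C_i = {i − 1, i + 1} and D_i^s = D_i^w = ∅. Formula (8) then reduces to w_{i,i±1} = −a_{i,i±1}/a_ii = 1/2, i.e., AMG reproduces the linear interpolation used by geometric multigrid in this case.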
2.2. Selecting the Coarse Grid. The coarse grid is chosen to satisfy the criterion above, while attempting to control its size. We employ the two-stage process described in [29], modified slightly to reflect our modified terminology. The grid is first "colored", providing a tentative C/F choice. Essentially, a point with the largest number of influences ("influence count") is colored as a C point. The points depending on this C point are colored as F points. Other points influencing these F points are more likely to be useful as C points, so their influence count is increased. The process is repeated until all points are either C or F points.
Details of the initial C/F choice are as follows:
   Set C = ∅, F = ∅, and U = Ω^k. For each i ∈ U set λ_i = |S_i^T| (the number of points depending on the point i).
   Repeat until U = ∅:
      Pick an i ∈ U with maximal λ_i. Set C ← C ∪ {i} and U ← U \ {i}.
      For each j ∈ S_i^T ∩ U (points depending on {i}) do:
         Set F ← F ∪ {j} and U ← U \ {j}.
         For all k ∈ S_j ∩ U set λ_k ← λ_k + 1 (increment the counts for points that influence the new F-points).
      For each j ∈ S_i ∩ U set λ_j ← λ_j − 1.
Next, a second pass is made, in which some F points may be recolored as C points to ensure that P3 is satisfied. In this pass, each F-point i is examined. The coarse interpolatory set C_i = S_i ∩ C is defined. Then, if i depends on another F-point, j,
the points in
uencing j are scanned, to see if any of them are in C i . If this is not the
case then j is tentatively converted into a C-point and added to C i . The dependencies
of i are then examined anew. If all F -points depending on i now depend on a point
in C i then j is permanently made a C-point and the algorithm proceeds to the next
F -point and repeats. If, however, the algorithm nds another F -point dependent on
i that is not dependent on a point in C i then i itself is made into a C-point and j
returned to the pool of F -points. This procedure is followed to minimize the number
of F -points that are converted into C-points.
We make a brief comment about the computational and storage costs of the
setup phase. Unlike geometric multigrid, these costs cannot be predicted precisely.
Instead, computational cost must be estimated based on the average "stencil size"
over all grids, the average number of interpolation points per F-point, the ratio of the
total number of gridpoints on all grids to the number of points on the fine grid (grid
complexity), and the ratio of the number of nonzero entries in all matrices to that of
the fine-grid matrix (operator complexity). While a detailed analysis is beyond the
scope of this work [29], a good rule of thumb is that the computational effort for the
setup phase is typically equivalent to between four and ten V-cycles.
3. Results for Symmetric Problems. In this section, results for AMG applied
to symmetric scalar problems are presented. Initially, constant-coefficient diffusion
problems in 2D are tested as a baseline for comparison as we begin to introduce
complications, including unstructured meshes, irregular domains, and anisotropic and
discontinuous coefficients. Results for 3D problems follow. All problems are run using
the same AMG solver with fixed parameters. On many problems, it is possible to
improve our results by tuning some of the input parameters (there are many), but
the purpose here is to show AMG's basic behavior and robustness over a range of
problems.
The primary indicator of the speed of the algorithm is the asymptotic convergence
factor per cycle. This is determined by applying 20 cycles to the homogeneous problem,
starting with a random initial guess, then measuring the reduction in the norm
of the residual from one cycle to the next (we use the homogeneous problem to avoid
contamination by machine representation). Generally, this ratio starts out very small
for the first few cycles, then increases to some asymptotic value after 5-10 cycles,
when the most slowly converging components become dominant. This asymptotic
value is also a good indicator of the actual error reduction from one cycle to the next.
We use the 2-norm of the residual, although it is easy to show that the asymptotic
convergence factor is just the spectral radius of the AMG V-cycle iteration operator,
and hence is independent of the choice of norm.
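The measurement just described can be summarized by the following sketch (Python; `vcycle` and `residual_norm` stand for a user-supplied V-cycle and residual evaluation for the homogeneous problem, and are not part of the paper's solver).

    import numpy as np

    def asymptotic_convergence_factor(vcycle, residual_norm, x0, num_cycles=20):
        """Apply num_cycles V-cycles to A e = 0 starting from the random guess
        x0 and return the last per-cycle residual reduction ratio."""
        x = x0.copy()
        prev = residual_norm(x)                 # ||A x|| for the homogeneous problem
        ratios = []
        for _ in range(num_cycles):
            x = vcycle(x)
            cur = residual_norm(x)
            ratios.append(cur / prev)
            prev = cur
        return ratios[-1]                       # approximates the asymptotic factor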
The times given are for the setup and a single (1,1) V-cycle. Setup time is what
it takes to choose the coarser grids, define interpolation, and compute the coarse grid
matrices. Cycle time is for one cycle, not the full solution time. Three machines are
used in this study. The majority of the smaller tests are performed on a Pentium
166MHz PC, although some are performed on a Sun Sparc Ultra 1. For the larger
problems that demonstrate scalability, we use a DEC Alpha. For this reason, timings
should be compared only within individual problems. Additionally, timings for the
smallest problems can have a high relative error, so the larger tests should give a
better picture of performance. Grid complexity is defined as (Sum_k N_k) / N_1, where N_k is
the number of grid points on level k. This gives an idea of how quickly the grids
are reduced in size. For comparison, in standard multigrid, the number of points is
reduced by a factor of 4 in 2D and 8 in 3D, yielding grid complexities of 4/3 and
8/7, respectively. AMG tends to coarsen more slowly. Operator complexity, which is
a better indicator of the work per cycle, is defined as (Sum_k N_k s_k) / (N_1 s_1), where s_k is the
average number of non-zero entries per row (or "stencil size") on level k. Thus, the
operator complexity is the ratio of the total number of nonzero matrix entries on all
levels to those on the finest level. Since relaxation work is proportional to the number
of matrix entries, this gives a good idea of the total amount of work in relaxation
relative to relaxation work on the finest grid, and also of the total storage needed
relative to that required for the fine grid matrix. In geometric multigrid, the grid and
operator complexities are equal, but in AMG, operator complexity is usually higher
since average stencil sizes tend to grow somewhat on coarser levels. Note that the
convergence factors and complexities are entirely independent of the specific machine
on which a test is performed.
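Both complexities are easy to compute from the level hierarchy; the following sketch assumes the level matrices are available as SciPy sparse matrices, with the fine-grid matrix first.

    def grid_and_operator_complexity(matrices):
        """matrices = [A_1, A_2, ...]: operators on all levels, A_1 finest."""
        grid = sum(A.shape[0] for A in matrices) / matrices[0].shape[0]
        operator = sum(A.nnz for A in matrices) / matrices[0].nnz
        return grid, operator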
In the tests reported here, the focus is on finite element discretizations of

    - div ( D grad u ) = f,   D = [ d_11(x, y)  d_12(x, y) ]
                                  [ d_21(x, y)  d_22(x, y) ].

Several different meshes and diffusion coefficients D are used.

3.1. Regular domains, structured and unstructured grids. The first five
problems are 2D Poisson equations, with d_11 = d_22 = 1 and d_12 = d_21 = 0.
Different domains and meshes are used to demonstrate the behavior of AMG with
simple equations.
We begin with the simplest 2D model problem. The success of AMG on the
regular-grid Poisson problem is well-documented [30, 28, 29], so our purpose here is
more to assess its scalability.

Problem 1 This is a simple 5-point Laplacian operator with homogeneous Dirichlet
boundary conditions on the unit square. The experiment is run for uniform meshes
with n x n interior grid points, yielding mesh sizes ranging up to N = 490000
(the largest runs use N = 90000, 250000, and 490000).

Fig. 1. Top Left: Convergence factors, as a function of number of mesh points, for Problem 1,
the uniform-mesh 5-point Laplacian. Top Right: Log-log plots of setup times (circles) and cycle times
(triangles) for the uniform-mesh 5-point Laplacian. The dotted line, for reference, shows perfectly
linear scaling. Bottom: Operator (circles) and grid (triangle) complexities for the uniform-mesh
5-point Laplacian.
Results for Problem 1 are displayed in Figure 1. The convergence factor (per cycle)
is very stable at approximately 0.04 for all problem sizes. Both the setup and cycle
time are very nearly linear in N (compare with the dotted line depicting a perfectly
linear hypothetical data set). Here, setup time averages roughly the time of 6 cycles.
As noted before, the operator complexities are higher than the corresponding grid
complexities, but both appear to be unaffected by problem size. These data indicate
that AMG (applied to the uniform-mesh Laplacian) is algorithmically scalable: the
computational work is O(N) per cycle and the convergence factor is O(1) per cycle.
An important component of our study is to determine to what extent this algorithmic
scalability is retained as we increase problem complexity.
Problem 2 This is the same equation as Problem 1, discretized
on an unstructured triangular mesh. These meshes are obtained from uniform triangulations
by randomly choosing 15-20% of the nodes and "collapsing" them to neighboring
nodes, then smoothing the resulting mesh. The resulting operators might be
represented by M-matrices in some cases, but this is not generally the case. We use
meshes of several sizes; a typical example is shown at top left in Figure 2.

Results of the experiments are displayed in Figure 2. On the unstructured meshes,
convergence factors tend to show some dependence on mesh size, growing to around
0.35 on the finest grid. It should be noted, however, that these grids tend to be less
structured than many found in practice, and no care was taken to ensure a "good"
mesh; the meshes may have differing characteristics (such as aspect ratios), as there is
a large degree of randomness in their construction. Complexities are also higher with
the unstructured meshes, and the setup time increases correspondingly. The main
point here is that AMG can deal effectively with unstructured meshes without too
much degradation in convergence over the uniform case.
3.2. Irregular domains. We continue to use the Laplacian, but now with irregular
domains. Since our emphasis here is the effects of this irregularity, we restrict our
tests to two representative mesh sizes that give just a snapshot of algorithm scalability.

Problem 3 The computational domain is an unstructured triangular discretization
of a torus (annulus) with a hole of radius 0.05. Different mesh sizes were used, resulting in
grids of two sizes. Dirichlet conditions around the hole are
imposed, with Neumann conditions on the outer boundary.

Problem 4 The domain for this problem is shown in Figure 3. The boundary conditions
are Neumann except that a Dirichlet condition is imposed around the small
hole on the right. The meshes are uniform, with two mesh spacings, resulting in
meshes of two sizes. The domain does not easily admit much coarser meshes.

Problem 5 The domain for this problem is shown on the bottom in Figure 3. Dirichlet
conditions are imposed on the exterior boundary, and Neumann conditions are on
the interior boundaries. A triangular unstructured mesh is used.

Results for Problems 3-5 are given in Table 1. Among these problems, Problem 3
has the simplest domain, but the least structured mesh and the slowest convergence.
This indicates that domain configuration generally has little effect on AMG behavior,
while the structure (and perhaps the quality) of the mesh is more important.
Table 1. Results for Problems 3-5. Poisson problem on unstructured meshes, irregular domains.

    Problem | N | Convergence factor/cycle | Setup (sec) | Cycle (sec) | Grid complexity | Operator complexity
Fig. 2. Top Left: A typical unstructured grid for Problem 2, obtained by randomly deleting
15% of the nodes in a regular grid and smoothing the result. Top Right: Convergence factors, as
a function of number of mesh points, for the unstructured-mesh 5-point Laplacian. Bottom Left:
Log-log plots of setup times (circles) and cycle times (triangles) for the unstructured-grid 5-point
Laplacian. The dotted line, for reference, shows perfectly linear scaling. Bottom Right: Operator
(circles) and grid (triangle) complexities for the unstructured-grid 5-point Laplacian.

Fig. 3. Domain (Top Left) and typical grid (Top Right) for Problem 4. Note that the mesh
size necessary to display the triangulation is too coarse to observe the Dirichlet hole. Finer meshes
are used for the calculations. Bottom: Typical grid for Problem 5.
3.3. Isotropic diffusion. The next problem set deals with isotropic diffusion:

    - div ( d(x, y) grad u ) = f.

Discontinuous d(x, y) can cause problems for
many solution methods, including standard multigrid methods, although it is possible
to get good results either by aligning the discontinuities along coarse grid lines, or
by using operator-dependent interpolation [1]. In AMG, nothing special is required,
since it is based on operator-dependent interpolation. The problems are categorized
according to the diffusion coefficient used. The unit square is discretized on four
meshes: two structured meshes, with N = 16642 and 66049, and two unstructured
meshes, with N = 13755 and 54518. The diffusion coefficients are defined in terms of
a parameter, c, allowed to be either 10 or 1000, as follows:

Problem 6 d(x, y) is a continuous coefficient depending on c.
Problem 7 d(x, y) is a discontinuous coefficient depending on c.
Problem 8 d(x, y) = 1.0 if 0.125 <= max(|x - 0.5|, |y - 0.5|) <= 0.25,
    and d(x, y) = c otherwise.
Problem 9 d(x, y) = 1.0 if 0.125 <= ((x - 0.5)^2 + (y - 0.5)^2)^(1/2) <= 0.25,
    and d(x, y) = c otherwise.
Results for these problems are presented in Table 2, which contains observed
convergence factors and operator complexities for the various combinations of grid
size and type, diffusion coefficient function, and discontinuity jump size. The overall
results are fairly predictable. Convergence factors are fairly uniform. On the structured
meshes, they tend to grow slightly with increasing grid size. They are noticeably
larger for unstructured grids, and they appear to grow somewhat with increasing grid
size. (As noted before, comparison among unstructured grids of various sizes must
take into account that their generation involves some randomness, so they may differ
in important ways.) The convergence factor does not seem to depend significantly
on the size of the jump in the diffusion coefficient. In many cases, results were better
with the larger jump, c = 1000, than with c = 10. Indeed, AMG has been applied successfully
to problems with much larger jumps [28]; see also Problem 17. Note that there are
only minor variations in operator complexity for the different problems and different
grid sizes. The only significant effect on operator complexity appears to be whether
the grid is structured or unstructured, with the latter showing complexity increases
of about 30-40%. It should be noted, however, that even in these cases, the entire
operator hierarchy can be stored in just over three times the storage required for the
fine-grid matrix alone.
Table 2. Results for Problems 6-9. Poisson problem, variable and discontinuous coefficients.

                      Uniform mesh, size N           Unstructured mesh, size N
    Problem #   c     16642          66049           13755          54518
                      conv.  cmplxty conv.  cmplxty  conv.  cmplxty conv.  cmplxty
    6           1000  0.097  2.21    0.180  2.2      0.264  3.32    0.369  3.36
    9           1000  0.171  2.35    0.168  2.30     0.234  3.31    0.298  3.40
Problems 10-13 are designed to examine the case in which the diffusion coefficient
is discontinuous and to determine whether the "scale" of the discontinuous regions
affects performance. Accordingly, Problems 10-12 use a "checkerboard" pattern:

    d(x, y) = c if i + j is even,  d(x, y) = 1 if i + j is odd,

where i and j index the checkerboard cell containing (x, y). The three problems differ
in the scale of the checkerboard; Problem 12 uses the finest, 50 x 50, pattern.
For the last problem of this group, we have

Problem 13 d(x, y) chosen randomly at each point, with values varying by a large
factor from point to point.

Results for Problems 10-13 are displayed in Table 3. The overall trend is similar
to the results for Problems 6-9, showing convergence factors that grow slightly with
problem size and that are noticeably larger for unstructured grids.
Table 3. Results for Problems 10-13. Poisson problem, variable and discontinuous coefficients.

                      Uniform mesh, size N           Unstructured mesh, size N
    Problem #   c     16642          66049           13755          54518
                      conv.  cmplxty conv.  cmplxty  conv.  cmplxty conv.  cmplxty
The best results for the isotropic diffusion problems are obtained for Problem 6,
where the coefficient is continuous. The worst convergence factor is obtained for the
50 x 50 "checkerboard" pattern of Problem 12. Note that AMG performs well on
Problem 8, where "smooth" functions are approximately constant in the center of the
region, zero in the high-diffusion zone near the boundary, and smoothly varying in between.
Good interpolation in the low-diffusion band is essential to good convergence.
AMG appears to work quite well with discontinuous diffusion coefficients,
even when they vary randomly by a large factor from point to point, as in Problem 13.
3.4. Anisotropic diffusion. The next series of problems deals with anisotropic
diffusion, which can arise in several ways. Anisotropy can be introduced by the mesh
being refined differently in each direction, perhaps to resolve a boundary layer or
some other local phenomenon. Another case is a tensor product grid used in order to
refine some area [x_1, x_2], with the mesh size small for x in [x_1, x_2] and
large elsewhere. This maintains a logically rectangular mesh, but causes
anisotropic discretizations in different parts of the domain. This is relatively easy to
deal with in geometric multigrid, where line relaxation and/or semi-coarsening can
be used [9]. Non-aligned anisotropy, which is more difficult to handle with standard
multigrid, arises from the operator itself, such as the case of the full potential operator
in transonic flows. The performance of AMG on grid-induced (aligned) anisotropy
has been reported previously [29], so we instead focus here on non-aligned anisotropy.
Both types of anisotropy can be written in terms of the diffusion equation using the
coefficient matrix:

    D = [ cos^2(theta) + eps sin^2(theta)      (1 - eps) sin(theta) cos(theta) ]
        [ (1 - eps) sin(theta) cos(theta)      sin^2(theta) + eps cos^2(theta) ].

When theta is constant, this gives the operator d_xixi + eps d_etaeta, where xi is in the direction theta. On
a rectangular grid with mesh sizes h_x and h_y, the usual Poisson equation corresponds
to the diffusion equation with theta = 0 and eps = h_y^2 / h_x^2.
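For reference, the coefficient matrix can be generated as follows (a small Python sketch; the function name is ours).

    import numpy as np

    def diffusion_tensor(theta, eps):
        """Rotated anisotropic diffusion tensor: strength 1 in direction theta,
        strength eps orthogonal to it."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        return R @ np.diag([1.0, eps]) @ R.T
        # expands to [[c*c + eps*s*s, (1-eps)*c*s], [(1-eps)*c*s, s*s + eps*c*c]]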
Problem 14 This problem features a non-aligned anisotropic operator on the unit
square, with Dirichlet boundary conditions on two sides and Neumann conditions
on the other two sides. The cases of moderate and strong anisotropy were both examined,
with the direction theta taking several values up to pi/4. Each such combination is discretized on a uniform
square mesh (with bilinear elements) and both uniform and unstructured triangular
meshes (with linear elements). The uniform and unstructured meshes are each used
in two sizes.

Convergence factors for Problem 14, as shown in Figure 4, generally degrade with
increasing theta. This is to be expected, as it indicates lessened alignment of anisotropy
with the grid directions. The strong anisotropy case yields convergence factors as
high as 0.745. As noted above, the non-aligned case is very difficult, even for standard
multigrid, and is the subject of ongoing study. One encouraging result is that
the unstructured grid formulations are relatively insensitive to grid anisotropy, with
convergence factors that hover between 0.3 and 0.5 in nearly all cases. Overall,
these results indicate that AMG is rather robust for anisotropic problems, although
convergence factors are somewhat higher than those typically obtained with AMG on
isotropic problems.
Fig. 4. Convergence factors, plotted as a function of anisotropy direction theta. Left: Moderate
anisotropy. Right: Strong anisotropy. In each plot, the solid lines are the
uniform meshes with square discretizations, the dotted lines are uniform meshes with triangular
discretizations, and the dashed lines are the unstructured triangulations. In each case, the larger
grid size is indicated by a symbol ("o", "+", . . .).
Problem 15 We use the diffusion operator div(D grad u) of the previous problem, but with
the direction of strong diffusion everywhere tangential to circles centered at the origin.
This yields a discretization such that, on any circle
centered at the origin, there are dependencies in the tangential direction, but none in
the radial direction.

This problem is very difficult to solve by conventional methods. Using the same
meshes as in Problem 14, AMG produced the convergence factors given in Table 4.

Table 4. Results for Problem 15: Circular diffusion coefficient.

    uniform mesh (square) | uniform mesh (triangular) | unstructured mesh

The convergence factors illustrate the difficulty with this problem, which cannot be
handled easily by geometric methods, even on regular meshes. A polar-coordinate
mesh would allow block relaxation over strongly coupled points, but would suffer from
the difficulties of polar-coordinate grids (e.g., singularity at the origin) and would be
useless for more general anisotropies. While convergence of AMG here is much slower
than what we normally associate with multigrid methods, this example shows that
AMG can be useful even for extremely difficult problems.
3.5. 3D problems. Turning our attention to three dimensions, we do not expect
special difficulties here, since AMG is based on the algebraic relationships between
the variables.

Problem 16 This is a 3D Poisson problem on the unit cube. Discretization is by
trilinear finite elements on a rectangular mesh. Dirichlet boundary conditions are
imposed on part of the boundary, while Neumann conditions are imposed at the other
boundaries. Mesh line spacing and the number of mesh intervals were both varied
to produce several grids with different spacings and extents in the three coordinate
directions. The various combinations of mesh sizes used, convergence factors, and
operator complexities are shown in Table 5.
Table 5. Results for Problem 16. 3D Poisson problem, regular rectangular mesh.

    Nx | hx | Ny | hy | Nz | hz | N | Convergence factor/cycle | Operator complexity
Problem 17 This is a 3D unstructured mesh problem, generated by a code used at
Lawrence Livermore National Laboratory, for the diffusion problem - div(a(~x) grad u) =
g(~x). The domain is a segment of a sphere, extending over part of the radius and over
angles up to pi/2. The coefficient a(~x) is a large constant for r <= 0.05 and a small
constant for r > 0.05, with a step discontinuity of 1.0 x 10^26. Part of the boundary
is Dirichlet, while the remaining boundaries are surfaces of
symmetry. Discretization is by finite elements using hexahedral elements.

Figure 5 shows the locations of the nodes at the element corners for one of the problems.
Three problem sizes are given, one of them with N = 8000. Convergence
factors and operator complexities for these problems are given in Table 6.
Table 6. Results for Problem 17. 3D unstructured diffusion problem.

    N      Convergence factor/cycle   Operator complexity
    8000   0.166                      2.84
AMG apparently works quite well for 3D problems, including those with discontinuous
coefficients. The convergence factors are good in all cases. Complexity varies
significantly, with the highest values for the uniform grids, but decreasing markedly
with increasing grid anisotropy. This may be taken as further evidence that AMG
automatically takes advantage of directions of influence.

In summary, AMG performed well on this suite of symmetric scalar test problems.
Many of these problems are designed to be very difficult, often unrealistically so,
especially those with the circular anisotropic diffusion pattern and the random diffusion
coefficients. Recall that the same AMG algorithm, with no parameter tuning,
was used in all cases. There are a number of tools for increasing the efficiency of
AMG, especially on symmetric problems, that have proved useful in many cases.

Fig. 5. Nodes at element corners, 3D diffusion problem. Top Left: View of all the node
locations. Top Right: View from directly overhead, showing radial lines of nodes in the azimuthal
direction. Bottom: Distribution of nodes within a plane of constant azimuth.
One is the so-called V*-cycle [30], in which the coarse-grid corrections are multiplied by
an optimal parameter, determined by minimizing the A-norm of the corrected error.
Another, which has been successful in applying AMG to Maxwell's equations [27], is
to use an outer conjugate gradient iteration, with AMG cycling as a preconditioner.
Often, when AMG fails to perform well, the problem lies in a small number of components
that are not reduced efficiently by relaxation or coarse-grid correction, and
conjugate gradients can be very efficient in such cases. Other methods for improving
efficiency include the F-cycle [7, 31] and the full multigrid (FMG) method, whose
applicability to AMG is the subject for future research.
4. AMG Applied to Nonsymmetric Scalar Problems. Although much of
the motivation and theory for AMG is based on symmetry of the matrix, this is not
at all a requirement for good convergence behavior. Mildly nonsymmetric problems
behave essentially like their symmetric counterparts. Such cases arise when a nonsymmetric
discretization of a symmetric problem is used or when the original problem is
predominantly elliptic. An important requirement for current versions of AMG is that
point Gauss-Seidel relaxation converge, however slowly. Thus, central differencing of
first-order terms, when they dominate, cannot be used because of severe loss of diagonal
dominance. Even in these cases, successful versions of AMG can be developed
using Kaczmarz relaxation [9]. Nevertheless, we restrict ourselves here to upstream
differencing so that we can retain our use of Gauss-Seidel relaxation.

Problem 18 This is a convection-diffusion problem of the form

    - eps Laplacian(u) + cos(theta) u_x + sin(theta) u_y = f,

with Dirichlet boundary conditions. Triangular meshes are used, both structured
and unstructured (the largest unstructured mesh has N = 54518). The diffusion term
is discretized by finite elements. The convection term is discretized using upstream
differencing, that is, the integral of the convection term is computed over the triangle
and added to the equation corresponding to the node with the largest coefficient (the
node "most upstream"). Note that this can result in a matrix that has off-diagonal
entries of both signs. Two choices for the diffusion coefficient eps are employed, a weaker
convection case with eps = 0.1 and a strongly convection-dominated case; experiments
are conducted over a range of flow directions theta.
Results are presented in Figure 6. The curves in the top left graph are for the
weaker convection case: the structured grid results are displayed with solid lines, and the unstructured-grid results
are displayed with dashed lines. For each pair of curves, the curve with the marker ("o"
or "*") indicates the mesh with larger N. The convection-dominated case
is shown at the top right (structured grids) and on the bottom (unstructured grids).
In each case, the smaller N is shown with solid lines and the larger N with dashed
lines. Note that convergence is generally good and fairly uniform, particularly for the
unstructured cases. Results on the smaller uniform mesh are especially good when the
flow is aligned with the directions 0 or pi/2. This is due to the triangulation:
to obtain the uniform mesh, the domain is partitioned into squares, and then each
square is split into two triangles, with the diagonal going from the lower left to the
upper right; the "good" directions are aligned with the edges of the triangles. This
also has an effect on the quality of the discretization, and on convergence, when the
flow is in the directions 3pi/4 and 7pi/4. Here, the discretization used for the convection
term causes a rather severe loss of positivity in the off-diagonals. This is more the
fault of the discretization than of AMG. For these directions, with the smaller uniform mesh,
Fig. 6. Convergence factors, plotted as a function of flow direction, theta, for the nonsymmetric problem.
Top Left: Weaker convection case, eps = 0.1. The solid lines are the uniform meshes, and the
dashed lines are the unstructured triangulations. In each case the larger grid size is indicated by
the placement of a symbol ("o" or "*"). Top Right: Convection-dominated case, structured
grid. The solid line is the smaller N; the larger N is indicated by the dashed line. Bottom:
Convection-dominated case, unstructured grid. The solid line is the smaller N; the
larger N is indicated by the dashed line.
AMG was unable to handle the discretization produced and failed in the setup phase.
Often in multigrid applications, problems in convergence indicate problems with the
discretization, as is the case here. Note that for unstructured grids, where the flow cannot
align with (or against) the grid, convergence is generally more uniform. An interesting
point is that, in many cases, a smaller diffusion coefficient reduces the convergence
factor. This is particularly striking with the unstructured meshes. Finally, note that
there is generally not much difference between results for theta and for theta + pi, so there
is no benefit from accidental alignment of relaxation with the flow direction, and,
conversely, there is no slowing of convergence due to upstream relaxation. This is
due to the C/F ordering of relaxation. These tests show that AMG can be applied
to nonsymmetric problems. While the convergence factors in this test are generally
less than 0.2-0.25, which is certainly acceptable, some concern may be raised about
the scalability issue, since the convergence factors for the larger unstructured mesh
are noticeably greater than those for the smaller unstructured grids. It remains to be
determined whether the convergence factors continue to grow with increasing problem size,
or whether they reach an asymptotic limit.
5. AMG for Systems of Equations. The extension of AMG to "systems"
problems, where more than one function is being approximated, is not straightforward.
Many different approaches can be formulated. Consider a problem with two unknown
functions u and v of the block form

    [ A  B ] [ u ]   [ f ]
    [ C  D ] [ v ] = [ g ].

The scalar algorithm could work in special circumstances (for example, if B and C are
relatively small in some sense), but, generally, the scalar ideas of smoothness break
down. One approach would be to iterate in a block fashion on the two equations,
with two separate applications of AMG, one using A as the matrix (solving for u,
holding v fixed) and one using D (solving for v, holding u fixed), and repeating until
convergence. This is often very slow. An alternative is to use this block iteration as
a preconditioner in an outer conjugate gradient solution process.

Another fairly simple alternative is to couple the block iteration process on all
levels, that is, to coarsen separately for each function, obtaining two interpolation
operators I_u and I_v, then define a full interpolation operator of the form

    I = [ I_u   0  ]
        [  0   I_v ].

The Galerkin approach can then be used to construct the coarse grid operator,

    [ I_u^T A I_u   I_u^T B I_v ]
    [ I_v^T C I_u   I_v^T D I_v ].

Once the setup process is completed, multigrid cycles are performed as usual. We will
call this the function approach since it treats each function separately in determining
coarsening and interpolation. When u and v are defined on the same grid, it is
also possible to couple the coarse-grid choices for both, allowing for nodal relaxation,
where both unknowns are updated simultaneously at a point. Following are results
for the function approach applied to several problems in 2D and 3D elasticity.
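Before turning to the results, the following sketch illustrates the function approach for a two-unknown system (Python with SciPy sparse matrices; it only assembles the block interpolation and the Galerkin product, not the scalar coarsening that produces I_u and I_v).

    from scipy.sparse import bmat

    def function_approach_coarse_operator(A, I_u, I_v):
        """Build the block-diagonal interpolation I = diag(I_u, I_v) and the
        Galerkin coarse operator I^T A I for the fine-level block system A."""
        I = bmat([[I_u, None], [None, I_v]], format="csr")
        A_coarse = (I.T @ A @ I).tocsr()
        return I, A_coarse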
Problem 19 This problem is plane-stress elasticity: the coupled system of equations
for the displacements u and v in the x and y directions, respectively. This can
be a difficult problem for standard multigrid methods, especially when the domain is
long and thin. The problem is discretized on a rectangular grid using bilinear finite
elements, and several different problem sizes and domain configurations are used.

Problem 20 This problem is 3D elasticity: the analogous system for the displacements
u, v, and w in the three coordinate directions. The problem
is discretized on a 3D rectangular grid using trilinear finite elements. Several different
problem sizes and domain configurations are used.

In all tests, we take the Poisson ratio to be 0.3. The function approach with (1,1) V-cycles is used in
all tests. Results for Problem 19 are contained in Table 7, and for Problem 20 in Table 8.
Note that complexities are stable in 2D, with some dependence on problem size
in 3D. Convergence depends fairly heavily on the number of fixed boundaries in both
2D and 3D, with convergence degrading as the number of free boundaries increases.
Table 7. Results for 2D elasticity, Problem 19.

    # of fixed boundaries | Convergence factor | Operator complexity

Table 8. Results for 3D elasticity, Problem 20.

    # of fixed boundaries | Convergence factor | Operator complexity
6. Iterative Interpolation Weights. Occasionally, we encounter situations
where convergence of AMG is poor, yet no specific reason is apparent. Our experience
leads us to believe that the fundamental problem, in many cases, stems from the
limitation of the matrix entry a_jk to reflect the true "smoothness" between e_j and
e_k. Often the true influences between variables are not clear. One case where this
limitation is quite evident is where finite elements with extreme aspect ratios are
used, especially in cases of extreme grid anisotropy or of thin-body elasticity. As a
simple example, consider the 2D nine-point negative Laplacian based on quadrilateral
elements that are stretched in the x-direction. The stencil changes character as the
aspect ratio tends to infinity: the limiting case is no longer an M-matrix. Indeed,
even moderate aspect ratios yield off-diagonal entries of both signs. It is not immediately clear
how the neighbors to the east and west of the central point should be treated. Do
they influence the central point? Should they be in S_i? Even if they are not treated
as influences, similar questions arise about how the corner points relate to the central
point. Geometric intuition indicates they are decoupled from the center, and should
not be treated as influences. Yet, for the most common choice of alpha in (5), AMG treats
them as influences. Another difficulty arises when two F-points i and j influence each
other. Then e_j must be approximated in the second sum on the right side of (6) to
determine the weights for i, while e_i must be approximated, in (6) but with the roles
of i and j reversed, to determine the weights for j. However, since both e_i and e_j are
to be interpolated (being F-point values), it makes sense to use the interpolations to
obtain these approximations, that is, the approximations for e_j and e_i in (6) should be

    e_j ~ Sum_{k in C_j} w_jk e_k   and   e_i ~ Sum_{k in C_i} w_ik e_k,   (12)

respectively. Note that the approximations for any points in C are unchanged in these
equations.

This gives an implicit system for the interpolation weights, which is solved by
an iterative scheme with the standard weights as the initial approximation. The new interpolation
weights are then calculated in a Gauss-Seidel-like manner, using the most recently
computed weights to make the approximations in (12). Two sweeps are generally
sufficient. An important addition to the process is that, after the first sweep, the
interpolation sets are modified by removing from C_i any point for which a negative
interpolation weight is computed. The second sweep is then used to compute the final
interpolation weights.
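The scheme can be sketched as follows (Python; this is a simplified illustration of the two-sweep iteration with pruning of negative weights — the treatment of weak connections and several normalization details of the actual algorithm are omitted, and contributions to coarse points outside C_i are simply dropped).

    def iterative_weights(A, C, C_i, w, sweeps=2):
        """A: dense numpy matrix; C: set of coarse points; C_i[i]: interpolatory
        set of F-point i; w[i]: dict of current weights (initially the standard
        weights).  Returns updated weights and interpolatory sets."""
        F = [i for i in range(A.shape[0]) if i not in C]
        for sweep in range(sweeps):
            for i in F:
                rhs = {j: -A[i, j] for j in C_i[i]}        # direct couplings to C_i
                for j in F:
                    if j == i or A[i, j] == 0:
                        continue
                    for k, wjk in w[j].items():            # e_j ~ sum_k w_jk e_k
                        if k in rhs:
                            rhs[k] -= A[i, j] * wjk
                w[i] = {j: v / A[i, i] for j, v in rhs.items()}
            if sweep == 0:                                 # prune negative weights
                for i in F:
                    C_i[i] = {j for j in C_i[i] if w[i].get(j, 0.0) > 0.0}
                    w[i] = {j: w[i][j] for j in C_i[i]}
        return w, C_i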
We present results of two experiments illustrating the effectiveness, on certain
types of problems, of using this iterative weight definition scheme. Other examples
may be found in [24].

Problem 21 This operator is the "stretched quadrilateral" Laplacian mentioned
above, discretized on grids with N = 900 and N = 10000. The
stretching factor eps represents the ratio of the x-dimension to the y-dimension of the
quadrilateral. Several values of eps are used. In each case, the convergence
factor is computed for the choices alpha = 0.25 and alpha = 0.5. In the latter case, the corner
points are not treated as influences, and AMG selects a semi-coarsened coarse grid,
which is the approach geometric multigrid would take.
Table 9. Results for Problem 21.

                      Convergence Factor
                  standard weights          iterative weights
    eps    N      alpha=0.25   alpha=0.5    alpha=0.25   alpha=0.5
    100    900    0.83         0.53         0.82         0.23
    100    10000  0.93         0.55         0.93         0.28
Results are displayed in Table 9. On the smallest problem, iterative weights have
no effect. However, the convergence rate on that problem is quite good, even for
standard weights. For moderate stretching (eps < 100), the effect of iterative weighting
is to correct for misidentified influence (i.e., improvement for alpha = 0.25) and to improve
the results even for correctly identified influence (alpha = 0.5). For extreme stretching,
only the latter effect applies.
Problem 22 Here we use the unstructured 3D diffusion operator from Problem 17,
whose grid is displayed in Figure 5. Three problem sizes are used, one with N = 8000. The
problem includes a very large jump discontinuity, O(10^26), in the diffusion coefficients.

Table 10. Results for Problem 22.

    Convergence Factor
    standard weights | iterative weights

Results are displayed in Table 10. Again we see that, on the smallest problem, iterative
weights have no effect, but that convergence there is fairly good anyway, even for
standard weights. On the two larger problems, iterative weights produce significant
improvement for both choices of alpha. Apparently, iterative weighting is countering the
effects of both poor element aspect ratios near the boundaries and jump discontinuities
in identifying influences among variables.

On these problems, and similar problems characterized by coefficient discontinuities
and/or extreme aspect ratios in the elements, iterative weight definition proves
to be quite effective. However, iterative weighting is not always effective at improving
slow AMG convergence, and in a few cases it can actually cause very minor
degradation in performance [24]. We study a new approach in [13], called element
interpolation (AMGe), which has the promise of overcoming the difficulties associated
with poor aspect ratios, misidentified influences, and thin-body elasticity, provided
the individual element stiffness matrices are available.
7. Conclusions. The need for fast solvers for many types of problems, especially
those discretized on unstructured meshes, is a clear indication that there is a market
for software with the capabilities that AMG offers. Our study here demonstrates the
robustness of AMG as a solver over a wide range of problems. Our tests indicate
that it can be further extended, and that robust, efficient codes can be developed
for problems that are very difficult to solve by other techniques. AMG is also shown
to have good scalability on model problems. This scalability does tend to degrade
somewhat with increasing problem complexity, but the convergence factors remain
tractable even in the worst of these situations.
--R
The multi-grid method for the diffusion equation with strongly discontinuous coefficients
Stabilization of algebraic multilevel iteration
The algebraic multilevel iteration methods - theory and applications
A class of hybrid algebraic multilevel preconditioning methods
Towards algebraic multigrid for elliptic problems of second order
Guide to multigrid development
Guide with applications to fluid dynamics
Outlines of a modular algebraic multilevel method
Interpolation and related coarsening techniques for the algebraic multigrid method
Additive multilevel-preconditioners based on bi-linear interpolation
A note on the vectorization of algebraic multigrid algorithms
Matrix Analysis
Convergence of algebraic multigrid methods for symmetric positive definite
Multigrid methods for variational problems: general theory for the V-cycle
Interpolation weights of algebraic multigrid
On smoothing properties of SOR relaxation for algebraic multigrid method
A simple parallel algebraic multigrid
Multigrid methods for solving the time-harmonic Maxwell equations with variable material parameters
An energy-minimizing interpolation for robust multi-grid methods
--TR
--CTR
Emden Henson , Ulrike Meier Yang, BoomerAMG: a parallel algebraic multigrid solver and preconditioner, Applied Numerical Mathematics, v.41 n.1, p.155-177, April 2002
Chi Shen , Jun Zhang , Kai Wang, Distributed block independent set algorithms and parallel multilevel ILU preconditioners, Journal of Parallel and Distributed Computing, v.65 n.3, p.331-346, March 2005
Randolph E. Bank, Compatible coarsening in the multigraph algorithm, Advances in Engineering Software, v.38 n.5, p.287-294, May, 2007
K. Kraus , J. Schicho, Algebraic Multigrid Based on Computational Molecules, 1: Scalar Elliptic Problems, Computing, v.77 n.1, p.57-75, February 2006
Oliver Bröker, Marcus J. Grote, Sparse approximate inverse smoothers for geometric and algebraic multigrid, Applied Numerical Mathematics, v.41 n.1, p.61-80, April 2002
Touihri, Variations on algebraic recursive multilevel solvers (ARMS) for the solution of CFD problems, Applied Numerical Mathematics, v.51 n.2-3, p.305-327, November 2004
J. J. Heys , T. A. Manteuffel , S. F. McCormick , L. N. Olson, Algebraic multigrid for higher-order finite elements, Journal of Computational Physics, v.204
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | scalability;algebraic multigrid;interpolation;unstructured meshes |
351544 | Observability of 3D Motion. | This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the epipolar constraint, applied to motion fields, and the other the positive depth constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors in the 3D motion parameters at the places representing the minima of the functions. For conventional video cameras possessing a restricted field of view, the analysis shows that for algorithms in both classes which estimate all motion parameters simultaneously, the obtained solution has an error such that the projections of the translational and rotational errors on the image plane are perpendicular to each other. Furthermore, the estimated projection of the translation on the image lies on a line through the origin and the projection of the real translation. The situation is different for a camera with a full (360 degree) field of view (achieved by a panoramic sensor or by a system of conventional cameras). In this case, at the locations of the minima of the above two functions, either the translational or the rotational error becomes zero, while in the case of a restricted field of view both errors are non-zero. Although some ambiguities still remain in the full field of view case, the implication is that visual navigation tasks, such as visual servoing, involving 3D motion estimation are easier to solve by employing panoramic vision. Also, the analysis makes it possible to compare properties of algorithms that first estimate the translation and on the basis of the translational result estimate the rotation, algorithms that do the opposite, and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion. Finally, the introduced framework points to new avenues for studying the stability of image-based servoing schemes. | Introduction
Visual Servoing and Motion Estimation
A broad definition of visual servoing amounts to the control of motion on the basis of image analysis. Thus,
a driver adjusting the turning of the wheel by monitoring some features in the scene, a system attempting to
insert a peg through a hole, a controller at some control panel perceiving a number of screens and adjusting a
number of levers, a navigating system trying to find its home, all perform visual servoing tasks. Such systems
must respond "appropriately" to changes in their environment; thus, on a high level, they can be described as
evolving or dynamical systems, which can be represented as a function from states and control signals to new
states; both the states and the control variables may be functions of time. One approach to controlling such a
system is to design an observer or state estimator to obtain an estimate of the state; for example, this estimator
might implement a partial visual recovery process. This estimate is used by a controller or state regulator to
compute a control signal to drive the dynamical system. Ideally, observation and control are separable in the sense
that if we have an optimal controller and an optimal observer then the control system that results from coupling
the two is guaranteed to be optimal [13]. Different agents, i.e., dynamical systems, have different capabilities and
different amounts of memory, with simple reactive systems at one end of the spectrum and highly sophisticated
and flexible systems, making use of scene descriptions and reasoning processes, at the other end.
In the first studies on visual control, efforts concentrated on the upper echelon of this spectrum, trying to
equip systems with the capability of estimating accurate 3D motion and the shape of the environment. Assuming
that this information could be acquired exactly, sensory feedback robotics was concerned with the planning and
execution of the robot's activities. This was characterized as the ``look and move'' approach and the servoing
approaches were "position-or-scene-based." The problem with such separation of perception from action was
that both computational goals turned out to be very difficult. On the one hand, the reconstruction of 3D
motion and shape is hard to achieve accurately and, in addition, many difficult calibration problems had to be
addressed. On the other hand, spatial planning and motion control are very sensitive to errors in the description
of the spatiotemporal environment. After this realization and with the emergence of active vision [3-6], attention
turned to the lower part of the spectrum, minimizing visual processing and placing emphasis on the regulator,
as opposed to the estimator. One of the main lessons learned from research on the static look and move control
strategy was that there ought to be easier things to do with images than using them to compute 3D motion and
shape. This led to formulations of visual servoing tasks which were such that the controller had to act on the
image, i.e., to move the manipulator's joints in such a way that the scene ends up looking a particular way. In
technical terms, the controller had to act in such a way that specific coordinate systems (hand, eye, scene, etc.)
were put in a particular relationship with each other. Furthermore, this relationship could be realized by using
directly available image measurements as feedback for the control loop. This approach is known as image-based
robot servoing [16, 18, 21], and in recent years it has given very interesting research results [14, 23, 24, 28]. Most
of these results deal with the computation of the image Jacobian (i.e., the differential relationship between the
camera frame and the scene frame), along with the camera's intrinsic and extrinsic parameters.
Regardless of the philosophical approach one adopts in visual servoing (scene-based or image-based), the
essential aspects of the problem amount to the recovery of the relationship between different coordinate systems
(such as the ones between camera, gripper, robot, scene, object, etc.). This could be the relationship itself or a
representation of the change of the relationship. Since a servoing system is in general moving (or observing moving
parts), a fundamental problem is the recovery of 3D motion from an image sequence. Whether this recovery is
explicit or implicit, without it the relationships between different coordinate frames cannot be maintained, and
servoing tasks cannot be achieved. Thus, it is important to understand how accurately 3D motion can be observed,
in order to understand how successfully servoing can be accomplished. Experience has shown that in practice
3D motion is very difficult to accurately observe, involving many ambiguities and sensitivities. Are there any
regularities in which the ambiguity in 3D motion expresses itself? In trying to estimate 3D motion, do the
introduced errors satisfy any constraints? As 3D rigid motion consists of the sum of a translation and a rotation,
are there differences in the inherent ambiguities governing the recovery of the different components of 3D motion?
In addition, we need to study this question independently of specific optimization algorithms. This is the problem
studied in this paper.
2 The Approach
There is a veritable cornucopia of techniques for estimating 3D motion but our approach should be algorithm
independent, in order for the results to be of general use. Equivalently, our approach should encompass all the
possible computational models that can be used to estimate 3D motion from image sequences. To do so, we need
to classify all possible approaches to 3D motion on the basis of the input used and the mathematical constraints
that are employed.
As a system moves in some environment, every point in the scene has a velocity with regard to the system.
The projection of these velocity vectors on the system's eye constitutes the so-called motion field. An estimate
of the motion field, called optical flow, starts by first estimating the spatiotemporal derivatives of the image
intensity function. These derivatives comprise the so-called normal flow which is the component of the flow along
the local image gradient, i.e., normal to the local edge. A system could start 3D motion estimation using normal
flow as input or it could first attempt to estimate the optical flow, though that is a very difficult problem, and
subsequently 3D motion. This means that an analysis of the difficulty of 3D motion estimation must consider
both inputs, optic flow fields and normal flow fields, and algorithms in the literature use one or the other input.
Regarding the mathematical constraints through which 3D motion is encoded in the input image motion,
extensive work in this area has established that there exist only two such constraints if no information about the
scene is available. The first is the "epipolar constraint" and the second, the "positive depth constraint." The
epipolar constraint ensures that corresponding points in the sequence are the projection of the same point in the
scene. The positive depth constraint requires that the depth of every scene point be positive, since the scene is
in front of the camera. Since knowledge of normal flow does not imply knowledge of corresponding points in the
sequence, the epipolar constraint cannot be used when normal flow is available.
The epipolar constraint has attracted most work. It can only be used when optic flow is available. Since many
measurements are present, one develops a function that represents deviation from the epipolar constraint all over
the image. A variety of approaches can be found in the literature using different metrics in representing epipolar
deviation and using different techniques to seek the optimization of the resulting functions. Furthermore, there
exist techniques that first estimate rotation and, on the basis of the result subsequently estimate translation
[9, 36, 37]; techniques that do the opposite [1, 22, 26, 31, 33, 38, 40]; and techniques that estimate all motion
parameters simultaneously [7, 8, 17, 35, 43]. The positive depth constraint, which has been used for normal flow
fields, is relatively new and is employed in the so-called direct algorithms [8, 20, 27]. One has to search for the 3D
motion that is consistent with the input and produces the minimum amount of negative depth. Put differently,
in these approaches the function representing the amount of negative depth must be minimized. Finally, one may
be able to use the "positive depth constraint" when optic flow is available, but there exist no algorithms yet in
the literature implementing this principle.
Looking at nature we observe that there exists a great variety of eye designs in organisms with vision. An
important characteristic is the field of view. There exist systems whose vision has a restricted field of view. This
is achieved by the corneal eyes of land vertebrates. Human eyes and common video cameras fall in that category
and they are geometrically modeled through central projection on a plane. We refer to these eyes as planar or
camera-type eyes. There exist also systems whose vision has a full 360 degree field of view. This is achieved by
the compound eyes of insects or by placing camera-type eyes on opposite sides of the head, as in birds and fish.
Panoramic vision is adequately modeled geometrically by projecting on a sphere using the sphere's center as the
center of projection. In studying the observability of 3D motion, we investigate both restricted field of view and
panoramic vision. It turns out that they have different properties regarding 3D motion estimation.
In our approach we employ a statistical model to represent the constraints to derive functions representing
deviation from epipolar geometry or the amount of negative depth, for both a camera-type eye and a spherical eye.
All possible, meaningful algorithms for 3D motion estimation can be understood from the minimization of these
functions. Thus, we perform a topographic analysis of these functions and study their global and local minima.
Specifically, we are interested in the relationships between the errors in 3D motion at the points representing
the minima of these surfaces. The idea behind this is that in practical situations any estimation procedure is
hampered by errors and usually local minima of the functions to be minimized are found as solutions.
3 Problem Statement
3.1 Prerequisites
We use the standard conventions for expressing image motion measurements for a monocular observer moving
rigidly in a static environment. We describe the motion of the observer by its translational velocity
and its rotational velocity with respect to a coordinate system OXY Z fixed to the nodal point of
the camera. Each scene point P with coordinates has a velocity -
R relative to the
camera. Projecting -
R onto a retina of a given shape gives the image motion field. If the image is formed on
a plane (Figure 1a) orthogonal to the Z axis at distance f (focal length) from the nodal point, then an image
point f) and its corresponding scene point R are related by
R, where z 0 is a unit vector in the
direction of the Z axis. The motion field becomes
(R
f z 0 \Theta (r \Theta (! \Theta
Z
R
r
Y
Z
x
y
f
O
z 0
r
R
f r
O
r
(a) (b)
Figure 1: Image formation on the plane (a) and on the sphere (b). The system moves with a rigid motion with
translational velocity t and rotational velocity omega. Scene points R project onto image points r and the 3D velocity
Rdot of a scene point is observed in the image as image velocity rdot.
with Z = R . z_0 representing the depth. If the image is formed on a sphere of radius f (Figure 1b) having the
center of projection as its origin, the image r of any point R is r = (f / |R|) R, with R = |R| being the norm of R (the range),
and the image motion is

    rdot = (1 / R) u_tr(t) + u_rot(omega) = (1 / R) ((1 / f)(t . r) r - f t) - omega x r.   (2)

The motion field is the sum of two components, one, u_tr, due to translation and the other, u_rot, due to rotation.
The depth Z or range R of a scene point is inversely proportional to the translational flow, while the rotational
flow is independent of the scene in view. As can be seen from (1) and (2), the effects of translation and scene
depth cannot be separated, so only the direction of translation, t/|t|, can be computed. We can thus choose the
length of t; throughout the following analysis f is set to 1, and the length of t is assumed to be 1 on the sphere
and the Z-component of t to be 1 on the plane. The problem of 3D motion estimation then amounts to finding
the scaled vector t and the vector omega from a representation of the motion field. In the following, to make the
analysis easier, for the camera-type eye we will employ a non-vector notation. Then, the first two coordinates of
r denote the image point in the Cartesian system oxy, with ox parallel to OX, oy parallel to OY and o the intersection of OZ with
the image plane. Denote by (x_0, y_0) = (f U / W, f V / W) the image point representing the direction of the translation
vector t (referred to as the Focus of Expansion (FOE) or Focus of Contraction (FOC) depending on whether W
is positive or negative). Equations (1) then become the well known equations expressing the flow measurement (u, v):

    u = u_tr + u_rot = (x - x_0)(W / Z) + (alpha x y / f - beta (x^2 / f + f) + gamma y),
    v = v_tr + v_rot = (y - y_0)(W / Z) + (alpha (y^2 / f + f) - beta x y / f - gamma x),   (3)

where (u_tr, v_tr)(x, y) and (u_rot, v_rot)(x, y) are the translational and rotational flow
components, respectively.
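For illustration, the planar motion field (3) can be evaluated directly as follows (a Python sketch; the function name and argument layout are ours).

    import numpy as np

    def planar_flow(x, y, Z, t, omega, f=1.0):
        """Flow components (3) for a camera-type (planar) eye at image point(s)
        (x, y) with depth Z, translation t = (U, V, W), rotation omega = (alpha, beta, gamma)."""
        U, V, W = t
        alpha, beta, gamma = omega
        u_tr = (-f * U + x * W) / Z            # equals (x - x0) W / Z with x0 = f U / W
        v_tr = (-f * V + y * W) / Z
        u_rot = alpha * x * y / f - beta * (x * x / f + f) + gamma * y
        v_rot = alpha * (y * y / f + f) - beta * x * y / f - gamma * x
        return u_tr + u_rot, v_tr + v_rot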
Regarding the value of the normal flow, if n is a unit vector at an image point denoting the orientation of the
gradient at that point, the normal flow v_n satisfies

    v_n = rdot . n.   (4)

Finally, the following convention is employed throughout the paper. We use letters with hat signs to represent
estimated quantities, unmarked letters to represent the actual quantities and the subscript "e" to denote errors,
where the error quantity is defined as the actual quantity minus the estimated one. For example, u_rot(omega) represents
actual rotational flow, u_rot(omega_hat) estimated rotational flow, t_e = t - t_hat the translational error vector, x_0e = x_0 - x_0hat,
alpha_e = alpha - alpha_hat, etc.
3.2 The model
The classic approach to 3D motion estimation is to minimize the deviation from the epipolar constraint. This
constraint is obtained by eliminating depth (or range) from equation (1) (or (2)). For both planar and spherical
eyes it is

    (t x r) . (rdot + omega x r) = 0.   (5)

Equating image motion with optic flow, this constraint allows for the derivation of 3D rigid motion on the basis
of optic flow measurements. One is interested in the estimates of translation t_hat and rotation omega_hat which best satisfy
the epipolar constraint at every point r according to some criterion of deviation. The Euclidean norm is usually
used, leading to the minimization [25, 34] of the function(1)

    Integral_image ((t_hat x r) . (rdot + omega_hat x r))^2 dr.   (6)
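A direct transcription of the integrand of (6), summed over a discrete set of flow measurements, reads as follows (Python sketch; names are ours).

    import numpy as np

    def epipolar_deviation(r, rdot, t_hat, w_hat):
        """Sum of squared epipolar deviations (6).  r: (N,3) image points (on the
        plane z = f or on the sphere), rdot: (N,3) flow vectors, t_hat/w_hat:
        candidate translation and rotation."""
        lhs = np.cross(t_hat, r)                     # t_hat x r
        rhs = rdot + np.cross(w_hat, r)              # rdot + w_hat x r
        return float(np.sum(np.einsum("ij,ij->i", lhs, rhs) ** 2))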
On the other hand, if normal flow is given, the vector equations (1) and (2) cannot be used directly. The only
constraint is scalar equation (4), along with the inequality Z > 0, which states that since the surface in view is in
front of the eye its depth must be positive. Substituting (1) or (2) into (4) and solving for the estimated depth
Z_hat or range R_hat, we obtain for a given estimate t_hat, omega_hat at each point r:

    Z_hat (or R_hat) = (u_tr(t_hat) . n) / (rdot . n - u_rot(omega_hat) . n).   (7)

If the numerator and denominator of (7) have opposite signs, negative depth is computed. Thus, to utilize the
positivity constraint one must search for the motion t_hat, omega_hat that produces a minimum number of negative depth
estimates. Formally, if r is an image point, define the indicator function

    I_nd(r) = 1 if Z_hat(r) (or R_hat(r)) < 0,   I_nd(r) = 0 otherwise.

Then estimation of 3D motion from normal flow amounts to minimizing [19, 20, 27] the function

    Integral_image I_nd(r) dr.   (8)
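Similarly, the amount of negative depth in (8) can be evaluated for a candidate motion by checking the signs of the numerator and denominator of (7) (Python sketch; the predicted fields u_tr(t_hat) and u_rot(omega_hat) are assumed to be evaluated at the measurement points).

    import numpy as np

    def negative_depth_count(n, v_n, u_tr_hat, u_rot_hat):
        """Count image points whose estimated depth (7) is negative.
        n: (N,3) gradient directions, v_n: (N,) normal-flow values,
        u_tr_hat, u_rot_hat: (N,3) predicted translational/rotational fields."""
        num = np.einsum("ij,ij->i", u_tr_hat, n)          # u_tr(t_hat) . n
        den = v_n - np.einsum("ij,ij->i", u_rot_hat, n)   # rdot.n - u_rot(w_hat) . n
        return int(np.count_nonzero(num * den < 0))       # opposite signs => negative depth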
Expressing -
r in terms of the real motion from (1) and (2), functions (6) and (8) can be expressed in terms of
the actual and estimated motion parameters t, !, - t and -
(or, equivalently, the actual motion parameters t; !
and the errors t
!) and the depth Z (or range R) of the viewed scene. To conduct any analysis,
a model for the scene is needed. We are interested in the statistically expected values of the motion estimates
resulting from all possible scenes. Thus, as our probabilistic model we assume that the depth values of the scene
are uniformly distributed between two arbitrary values Z min (or R min ) and Zmax (or Rmax
For the minimization of negative depth values, we further assume that the directions in which flow measurements are made are uniformly distributed in every direction for every depth. Parameterizing n by ψ, the angle between n and the x axis, we thus obtain the following two functions:
∫_{Z=Z_min}^{Z_max} ∫∫_image [(t̂ × r) · (ṙ + ω̂ × r)]² dr dZ,  (9)
∫_{ψ=0}^{π} ∫_{Z=Z_min}^{Z_max} ∫∫_image I_nd(r) dr dZ dψ,  (10)
¹ Because t × r introduces the sine of the angle between t and r, the minimization prefers vectors t close to the center of gravity of the points r. This bias has been recognized [40] and alternatives have been proposed that reduce this bias, but without eliminating the confusion between rotation and translation.
measuring deviation from the epipolar constraint and the amount of negative depth, respectively. Functions (9) and (10) are five-dimensional surfaces in (t_ε, ω_ε), the errors in the motion parameters. Finally, since for the scene in view we employ a probabilistic model, the results are of a statistical nature; that is, the geometric constraints between t_ε and ω_ε at the minima of (9) and (10) that we shall uncover should be interpreted as being likely to occur.
3.3 Negative depth and depth distortion
This section contains a few technical prerequisites needed for the study of negative depth minimization and a geometric observation that shows the relationship of epipolar minimization to minimization of negative depth.
Equation (7) shows the estimated depth Ẑ (or range R̂) given normal flow ṙ · n and estimates t̂, ω̂ of the motion. It can be further written as:
Ẑ (or R̂) = Z D,  (11)
with D, in the case of noiseless motion measurements (that is, when the measured normal flow equals ṙ · n), of the form
D = (u_tr(t̂) · n) / (u_tr(t) · n + Z u_rot(ω_ε) · n).  (12)
Equation (11) shows how wrong depth estimates are produced due to inaccurate 3D motion values. The distortion factor D multiplies the real depth value to produce the estimate. Equation (12), for a fixed value of D and n, describes a surface in (r, Z) (or (r, R)) space which is called an iso-distortion surface. Any such surface is to
be understood as the locus of points in space which are distorted in depth by the same multiplicative factor if
the image measurements are in direction n. If we fix n and vary D, the iso-distortion surfaces of the resulting
family change continuously as D varies. Thus all scene points giving rise to negative depth estimates lie between
the 0 and \Gamma1 distortion surfaces. The integral over all points (for all directions) giving rise to negative depth
estimates we call the negative depth volume. In Section 5 we will make use of the iso-distortion surfaces and
the negative depth volume to study in a geometric way function (10) resulting from minimization of the negative
depth values.
Let us now examine the two different minimizations from a geometric perspective. When deriving the deviation from the epipolar constraint we consider integration over all points and depth values of the expression [(ṙ − u_rot(ω̂)) · u_tr^⊥(t̂)]² or, equivalently, a², where u_tr^⊥(t̂) denotes the vector perpendicular to u_tr(t̂) in the plane of the image coordinates. When deriving the number of negative depth values we consider integration over all points and depth values of the angle α between the two vectors ṙ − u_rot(ω̂) and u_tr(t̂).
An illustration is given in Figure 2. The two measures considered in the two different minimizations are clearly related to each other. In the case of negative depth we consider the angle α, whereas in the case of the epipolar constraint we consider the squared distance a², which amounts to (sin α ‖u_tr(t̂)‖ ‖ṙ − u_rot(ω̂)‖)².
The major difference in using the two measures arises for angles α > 90°. The measure for negative depth
is monotonic in ff, but the measure for the deviation from the epipolar constraint is not, because it does not
differentiate between depth estimates of positive and negative value.
The fact that the computed depth has to be positive has not been considered in past approaches employing
minimization of the epipolar constraint. As this fact provides an additional constraint which could be utilized in
Figure 2: Different measures used in minimization constraints. (a) The angle α in the minimization of negative depth values from projections of motion fields. (b) The squared distance a² in the epipolar constraint.
the future, maybe in conjunction with the epipolar constraint, we are interested in the influence of this constraint
on 3D motion estimation as well. Thus, in addition to minimization of the functions (9) and (10) discussed
above, we study the minimization of a third function. This function amounts to the number of values giving
rise to negative depth when full flow measurements (optical flow) are assumed; the analysis is presented in Appendix B.
3.4 A model for the noise in the flow
In the analysis on the plane we will also consider noise in the image measurements, that is, we will consider flow values of the form (u + N_x, v + N_y), where (N_x, N_y) denotes the noise. The choice of the noise model is
motivated by the following considerations: First, the noise should be such that no specific directions are favored.
Second, assuming that noise is both additive and multiplicative, there should be a dependence between noise and
depth, because the translational flow component is proportional to the inverse depth. Therefore we define noise
using two stochastic variables, N_1 and N_2, and setting
(N_x, N_y) = N_1 + (1/Z) N_2,
with the first and second moments being
E(N_1) = E(N_2) = 0,  E(‖N_1‖²) = σ_1²,  E(‖N_2‖²) = σ_2²,
and all stochastic variables being independent of each other, independent of image position, and independent of
the depth.
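As a concrete illustration of this noise model, the short Python sketch below draws noise samples for a set of image points with known depths. The isotropic Gaussian choice for the two components and the names sigma1 and sigma2 are assumptions made here for illustration; the model itself only fixes the first and second moments and the independence properties.

```python
import numpy as np

def sample_flow_noise(Z, sigma1, sigma2, rng=None):
    """Sample (N_x, N_y) = N_1 + N_2 / Z for each depth value in Z.

    N_1, N_2 are zero-mean, isotropic and independent of position and depth;
    only the depth-scaled component couples noise to the scene.
    """
    rng = np.random.default_rng() if rng is None else rng
    Z = np.asarray(Z, dtype=float)
    n1 = rng.normal(0.0, sigma1, size=Z.shape + (2,))
    n2 = rng.normal(0.0, sigma2, size=Z.shape + (2,))
    return n1 + n2 / Z[..., None]

# Example: noise for 1000 points with depths uniform in [Zmin, Zmax].
rng = np.random.default_rng(0)
Z = rng.uniform(2.0, 10.0, size=1000)
noise = sample_flow_noise(Z, sigma1=0.05, sigma2=0.5, rng=rng)
```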
3.5 Overview of the paper, summary of results and related work
Our approach expresses functions (9) and (10) in terms of t, ω, t_ε and ω_ε and finds the conditions that t_ε and ω_ε satisfy at local minima which represent solutions of the different estimation algorithms. Procedures for
estimating 3D motion can be classified into those estimating either the translation or rotation as a first step and the
remaining component (that is, the rotation or translation) as a second step, and those estimating all components
simultaneously. Procedures of the former kind result when systems utilize inertial sensors which provide them
with estimates of one of the components of the motion, or when two-step motion estimation algorithms are used.
Thus, three cases need to be studied: the case where no prior information about 3D motion is available and
the cases where an estimate of translation or rotation is available with some error. Imagine that somehow the
rotation has been estimated, with an error ω_ε. Then our functions become two-dimensional in the variables t_ε and represent the space of translational error parameters corresponding to a fixed rotational error. Similarly, given a translational error t_ε, the functions become three-dimensional in the variables ω_ε and represent the space of
rotational errors corresponding to a fixed translational error. To study the general case, one needs to consider the
lowest valleys of the functions in 2D subspaces which pass through 0. In the image processing literature, such local minima are often referred to as ravine lines or courses. Each of the three cases is studied for four optimizations:
epipolar minimization for the sphere and the plane (full field of view and restricted field of view vision) and
minimization of negative depth for the sphere and the plane. Thus, there are twelve (four times three) cases, but
since the effects of rotation on the image are independent of depth, it makes no sense to perform minimization of
negative depth assuming an estimate of translation is available. Thus, we are left with ten different cases which
are studied below. These ten cases represent all the possible, meaningful motion estimation procedures.
The analysis shows that:
1. In the case of a camera-type eye (restricted field of view) for algorithms in both classes which estimate all
motion parameters simultaneously (i.e., there is no prior information), the obtained solution will have an
error (x_{0ε}, y_{0ε}) in the translation and (α_ε, β_ε, γ_ε) in the rotation such that x_{0ε}/y_{0ε} = −β_ε/α_ε and γ_ε = 0. This means that the projections of the translational and rotational errors on the image are perpendicular to each other and that the rotation around the Z axis has the least ambiguity. We refer to this constraint as the "orthogonality constraint." In addition, the estimated translation (x̂_0, ŷ_0) and the real translation (x_0, y_0) lie on a line passing through the origin of the image, that is, x_0/y_0 = x̂_0/ŷ_0. We refer to this second constraint
as the "line constraint." Similar results are achieved for the case where translation is estimated first and on
that basis rotation is subsequently found, while the case where rotation is first estimated and subsequently
translation provides different results. The work is performed both in the absence of error in the image
measurements-in which case it becomes a geometric analysis of the inherent confusion between rotation
and translation-and in the case where the image measurements are corrupted by noise that satisfies the
model from Section 3.4. In this case, we derive the expected values of the local and global minima. As will
be shown, the noise does not alter the local minima, and the global minima fall within the valleys of the
function without noise. Thus, the noise does not alter the functions' overall structure.
2. In the case of panoramic vision, for algorithms in both classes which estimate all motion parameters si-
multaneously, the obtained solution will have no error in the translation and the rotational error
will be perpendicular to the translation. In addition, for the case of epipolar minimization from optic flow,
given a translational error t_ε, the obtained solution will have no error in the rotation (ω_ε = 0), while for the case of negative depth minimization from normal flow, given a rotational error ω_ε, the obtained solution will
have no error in the translation. In other cases ambiguities remain. In the spherical eye case the analysis is
simply performed for noiseless flow.
A large number of error analyses have been carried out [2, 11, 12, 15, 29, 39, 41, 42] in the past for a
camera-type eye, while there is no published research of this kind for the full field of view case. None of the
existing studies, however, has attempted a topographic characterization of the function to be minimized for
the purpose of analyzing different motion techniques. All the studies consider optical flow or correspondence
as image measurements and investigate minimizations based on the epipolar constraint. Often, restrictive
assumptions about the structure of the scene or the estimator have been made, but the main results obtained
are in accordance with our findings. In particular, the following two results already occur in the literature.
(a) Translation along the x axis can be easily confounded with rotation around the y axis, and translation
along the y axis can be easily confounded with rotation around the x axis, for small fields of view and
insufficient depth variation. This fact has long been known from experimental observation, and has been
proved for planar scene structures and unbiased estimators [10]. The orthogonality constraint found
here confirms these findings, and imposes even more restrictive constraints. It shows, in addition, that
the x-translation and y-rotation, and y-translation and x-rotation, are not decoupled. Furthermore,
we have found that rotation around the Z axis can be most easily distinguished from the other motion
components.
(b) Maybank [32, 33] and Jepson and Heeger [29] established the line constraint earlier, but under much
more restrictive assumptions. In particular, they showed that for a small field of view, a translation
far away from the image center and an irregular surface, the function in (9) has its minima along a
line in the space of translation directions which passes through the true translation and the viewing
direction. Under fixation the viewing direction becomes the image center.
4 Epipolar Minimization: Camera-type Eye
Since the field of view is small, the quadratic terms in the image coordinates are very small relative to the linear and constant terms, and are therefore ignored. All the computations are carried out with the symbolic algebraic computation software Maple and, for brevity, intermediate results are not given. First the case of noise-free flow is studied.
Considering a circular aperture of radius e and setting the focal length f = 1, the function in (9) becomes an integral over Z ∈ [Z_min, Z_max], φ ∈ [0, 2π] and r ∈ [0, e] of the squared epipolar expression written in polar coordinates (x = r cos φ, y = r sin φ) (equation (13)). Performing the integration, one obtains a closed-form function E_1 of the motion parameters, the motion errors, the aperture radius e and the depth bounds Z_min and Z_max (equation (14)).
(a) Assume that the translation has been estimated with a certain error t_ε = (x_{0ε}, y_{0ε}) ≠ 0. Then the relationship among the errors in 3D motion at the minima of (14) is obtained from the first-order conditions ∂E_1/∂α_ε = ∂E_1/∂β_ε = ∂E_1/∂γ_ε = 0, which yield
α_ε/β_ε = −y_{0ε}/x_{0ε},  γ_ε = 0.  (15)
(b) Assuming that rotation has been estimated with an error (α_ε, β_ε, γ_ε), the relationship among the errors is obtained from ∂E_1/∂x_{0ε} = ∂E_1/∂y_{0ε} = 0. In this case, the relationship is very elaborate and the translational error depends
on all the other parameters-that is, the rotational error, the actual translation, the image size and the depth
interval. See Appendix A.
(c) In the general case, we need to study the subspaces in which E 1 changes least at its absolute minimum;
that is, we are interested in the direction of the smallest second derivative at 0, the point where the motion errors
are zero. To find this direction, we compute the Hessian at 0, that is, the matrix of the second derivatives of E_1 with respect to the five motion error parameters, and compute the eigenvector corresponding to the smallest eigenvalue. As can be seen from the scaled components of this vector, for points defined by this direction the translational and rotational errors are characterized by the "orthogonality constraint" α_ε/β_ε = −y_{0ε}/x_{0ε}, γ_ε = 0, and by the "line constraint" x_0/y_0 = x̂_0/ŷ_0.
Next we consider noise in the flow measurements. We model the noise (N_x, N_y) as defined in Section 3.4 and derive E(E_ep), the expected value of E_ep. This expectation is again an integral over the circular aperture and the depth interval, of the same form as (13) but with the noise added to the flow (equation (16)); performing the integration yields a closed-form expression in the motion errors, the noise moments, the aperture radius e and the depth bounds Z_min and Z_max.
(a) If we fix x_{0ε} and y_{0ε} and solve ∂E(E_ep)/∂α_ε = ∂E(E_ep)/∂β_ε = ∂E(E_ep)/∂γ_ε = 0, we obtain the same relationship as for noiseless flow described in (15).
This shows that the noise does not alter the expected behavior of techniques which estimate the translation in a first step.
(b) If we fix α_ε, β_ε, and γ_ε, and solve ∂E(E_ep)/∂x_{0ε} = ∂E(E_ep)/∂y_{0ε} = 0,
we obtain as before a complicated relationship between the translational error, the actual translation, and the
rotational error.
(c) To analyze the behavior of techniques which minimize over all 3D motion parameters, we study the global minimum of E(E_ep). From minimization with regard to the rotational parameters, we obtained (15) (the orthogonality constraint and γ_ε = 0). Substituting (15) into (16) and solving for ∂E(E_ep)/∂x_{0ε} = ∂E(E_ep)/∂y_{0ε} = 0, we obtain, in addition, the line constraint. Thus the absolute minimum of E(E_ep) is to be found in the direction of smallest increase in E_1, and is described by the constraint γ_ε = 0, by the orthogonality constraint, and by the line constraint.
5 Minimization of Negative Depth Volume: Camera-type Eye
In the following analysis we study the function describing the negative depth values geometrically, by means of the
negative depth volumes, that is, the points corresponding to negative depth distortion as defined in Section 3.3.
This allows us to incrementally derive properties of the function without considering it with respect to all its
parameters at once. For simplicity, we assume that the FOE and the estimated FOE are inside the image, and
we do not consider the exact effects resulting from volumes of negative depth in different directions being outside
the field of view. We first concentrate on the noiseless case. If, as before, we ignore terms quadratic in the image
coordinates, the 0 distortion surface (from equation (11)) becomes
u_tr(t̂) · n = 0,  (17)
and the −1 distortion surface takes the form
u_tr(t̂) · n + u_tr(t) · n + Z u_rot(ω_ε) · n = 0.  (18)
The flow directions (n_x, n_y) can alternatively be written as (cos ψ, sin ψ), with ψ ∈ [0, π] denoting the angle between n and the x axis. To simplify the visualization of the volumes of negative depth in different directions, we perform the following coordinate transformation to align the flow direction with the x axis: for every ψ we rotate the coordinate system by the angle ψ to obtain the new coordinates
x' = x cos ψ + y sin ψ,  y' = −x sin ψ + y cos ψ.
Equations (17) and (18), expressed in the rotated coordinates, become equations (19).
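The coordinate change is the usual planar rotation; a minimal Python sketch (assuming NumPy arrays of point coordinates) is:

```python
import numpy as np

def rotate_to_flow_direction(x, y, psi):
    """Rotate image coordinates by psi so that the gradient direction
    n = (cos psi, sin psi) becomes the x' axis."""
    c, s = np.cos(psi), np.sin(psi)
    return c * x + s * y, -s * x + c * y   # (x', y')
```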
(a) To investigate techniques which, as a first step, estimate the rotation, we first study the case γ_ε = 0 and then extend the analysis to the general case γ_ε ≠ 0.
If γ_ε = 0, the volume of negative depth values for every direction ψ lies between the 0 and −1 distortion surfaces. The 0 distortion surface is a plane parallel to the y'Z plane at distance x̂'_0 from the origin, and the −1 distortion surface is a plane parallel to the y' axis whose slope is inversely proportional to β'_ε and which intersects the x'y' plane in a line parallel to the y' axis. Thus we obtain a wedge-shaped volume parallel to the y' axis. Figure 3 illustrates the volume through a slice parallel to the x'Z plane.
The scene in view extends between the depth values Z_min and Z_max. We denote by A_ψ the area of the cross section parallel to the x'Z plane through the negative depth volume in direction ψ.
As can be seen from Figure 3, to obtain the minimum, x̂'_0 has to lie between the values of x' attained by the −1 distortion surface at Z_min and at Z_max; A_ψ is then the integral over Z ∈ [Z_min, Z_max] of the width of the wedge at depth Z. If we fix β'_ε and solve ∂A_ψ/∂x̂'_0 = 0, we obtain condition (20): the 0 distortion surface has to intersect the −1 distortion surface in the middle of the depth interval, that is, at Z_I = (Z_min + Z_max)/2. The resulting ratio (x̂'_0 − x'_0)/β'_ε depends only on the depth interval, and thus is independent of the direction ψ. Therefore, the negative depth volume is minimized if (20) holds for every direction. Since β'_ε and x'_{0ε} are the components of the rotational and translational errors expressed in the rotated coordinates, requiring (20) for every ψ yields the orthogonality constraint x_{0ε}/y_{0ε} = −β_ε/α_ε.
If γ_ε ≠ 0, the −1 distortion surface is no longer a plane. This surface can be most easily understood by slicing it with planes parallel to the x'y' plane. At every depth value Z, we obtain a line whose slope is inversely proportional to γ_ε Z and which intersects the x' axis at a point that depends on Z (Figure 4a). For any
given Z the slopes of the lines in different directions are the same. An illustration of the volume of negative depth
is given in Figure 4b.
Figure 3: Slice parallel to the x'Z plane through the volume of negative estimated depth for a single direction.
Figure 4: (a) Slices parallel to the x'y' plane through the 0 distortion surface (C_0) and the −1 distortion surface at different depth values. (b) For γ_ε ≠ 0: the volume of negative depth values between the 0 and −1 distortion surfaces.
Let us express the value found for x̂'_0 in the case of γ_ε = 0 as x'_I. In order to derive the position of x̂'_0 that minimizes the negative depth volume for the general case of γ_ε ≠ 0, we study the change of volume as x̂'_0 changes from x'_I to x'_I + d.
Referring to Figure 5, it can be seen that for any depth value Z, a change in the position of x̂'_0 from x'_I to x'_I + d causes the area of negative depth values to change by A_c = A_1 − A_2, where y'_1 and y'_2 denote the y' coordinates of the intersection points of the −1 distortion contour at depth Z with the 0 distortion contours x' = x'_I and x' = x'_I + d. The change V_c in negative depth volume for any direction is given by
V_c = ∫_{Z_min}^{Z_max} A_c dZ.
Substituting Z_I = (Z_min + Z_max)/2, it can be verified that V_c can be negative only for one sign of d. We are interested in the d which minimizes V_c. Solving ∂V_c/∂d = 0, we obtain a value of d that depends only on the depth interval. Thus, since d depends only on the depth interval, the minimum of the total negative depth volume is obtained if the volume in every direction is minimized. Therefore, for any rotational error (α_ε, β_ε, γ_ε), independent of γ_ε, we have the orthogonality constraint:
x_{0ε}/y_{0ε} = −β_ε/α_ε.
A comment on the finiteness of the image is necessary here. The values A c and V c have been derived for an
infinitely large image. If γ_ε is very small or some of the depth values Z in the interval [Z_min, Z_max] are small, the coordinates of the intersections y'_1 and y'_2 do not lie inside the image. The value of A_c can be at most the length
of the image times d. Since the slope of the \Gamma1 distortion contour for a given Z is the same for all directions, this
will have very little effect on the relationship between the directions of the translational and rotational motion
errors. It has an effect, however, on the relative values of the motion errors. Only if the intersections are inside
the image can (22) be used to describe the value of x'_{0ε} as a function of β'_ε and the interval of depth values of the scene in view.
(b) Next we investigate techniques which minimize both the translation and rotation at once. First consider a
certain translational error and change the value of γ_ε. An increase in γ_ε decreases the slope of the −1 distortion surfaces and thus, as can be inferred from Figure 4a, the area of negative depth values for every direction ψ and every depth Z increases. Thus γ_ε = 0. In addition, from above, we know that x_{0ε}/y_{0ε} = −β_ε/α_ε. The exact relationship of β'_ε to x'_{0ε} is characterized by the locations of local minima of the function A_ψ, which in the image processing literature are often referred to as "courses." (These are not the courses in the topographical sense [30].) To be more precise, we are interested in the local minima of A_ψ in the direction corresponding to the largest second derivative. We compute the largest eigenvalue, λ_1, of the Hessian H of A_ψ, that is, the matrix of the second derivatives of A_ψ with respect to x'_{0ε} and β'_ε, and the corresponding eigenvector e_1. Solving along the direction of e_1, we obtain relationship (23) between x'_{0ε} and β'_ε.
Last in an analysis of the noiseless case, we consider the effects due to the finiteness of the aperture. As before,
we consider a circular aperture. We assume a certain amount of translational error ‖t_ε‖, and we seek the direction of translational error that results in the smallest negative depth volume.
Independent of the direction of translation, (23) describes the relationship of x'_{0ε} and β'_ε for the smallest negative depth volume. Substituting (23) into (19), we obtain the cross-sections through the negative depth volume as a function of x'_{0ε} and the depth interval. The negative depth volume for every direction ψ amounts to A_ψ l_ψ, where l_ψ denotes the average extent of the wedge-shaped negative depth volume in direction ψ. The total negative depth volume is minimized if the sum of A_ψ l_ψ over all directions ψ is minimized. Considering a circular aperture, this minimization is achieved if the largest A_ψ corresponds to the smallest extent l_ψ and the smallest A_ψ corresponds to the largest l_ψ. This happens when the line constraint holds, that is, x_0/y_0 = x̂_0/ŷ_0 (see Figure 6).
It remains to be shown that noise in the flow measurements does not alter the qualitative characteristics of
the negative depth volume and thus the results obtained.
First we analyze the orthogonality constraint. The analysis is carried out without considering the size of the
aperture; as will be shown, this analysis leads to the orthogonality constraint.
First, let γ_ε = 0. Ignoring the image size, we are interested in E(V), the expected value of the integral of the cross sections A_ψ over the depth interval and over all directions ψ. We approximate the expectation of the integrand by performing a Taylor expansion at 0 up to second order.
Figure 5: A change of x̂'_0 from x'_I to x'_I + d causes the area of negative depth values A_c to increase by A_1 and to decrease by A_2; the change amounts to A_c = A_1 − A_2.
Figure 6: Cross-sectional view of the wedge-shaped negative depth volumes in a circular aperture. The minimization of the negative depth volume for a given amount of translational error occurs when x_0/y_0 = x̂_0/ŷ_0. A_{ψ_i} and l_{ψ_i} denote the areas of the cross sections and the average extents, respectively, for two angles ψ_1 and ψ_2. The two circles bounding the A_{ψ_i} correspond to the depth values Z_min and Z_max.
We are interested in the angle between the translational and rotational error which minimizes the negative depth volume. If we align the translational error with the x axis, that is, y_{0ε} = 0, and if we express the rotational error as (α_ε, β_ε) = ‖(α_ε, β_ε)‖ (cos χ, sin χ), we obtain E(V) as a function of χ. Solving ∂E(V)/∂χ = 0, we obtain (x_{0ε}, y_{0ε}) to be perpendicular to (α_ε, β_ε) for the minimizations considered.
Next, we allow fl ffl to be different from zero. For the case of a fixed translational error, again, the volume
increases as γ_ε increases, and thus the smallest negative depth volume occurs for γ_ε = 0. For the case of a
fixed rotational error we have to extend the previous analysis (studying the change of volume when changing the
estimated translation) to noisy motion fields. The −1 distortion surface changes while the 0 distortion surfaces remain the same. Therefore V_c changes accordingly, and E(V_c) takes the same form as V_c in (21). We thus obtain the orthogonality constraint for noisy motion fields as well.
Finally, we take into account the limited extent of a circular aperture for the case of global minimization.
As noise does not change the structure of the iso-distortion surfaces, as shown in Figure 6, in the presence of
noise, too, the smallest negative depth volume is obtained if the FOE and the estimated FOE lie on a line passing
through the image center. This proves the orthogonality constraint for the full model as well as the line constraint.
The global minimum of the negative depth volume is thus described by the constraint γ_ε = 0, the orthogonality
constraint and the line constraint.
6 Epipolar Minimization: Spherical Eye
The function representing deviation from the epipolar constraint on the sphere takes the simple form
E_ep = ∫_{R=R_min}^{R_max} ∫∫_sphere [ (t̂ × r) · ( (1/R) r × (r × t) − ω_ε × r ) ]² dA dR,
where A refers to a surface element. Due to the sphere's symmetry, for each point r on the sphere there exists a point with coordinates −r. Since the translational part of the integrand is even in r and the rotational part is odd, when the integrand is expanded the product terms integrated over the sphere vanish. Thus
E_ep = ∫_{R=R_min}^{R_max} ∫∫_sphere { (1/R²) [(t × t̂) · r]² + [(t̂ × r) · (ω_ε × r)]² } dA dR.
(a) Assuming that translation t̂ has been estimated, the ω_ε that minimizes E_ep is ω_ε = 0, since the resulting function is a nonnegative quadratic in ω_ε (minimum at zero). The difference between sphere and plane is already clear. In the spherical case, as shown here, if an error in the translation is made we do not need to compensate for it by making an error in the rotation (ω_ε = 0), while in the planar case we need to compensate to ensure that
the orthogonality constraint is satisfied!
(b) Assuming that rotation has been estimated with an error ω_ε, what is the translation t̂ that minimizes E_ep? Since R is uniformly distributed, integrating over R does not alter the form of the error in the optimization. E_ep consists of the sum of two terms:
K ∫∫_sphere [(t × t̂) · r]² dA  and  L ∫∫_sphere [(t̂ × r) · (ω_ε × r)]² dA,
where K and L are multiplicative factors depending only on R_min and R_max. For angles between t, t̂ and between t̂, ω_ε in the range of 0 to π/2, the two terms are monotonic functions of these angles. The first term attains its minimum when t̂ = t. Fix the distance between t and t̂, which fixes the value of the first term, and change the position of t̂; the second term takes its minimum when the angle between t̂ and ω_ε is as large as possible, as follows from the cosine theorem. Thus E_ep achieves its minimum when t̂ lies on the great circle passing through t and ω_ε, with the exact position depending on |ω_ε| and the scene in view.
(c) For the general case where no information about rotation or translation is available, we study the subspaces in which E_ep changes the least at its absolute minimum, i.e., we are again interested in the direction of the smallest second derivative at 0. For points defined by this direction we calculate, using Maple, that t_ε = 0 and ω_ε ⊥ t.
7 Minimizing Negative Depth Volume on the Sphere
(a) Assuming that the rotation has been estimated with an error ω_ε, what is the optimal translation t̂ that
minimizes the negative depth volume?
Since the motion field along different orientations n is considered, a parameterization is needed to express all
possible orientations on the sphere. This is achieved by selecting an arbitrary vector s; then, at each point r of
the sphere, s × r/‖s × r‖ defines a direction in the tangent plane. As s moves along half a circle, s × r/‖s × r‖ takes on every
possible orientation (with the exception of the points r lying on the great circle of s). Let us pick ω_ε perpendicular to s (s · ω_ε = 0).
We are interested in the points in space with estimated negative range values R̂. For n = s × r/‖s × r‖ and s · ω_ε = 0, the estimated range R̂ amounts to
R̂ = R [(t̂ × s) · r] / [(t × s) · r − R (ω_ε · r)(s · r)],
so that R̂ < 0 if sgn((t̂ × s) · r) ≠ sgn((t × s) · r − R (ω_ε · r)(s · r)), where sgn(x) provides the sign of x. This constraint divides the surface of the sphere into four areas, I to IV, whose locations are defined by the signs of the functions (t̂ × s) · r, (t × s) · r and (ω_ε · r)(s · r), as shown in Figure 7.
Figure 7: Classification of image points according to constraints on R. The four areas, I to IV, are defined by the signs of (t̂ × s) · r, (t × s) · r and (ω_ε · r)(s · r), and each carries a corresponding constraint on R. The areas are marked by different colors. The textured parts (parallel lines) in areas I and III denote the image points for which negative depth values exist if the scene is bounded. The two hemispheres correspond to the front of the sphere and the back of the sphere, both as seen from the front of the sphere.
For any direction n a volume of negative range values is obtained consisting of the volumes above areas I, II
and III. Areas II and III cover the same amount of area between the great circles (t × s) · r = 0 and (t̂ × s) · r = 0, and area I covers a hemisphere minus the area between these great circles. If the scene in view is unbounded, that is, R ∈ [0, +∞], there is for every r a range of values above areas I and III which result in negative depth estimates; in area I the volume at each point r is bounded from below, and in area III it is bounded from above, by the range value at which the denominator (t × s) · r − R (ω_ε · r)(s · r) vanishes. If there exist lower and upper bounds R_min and R_max in the scene, we obtain two additional curves, C_min and C_max, the loci where this denominator vanishes for R = R_min and R = R_max, respectively, and we obtain negative depth values in area I only between C_max and (ω_ε · r)(s · r) = 0, and in area III only between C_min and (ω_ε · r)(s · r) = 0. We are given ω_ε and t, and we are interested in the t̂ which minimizes the negative range volume. For any s the corresponding negative range volume becomes smallest if t̂ is on the great circle through t and s, that is, (t × s) · t̂ = 0, as will be shown next.
Let us consider a t̂ such that (t × s) · t̂ ≠ 0 and let us change t̂ so that (t × s) · t̂ = 0. As (t × s) · t̂ changes, the area of type II becomes an area of type IV and the area of type III becomes an area of type I. The negative depth
volume is changed as follows: It is decreased by the spaces above area II and area III, and it is increased by
the space above area I (which changed from type III to type I). Clearly, the decrease is larger than the increase,
which implies that the smallest volume is obtained for s, t, t̂ lying on a great circle. Since this is true for any s, the minimum negative depth volume is attained for t̂ = t.²
(b) Next, assume that no prior knowledge about the 3D motion is available. We want to know for which configurations of t̂ and ω_ε the negative depth values change the least in the neighborhood of the absolute minimum, that is, at t_ε = 0, ω_ε = 0. From the analysis above, it is known that for any ω_ε ≠ 0, t̂ = t. Next, we show that ω_ε is indeed different from zero: Take t ≠ t̂ on the great circle of s and let ω_ε, as before, be perpendicular to s.
Then the curves C_max and C_min can be expressed in terms of sin ∠(t, s), where ∠(t, s) denotes the angle between the vectors t and s. These curves consist of a great circle and a circle parallel to the great circle s · r = 0 (Figure 8). If sin ∠(t, s) is large enough relative to the ratio of R_min and R_max, this circle disappears.
Figure 8: Configuration for t and t̂ on the great circle of s and ω_ε perpendicular to s. The textured part of area I denotes image points for which negative depth values exist if the scene is bounded.
Consider next two flow directions defined by vectors s_1 and s_2 placed symmetrically with respect to t and t̂. For every point r_1 in area III defined by s_1 there exists a point r_2 in area I defined by s_2 such that the negative estimated ranges above r_1 and r_2 add up to R_max − R_min. Thus the volume of negative range obtained from s_1 and s_2 amounts to the area of the sphere times (R_max − R_min) (area II of s_1 contributes a hemisphere; area III of s_1 and area I of s_2 together contribute a hemisphere). The total negative range volume can be decomposed into three components: a component V_1 originating from the set of s between t and t̂, a component V_2 originating from the set of s symmetric in t to the set in V_1, and a component V_3 corresponding to the remaining s, which consists of range values above areas of type I only. If for all s in V_3 the circle described above disappears, V_3 is zero. Thus, for all R_min and R_max for which this happens, the negative range volume is equally large and amounts to the area on the sphere times (R_max − R_min) times a constant factor; otherwise V_3 takes on values different from zero.
² A word of caution about the parameterization used for directions s × r/‖s × r‖ is needed. It does not treat all orientations equally (as s varies along a great circle with constant speed, s × r accelerates and decelerates). Thus, to obtain a uniform distribution, normalization is necessary. The normalization factors, however, do not affect the previous proof, due to symmetry.
This shows that for any t_ε ≠ 0, there exist vectors ω_ε ≠ 0 which give rise to the same negative depth volume as ω_ε = 0. However, for any such ω_ε ≠ 0 this volume is larger than the volume obtained by setting t_ε = 0. It follows that t̂ = t. From Figure 7, it can furthermore be deduced that for a given ω_ε the negative depth volume, which for t̂ = t lies only above areas of type I, decreases as t moves along a great circle away from ω_ε, as the areas between C_min and C_max and between C_min and (t × s) · r = 0 decrease. This proves that, in addition to t_ε = 0, the rotational error ω_ε is perpendicular to t.
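For readers who want to check such statements numerically, the following Python sketch estimates the negative-range volume by Monte Carlo: it samples points r on the sphere, ranges uniformly in [R_min, R_max] and flow directions via random vectors s, and counts negative estimated ranges using the ratio form of R̂ assumed above. The sampling scheme and this specific test are our illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def negative_range_fraction(t, t_hat, w_eps, R_min, R_max, n=200_000, seed=0):
    """Monte Carlo estimate of the fraction of samples (r, R, s) whose
    estimated range R_hat = R (t_hat x s . r) / ((t x s . r) - R (w_eps . r)(s . r))
    is negative (spherical eye, flow direction n = s x r / |s x r|)."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=(n, 3)); r /= np.linalg.norm(r, axis=1, keepdims=True)
    s = rng.normal(size=(n, 3)); s /= np.linalg.norm(s, axis=1, keepdims=True)
    R = rng.uniform(R_min, R_max, size=n)
    num = R * np.einsum('ij,ij->i', np.cross(t_hat, s), r)
    den = (np.einsum('ij,ij->i', np.cross(t, s), r)
           - R * (r @ w_eps) * np.einsum('ij,ij->i', s, r))
    return float(np.mean(num * den < 0.0))

# The fraction should be smallest when t_hat is aligned with the true t.
t = np.array([1.0, 0.0, 0.0]); w_eps = np.array([0.0, 0.0, 0.05])
for shift in (0.0, 0.2, 0.4):
    t_hat = t + np.array([0.0, shift, 0.0]); t_hat /= np.linalg.norm(t_hat)
    print(shift, negative_range_fraction(t, t_hat, w_eps, R_min=1.0, R_max=10.0))
```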
8 Conclusions
The preceding results constitute a geometric statistical investigation of the observability of 3D motion from
images. On their basis, a number of striking conclusions can be drawn. First, they clearly demonstrate the
advantages of panoramic vision in the process of 3D motion estimation. Table 1 lists the eight out of ten cases
which lead to clearly defined error configurations. It shows that 3D motion can be estimated more accurately
with spherical eyes. Depending on the estimation procedure used-and systems might use different procedures
for different tasks-either the translation or the rotation can be estimated very accurately. For planar eyes, this is
not the case, as for all possible procedures there exists confusion between the translation and rotation. The error
configurations also allow systems with inertial sensors to use more efficient estimation procedures. If a system
utilizes a gyrosensor which provides an approximate estimate of its rotation, it can employ a simple algorithm
based on the positive depth constraint for only translational motion fields to derive its translation and obtain a
very accurate estimate. Such algorithms are much easier to implement than algorithms designed for completely
unknown rigid motions, as they amount to searches in 2D as opposed to 5D spaces [19]. Similarly, there exist
computational advantages for systems with translational inertial sensors in estimating the remaining unknown
rotation.
Since the positive depth constraint turns out to be very powerful and since epipolar minimization does not
consider depth positivity, an interesting research question that arises for the future is how to couple the epipolar
constraint with the positive depth constraint. We attempted a first investigation into this problem through a
study of negative depth on the basis of optic flow for the plane in Appendix B, which gave very interesting results.
Specifically, we found that estimating all motion parameters simultaneously by minimizing negative depth from
optic flow provides a solution with no error in the translation. However, the rotation cannot be decoupled from the
translation, which makes it clear that for cameras with restricted fields of view the problem of rotation/translation
confusion cannot be escaped.
Camera-type eyes are found in nature in systems that walk and perform sophisticated manipulation because
such systems have a need for very accurate segmentation and shape estimation and thus high resolution in a
limited field of view. Panoramic vision, either through compound eyes or a pair of camera-type eyes positioned
on opposite sides of the head is usually found in flying systems which have the obvious need for a larger field of
view but also rely on accurate 3D motion estimation as they always move in an unconstrained way. When we
face the task of equipping robots with visual sensors, we do not have to necessarily copy nature, and we also do
not have to necessarily use what is commercially available. Instead, we could construct new, powerful eyes by
taking advantage of both the panoramic vision of flying systems and the high-resolution vision of primates. An
eye like the one in Figure 9, assembled from a few video cameras arranged on the surface of a sphere, can easily
estimate 3D motion since, while it is moving, it is sampling a spherical motion field!
Such an eye not only has panoramic properties, allowing very accurate determination of the transformations
relating multiple views, but it has the unexpected benefit of making it easy to estimate image motion with high
accuracy. Any two cameras with overlapping fields of view also provide high-resolution stereo vision, and this
collection of stereo systems makes it possible to locate a large number of depth discontinuities. Given scene
discontinuities, image motion can be estimated very accurately. As a consequence, having accurate 3D motion
Table 1: Summary of results

Epipolar minimization, given optic flow:
  I. Spherical eye: (a) Given a translational error t_ε, the rotational error ω_ε = 0. (b) Without any prior information, t_ε = 0 and ω_ε ⊥ t.
  II. Camera-type eye: (a) For a fixed translational error (x_{0ε}, y_{0ε}), the rotational error is of the form α_ε/β_ε = −y_{0ε}/x_{0ε}, γ_ε = 0. (b) Without any a priori information about the motion, the errors satisfy γ_ε = 0, the orthogonality constraint and the line constraint.

Minimization of negative depth volume, given normal flow:
  I. Spherical eye: (a) Given a rotational error ω_ε, the translational error t_ε = 0. (b) Without any prior information, t_ε = 0 and ω_ε ⊥ t.
  II. Camera-type eye: (a) Given a rotational error, the translational error satisfies x_{0ε}/y_{0ε} = −β_ε/α_ε. (b) Without any error information, the errors satisfy γ_ε = 0, the orthogonality constraint and the line constraint.
and image motion, the eye in Figure 9 is very well suited to developing accurate models of the world necessary
for many robotic/servoing applications.
Figure 9: A compound-like eye composed of conventional video cameras, arranged on a sphere and looking outward.
--R
Determining 3D motion and structure from optical flow generated by several moving objects.
Inherent ambiguities in recovering 3-D motion and structure from a noisy flow field
Active vision.
Active perception.
Principles of animate vision.
Rigid body motion from depth and optical flow.
A Computational Approach to Visual Motion Perception.
Estimating 3-D egomotion from perspective image sequences
On the Error Sensitivity in the Recovery of Object Descriptions.
Analytical results on error sensitivity of motion estimation from two views.
Understanding noise sensitivity in structure from motion.
Planning and Control.
Simultaneous robot-world and hand-eye calibration
Robustness of correspondence-based structure from motion
A new approach to visual servoing in robotics.
Motion and structure from motion from point and line matches.
Direct perception of three-dimensional motion from patterns of visual motion
Qualitative egomotion.
Subspace methods for recovering rigid motion I: Algorithm and implemen- tation
Visual guided object grasping.
Robot Vision.
Relative orientation.
A tutorial on visual servo control.
Subspace methods for recovering rigid motion II: Theory.
The interpretation of a moving retinal image.
Algorithm for analysing optical flow based on the least-squares method
A Theoretical Study of Optical Flow.
Theory of Reconstruction from Image Motion.
Estimation of three-dimensional motion of rigid objects from noisy observations
Egomotion and relative depth map from optical flow.
Determining instantaneous direction of motion from optical flow generated by a curvilinear moving observer.
Processing differential image motion.
Optimal computing of structure from motion using point correspondence.
Optimal motion estimation.
Understanding noise: The critical role of motion error in scene reconstruction.
Statistical analysis of inherent ambiguities in recovering 3-D motion from a noisy flow field
--TR
Algorithm for analysing optical flow based on the least-squares method
Inherent Ambiguities in Recovering 3-D Motion and Structure from a Noisy Flow Field
Estimating 3D Egomotion from Perspective Image Sequence
Relative orientation
Estimation Three-Dimensional Motion of Rigid Objects from Noisy Observations
Analytical results on error sensitivity of motion estimation from two views
Planning and control
Subspace methods for recovering rigid motion I
Statistical Analysis of Inherent Ambiguities in Recovering 3-D Motion from a Noisy Flow Field
Principles of animate vision
Two-plus-one-dimensional differential geometry
Qualitative egomotion
Robot Vision
--CTR
John Oliensis, The least-squares error for structure from infinitesimal motion, International Journal of Computer Vision, v.61 n.3, p.259-299, February/March 2005
Tao Xiang , Loong-Fah Cheong, Understanding the Behavior of SFM Algorithms: A Geometric Approach, International Journal of Computer Vision, v.51 n.2, p.111-137, February
Jan Neumann , Cornelia Fermller , Yiannis Aloimonos, A hierarchy of cameras for 3D photography, Computer Vision and Image Understanding, v.96 n.3, p.274-293, December 2004
Abhijit S. Ogale , Cornelia Fermuller , Yiannis Aloimonos, Motion Segmentation Using Occlusions, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.6, p.988-992, June 2005 | positive depth constraint;error analysis;epipolar constraint;3D motion estimation |
351545 | First-Order System Least-Squares for the Helmholtz Equation. | This paper develops a multilevel least-squares approach for the numerical solution of the complex scalar exterior Helmholtz equation. This second-order equation is first recast into an equivalent first-order system by introducing several "field" variables. A combination of scaled L2 and H-1 norms is then applied to the residual of this system to create a least-squares functional. It is shown that, in an appropriate Hilbert space, the homogeneous part of this functional is equivalent to a squared graph norm, that is, a product norm on the space of individual variables. This equivalence to a norm that decouples the variables means that standard finite element discretization techniques and standard multigrid solvers can be applied to obtain optimal performance. However, this equivalence is not uniform in the wavenumber k, which can signal degrading performance of the numerical solution process as k increases. To counter this difficulty, we obtain a result that characterizes the error components causing performance degradation. We do this by defining a finite-dimensional subspace of these components on whose orthogonal complement k-uniform equivalence is proved for this functional and an analogous functional that is based only on L2 norms. This subspace equivalence motivates a nonstandard multigrid method that attempts to achieve optimal convergence uniformly in k. We report on numerical experiments that empirically confirm k-uniform optimal performance of this multigrid solver. We also report on tests of the error in our discretization that seem to confirm optimal accuracy that is free of the so-called pollution effect. | Introduction
1. Introduction. The scalar Helmholtz equation with exterior radiation boundary
conditions describes a variety of wave propagation phenomena. One such phenomenon
is the electromagnetic scattering of time-harmonic waves, which, since the
advent of stealth technology, has helped propel an interest in e#cient numerical solution
techniques. However, such boundary value problems are challenging because
they are both indefinite and non-self-adjoint. Hence, standard numerical procedures
for solving them generally su#er from poor discretization accuracy and slow convergence
of the algebraic solver. Accuracy can be improved by taking meshes that are
refined enough for the given wavenumber, but such refinements are impractical for
many problems. Also, convergence of the iterative solver can be improved by preconditioning
the algebraic system, but finding an appropriate preconditioner is problematic
and, in general, robust iterative solvers for indefinite and non-self-adjoint problems
are di#cult to design.
# Received by the editors May 29, 1998; accepted for publication (in revised form) February 16,
1999; published electronically April 28, 2000. This work was sponsored by the National Science Foundation
under grant DMS-9706866 and the Department of Energy under grant DE-FG03-93ER25165.
http://www.siam.org/journals/sisc/21-5/33977.html
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, P.O. Box
808, L-661, Livermore, CA 94551 (lee123@llnl.gov).
Applied Math Department, Campus Box 526, University of Colorado at Boulder, Boulder,
CO 80309-0526 (tmanteuf@boulder.colorado.edu, stevem@boulder.colorado.edu, jruge@boulder.
colorado.edu).
1928 LEE, MANTEUFFEL, MCCORMICK, AND RUGE
Standard multigrid methods are no exception. Helmholtz problems tax multi-grid
methods by admitting certain highly oscillatory error components that yield
relatively small residuals. Because these components are oscillatory, standard coarse
grids cannot represent them well, so coarsening cannot eliminate them e#ectively.
Because they yield small residuals, standard relaxation methods cannot e#ectively
reduce them. Compounding these di#culties is the property that the dimension of
the subspace of troublesome components increases with increasing wavenumber k.
One approach to ameliorate these di#culties (cf. [9], [10], [21], and [23]) is to introduce
ray-like basis functions on the coarser grids, via exponential interpolation and
weighting, and to use multiple coarsening, where several coarse grids are used at a
given discretization level to resolve some of these error components. Introduction
of ray basis functions allows the oscillatory error components to be represented on
coarser grids, and successive coarsening with an increasing but controlled number of
grids allows a full range of these components to be resolved on the coarser levels.
Since the magnitude of the wavenumber k dictates the oscillatory nature of the error
components, then judicious introduction of exponential interpolation and weighting
and of new coarse grid problems as a function of k leads to multigrid schemes whose
convergence is k-uniform.
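As a rough illustration of the idea of ray-like coarse-grid basis functions, the Python sketch below builds a one-dimensional prolongation operator whose weights are standard linear interpolation modulated by e^{ikx}, so that the oscillatory component e^{ikx} is reproduced exactly from the coarse grid. This is only a schematic 1D model under assumed grid spacing h and wavenumber k; it is not the interpolation used in [9], [10], [21], or [23].

```python
import numpy as np

def ray_prolongation(n_coarse, h, k):
    """1D exponential (ray) interpolation from a coarse grid (spacing 2h)
    to a fine grid (spacing h): linear interpolation of the smooth envelope
    multiplied by the oscillation exp(i*k*x)."""
    n_fine = 2 * n_coarse - 1
    P = np.zeros((n_fine, n_coarse), dtype=complex)
    x_fine = h * np.arange(n_fine)
    x_coarse = 2 * h * np.arange(n_coarse)
    for j in range(n_coarse):
        P[2 * j, j] = 1.0                          # coincident grid points
        if j + 1 < n_coarse:
            xm = x_fine[2 * j + 1]                 # fine-grid midpoint
            # average the smooth envelope, restore the phase at the midpoint
            P[2 * j + 1, j] = 0.5 * np.exp(1j * k * (xm - x_coarse[j]))
            P[2 * j + 1, j + 1] = 0.5 * np.exp(1j * k * (xm - x_coarse[j + 1]))
    return P
```

One can verify that applying P to the coarse samples of e^{ikx} returns the fine samples of e^{ikx} exactly, which is the property that lets coarse grids represent the oscillatory near-nullspace components.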
Direct application of such a multigrid algorithm to the scalar Helmholtz equation
would not generally achieve optimal discretization accuracy that is uniform in k. Because
k-uniform optimality of the discretization and multigrid solver is the central aim
of this paper, the multigrid algorithm will be applied instead to a carefully designed
first-order system least-squares (FOSLS) (cf. [11], [12], [21], and [22]) formulation
of the Helmholtz problem. This FOSLS methodology involves recasting the scalar
equation into a first-order system by introducing "field" variables, deriving boundary
conditions for these new variables, and applying a least-squares approach to the
resulting first-order system boundary value problem.
An important consideration in the numerical solution of Helmholtz problems is
the so-called pollution e#ect. The results in [3], [5], and [19] show that, for large
wavenumber k, even when kh # 1, the dispersive nature of the Galerkin finite element
discretization introduces a large phase lead error term and, in fact, a sharp H 1 error
estimate is contaminated by a pollution term unless k 2 h # 1. A stabilized Galerkin
least-squares finite element method was developed in [3], [5], and [19] to reduce the
pollution e#ect and, hence, to ameliorate the e#ects of this restrictive condition.
In the work presented below, because the FOSLS methodology leads to a minimization
principle, a Rayleigh-Ritz principle can be applied. Thus, standard bases
can be used to achieve O(kh) discretization error bounds in the least-squares norm
(that is, the norm determined by the functional itself) for the FOSLS formulation.
Our numerical results will suggest that this pollution-free performance is achieved by
the FOSLS formulation-not only the least-squares norm but in a scaled H 1 norm as
well.
Thus, the aim of this paper is two-fold: k-uniform multigrid convergence factors
and pollution-free discretization accuracy. Since a fast k-uniformly convergent solver
has been the most di#cult to obtain historically, this will be the primary focus of our
work. However, we will also show in our final numerical experiment that the FOSLS
discretization appears to be pollution free.
Substantial literature is available on computational electromagnetics. Algorithms
that apply to the time-dependent Maxwell equations or div-curl systems in general
include those described in [13], [20], [24], [26], and [27].
FOSLS FOR HELMHOLTZ 1929
Studies based on first-order system formulations of the Helmholtz equation include
[16], [18], [21], and [23]. The approach described in [16] does not include a curl
term in the functional, so its discretizations are somewhat restrictive and its standard
multigrid methods could not perform well with large wavenumbers, as expected.
The functional in [18] incorporates a curl expression but not in a way that achieves
uniformity in the discretization (or the multigrid solver, had they analyzed it). In
fact, except for [21] and [23] which is the basis of the work presented in the present
paper, none of the methods cited above were shown to achieve optimal discretization
accuracy and multigrid convergence.
For general literature on FOSLS, see [8], [11], [12], [21], and [22], which treat
least-squares functionals based on L 2 and L 2
(that is, a combination of L 2 and
products. The least-squares functionals described in the next section are
also based on these norms. In fact, the analytical techniques used in these papers are
also used to derive some of the theoretical results presented below. These techniques
enable the establishment of several uniform norm equivalence results that would not
be possible by way of the more restrictive Agmon-Douglis-Nirenberg theory (cf. [2]).
This paper is organized as follows. Section 2 introduces the functional setting, the
first-order system, and the L 2 and L 2
least-squares formulations. In section 3,
a coercivity result for the L 2
defined over an appropriate Hilbert
space is established using a compactness argument. Unfortunately, this coercivity
estimate does not hold uniformly with respect to the wavenumber, so in section 4 we
introduce an appropriate subspace on which we do prove k-uniform coercivity. These
estimates provide an intuitive basis for the nonstandard multigrid method developed
in section 5. In the final section, we report on numerical tests that demonstrate k-uniform
pollution-free optimality of the discretization technique and multigrid solver.
2. FOSLS formulation. Let D # 2 be a bounded domain, which for technical
reasons (e.g., scaled H 1 equivalence of the L 2 FOSLS functional; cf. Theorems 3.2
and 4.2) will be assumed to have a C 1,1 or convex polygonal boundary # i of positive
measure. Consider the exterior Helmholtz boundary value problem
D,
Here, k is the wavenumber, f is in L 2
loc
#r
denotes the derivative in
the radial direction used in the asymptotic Sommerfeld radiation condition (cf. [14]).
To solve (2.1) numerically, the unbounded domain is truncated and the Sommerfeld
radiation condition is approximated. Thus, let B be a ball containing D (D # B)
with C 1,1 boundary # e , so that the region bounded by # i # e is annulus-like.
Assuming now that f # L
2(# , consider the reduced boundary value problem
in# ,
Here, a first-order approximation to the Sommerfeld radiation condition is used.
Assume henceforth that (2.2) has a unique solution in H
Introducing the "field" variable
k #p, (2.2) can be recast into the first-order
system
in# ,
or
where L is the first-order operator corresponding to (2.3). Note that (2.3) is essentially
a scaled form of Maxwell's equations that has been reduced by assuming a transverse
magnetic wave solution. Variable p gives the electric field and variable u gives the
magnetic field.
To analyze (2.3) and the corresponding least-squares formulations, several functional
spaces are required. With s being a nonnegative integer, let H
s(# denote the
usual Sobolev space of order s with norm # s . With denote the
usual Sobolev trace space with norm # t (cf. [1]). Let # denote the L 2 (#) inner
product. The following spaces and scaled H 1 norm will be needed:
0,d
The boundary condition imposed in the definition of
W(# is to be taken in the
trace sense. Finally, let H
0,d denote the completion of L
2(# in the
respective inverse norms
(, w)
#w# 1,k
FOSLS FOR HELMHOLTZ 1931
and
0,d
(, w)
#w# 1,k
It is well known (cf. [17]) that [H(div; # H
1(# is a Hilbert
space in the norm
where
and
Thus, it is also a Hilbert space in the norm
Using the continuity of the trace operators from H
2 (#) and H(div;
(#), it is easy to see that W is also a Hilbert space in the # k norm
(cf. [22]).
Define the L² least-squares functional F(u, p; f) and the L²/H⁻¹ least-squares functional G(u, p; f) by measuring the residuals of system (2.3) in squared L² norms and in a combination of squared L² and the scaled negative norms ||·||_{-1} and ||·||_{-1,d},
respectively. Both functionals incorporate curl terms that, with essentially no increase
in complexity for F and only moderate increase for G, allow for simplified finite
element discretization and multigrid solution processes.
For brevity, the equivalence results of the next two sections focus on G; we will
state but not prove the analogous H 1,k equivalence results for F (cf. [21] for detailed
proofs of similar results).
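To give a feel for how such a least-squares functional is discretized and minimized in practice, the following Python sketch sets up a one-dimensional analogue: the equation p'' + k²p = f on (0, 1) is written as the first-order system p' − iku = 0, u' − ikp = f/(ik), the residuals are measured in a discrete L² norm on a uniform grid, and the resulting least-squares problem is solved directly. The 1D system, the boundary conditions p(0) = 0 and u(1) = p(1) (a crude radiation-type closure), and all discretization choices are our own simplifications for illustration; they are not the functional F of this paper, and the direct solve stands in for the multigrid solver discussed later.

```python
import numpy as np

def fosls_1d_helmholtz(k, n, f):
    """Least-squares solve of the 1D model system
         p' - i k u = 0,   u' - i k p = f/(i k)   on (0, 1),
       with p(0) = 0 and u(1) = p(1).
       Unknowns are nodal values [p_0..p_n, u_0..u_n]."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    rows, rhs = [], []
    w = np.sqrt(h)                                   # L2 weight for cell residuals
    for j in range(n):                               # residuals at cell midpoints
        r1 = np.zeros(2 * (n + 1), dtype=complex)
        r1[j], r1[j + 1] = -1.0 / h, 1.0 / h         # p'
        r1[n + 1 + j] = r1[n + 1 + j + 1] = -0.5j * k   # -ik * (average of u)
        rows.append(w * r1); rhs.append(0.0)
        r2 = np.zeros(2 * (n + 1), dtype=complex)
        r2[n + 1 + j], r2[n + 1 + j + 1] = -1.0 / h, 1.0 / h   # u'
        r2[j] = r2[j + 1] = -0.5j * k                # -ik * (average of p)
        fm = f(0.5 * (x[j] + x[j + 1]))
        rows.append(w * r2); rhs.append(w * fm / (1j * k))
    bc1 = np.zeros(2 * (n + 1), dtype=complex); bc1[0] = 1.0        # p(0) = 0
    bc2 = np.zeros(2 * (n + 1), dtype=complex)
    bc2[n], bc2[2 * n + 1] = -1.0, 1.0                              # u(1) - p(1) = 0
    rows += [bc1, bc2]; rhs += [0.0, 0.0]
    A, b = np.array(rows), np.array(rhs, dtype=complex)
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    return x, sol[: n + 1], sol[n + 1:]              # grid, p, u

# Example: k = 20 with a localized source.
x, p, u = fosls_1d_helmholtz(k=20.0, n=400, f=lambda s: np.exp(-200.0 * (s - 0.5) ** 2))
```

In the 2D formulation of this paper, the normal equations of such a minimization are assembled by a Rayleigh-Ritz finite element method and solved by the multigrid algorithm of section 5 rather than by a dense direct solve.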
3. Nonuniform coercivity:
W(#3 A compactness argument (cf. [7] and [8])
is used here to establish equivalence between the homogeneous part of the L 2
least-squares functional, G(u, p; 0), and the square of the product norm [#] 2
# 1,k .
In what follows, C denotes a generic constant that may change meaning at each
occurrence but is independent of k. The notation C k will be used (in this section
when dependence on k may occur.
Theorem 3.1. There exist constants C and C k such
for all (u, p) # W
1932 LEE, MANTEUFFEL, MCCORMICK, AND RUGE
Proof. Let (u, p) #
W(# . To prove the upper bound, note that the triangle
inequality yields
-1,d
# .
Integration by parts, the triangle and Cauchy-Schwarz inequalities, the trace theorem,
and the #-inequality give
0,d
#w# 1,k
0,d
u, #w # 1
#w# 1,k
0,d
#w# 1,k
0,d
#w# 1,k
0,d
Also, integration by parts and the boundary condition on H 1
give
#w# 1,k
#w# 1,k
#w# 1,k
#u# .
Finally,
#p# -1,d #p#
#p# 1,k .
The upper bound then follows from (3.1)-(3.4).
To prove the lower bound, let
W(# be the completion of
W(# in the #u#
norm. To prove first that
FOSLS FOR HELMHOLTZ 1933
on
W(#5 note that the triangle inequality used twice and (3.4) yield
# .
Hence, (3.5) would follow if it could be shown that
To this end, note that integration by parts producesk 2 #p#
Moreover, since 1
real, then the right-hand side of (3.8) must be real, which,
together with the triangle inequality, complex modulus arithmetic, and the Cauchy-Schwarz
and #-inequalities, yieldsk 2 #p#
so bound (3.5) holds on all of
W(#1 Now, to prove that
1934 LEE, MANTEUFFEL, MCCORMICK, AND RUGE
on
W(# by contradiction, suppose that (3.9) does not hold. Then there exists a
sequence
W(# such that
and
then the Rellich selection theorem (cf. [7])
implies that p j # p for some p in L
2(#3 which, with (3.5) and (3.10), shows that
is a Cauchy sequence in the #u#p# 1,k
has a limit {(u, p)} #
. Also, consider the scaled Helmholtz equation
in# ,
with the corresponding bilinear form
0,d .
Integration by parts gives
Convergence of p j to p in the # 1,k norm, the Cauchy-Schwarz and #-inequalities,
and (3.10) then imply that
lim
FOSLS FOR HELMHOLTZ 1935
or, equivalently, because of the solution uniqueness of (3.11). Finally, by (3.5),
it follows that
so that
which is a contradiction. Thus, (3.9) must hold. The theorem now follows by restricting
this bound to the subspace
The analogous result for F is essentially obtained using a Helmholtz decomposition
of u. Its proof is very similar to the proofs of Theorems 3.5 and 4.5 of [21], and,
thus, will not be included here.
Theorem 3.2. There exist constants C and C k such
for all (u, p) #
W(# .
These theorems show that the homogeneous parts of functionals F and G are
bounded (with constant C) and coercive (with constant 1
Ck ) in
W(# . Unfortunately,
the coercivity constant depends on the wavenumber k, which means that standard
numerical solution processes can degrade in performance as k increases. In the next
section, a subspace of
W(# is introduced on which we prove the coercivity constant
to be independent of k.
4. Uniform coercivity: Z
#2 A fundamental problem with the Helmholtz
equation is that approximate Fourier components with wavenumbers near k produce
relatively small residuals; that is, oscillatory components are in the near nullspace of
(2.1). This degradation of the usual sense of ellipticity is also present in system (2.3),
which means that the homogeneous parts of respective functionals G and F are not
equivalent to the square of the product norms [#] 2
# 1,k and [# 1,k uniformly
in k. However, as we will see, these troublesome functions can be treated specially in
the multigrid coarsening and finite element approximation processes. To guide the
design of these processes, this section will establish equivalence results on a subspace
of
W(# that excludes these near-nullspace components. To this end, note that
is a dense subspace of H 1
0,d (see [6]) that corresponds to the range of the inverse
Laplace operator for
in# ,
(that is,
N(#07 The operator in (4.1) is
self-adjoint, so there exists an L
with lim j#
in# ,
To isolate the near-nullspace components of H 1
0,d(#2 let # (0, 1) be given and define
the spaces
k or #2
Z
Here we use span to mean finite linear combinations and overbar to denote closure in
the H 1,k norm.
The subspace $Z_\delta$ of $W(\Omega)$, which is closed in the $\|\cdot\|_{1,k}$ norm, avoids the
oscillatory near-nullspace error components of system (2.3) that yield small residuals.
Hence, in this subspace, uniform equivalence statements can be made between the
least-squares norms corresponding to G and F (that is, the square roots of their
homogeneous parts) and the respective product norms [#] 2
Theorem 4.1. Let $\delta \in (0, 1)$ be given. Then there exists a constant $C$, independent of $\delta$ and $k$, such that the homogeneous part of $G$ and the square of the product norm are equivalent, with constants independent of $k$, for all $(u, p) \in Z_\delta$.
Proof. The upper bound follows from that of Theorem 3.1 and the fact that
Z
W(# .
The proof of the lower bound rests on the observation that Z
# is contained in
the closure of
Z 0,#
in the product norm [#] 2
# 1,k . A proof analogous to that of the upper bound
in Theorem 3.1 shows that G(u, p; 0) is continuous on Z
(# in this norm.
It is therefore sufficient to establish the lower bound for $(u, p) \in Z_{0,\delta}$. To this end,
consider the Helmholtz decomposition (cf. [12])
2(# , and # is divergence-free such that n
# e . Substituting this decomposition into G(u, p; 0) and using the L
orthogonality
property of this decomposition and the divergence-free property of # , we have
Also, integration by parts and the Cauchy-Schwarz inequality give
0,d
#w# 1,k
0,d
#w# 1,k
Now, the triangle inequality and (4.4) yield
which gives
But [-#] is positive-definite and self-adjoint on S # and p # S # , so it is easy to
verify the bound
1,k .
Hence, (4.3), (4.5), and (4.6) imply that
for any (u, p) # Z
0,#1 Finally, combining (4.7) with the triangle inequality and the
restriction # < 1, we obtain
(4.
The lower bound now follows from (4.7) and (4.8).
We again state the analogous result for F without proof.
Theorem 4.2. Let $\delta \in (0, 1)$ be given and $k \ge 1$. Then there exists a constant $C$, independent of $\delta$ and $k$, such that the analogous equivalence holds for the homogeneous part of $F$ for all $(u, p) \in Z_\delta$.
Remark 4.1. While the coercivity constant in Theorem 4.2 is uniform in k,
the dimension of Z # grows with k (for fixed #). However, these troublesome
components can still be specially treated in the multigrid coarsening process without
increasing the order of complexity, as we will show in the next section.
5. Nonstandard multigrid. Some of the basic concepts of this section were
taken from or inspired by the work of Achi Brandt (cf. [9] and Brandt and Livshitz [10]).
The L 2 functional F , when it applies, is usually more practical than the L 2
functional G, which involves the somewhat cumbersome negative norms. For this rea-
son, the remaining two sections will focus on minimizing F (u, p; f). A Rayleigh-Ritz
finite element method is used for the discretization process. Let T h be a triangulation
of
domain# into finite elements of maximal length
let W 1 be a finite-dimensional subspace of
W(# having the approximation property
for (v, q) # H
W(# . The discrete fine grid minimization problem is the
following.
. Find
Equivalently, defining the bilinear form
for (u, p), (v, q) #
W(# , the discrete fine grid variational problem is the following.
. Find
f
for all (v 1 ,
A standard multilevel scheme for solving either of these discrete problems is fairly
straightforward. Let
be a conforming sequence of coarsenings of triangulation T h ,
be a set of nested coarse grid subspaces of W 1 , the finest subspace, and
be a suitable (generally local) basis for W j , referred to here as level j. Given an initial
approximation level j, the level j relaxation sweep consists of the following
cycle:
. For each
chosen to minimize
(By #(b, c) is meant (#), where #
f) is a quadratic function in #, this local minimization procedure
is simple and very inexpensive (see [23] and [25]). In fact, this minimization
process is just a block Gauss-Seidel iteration with blocks determined by the choice of
(b recalling that L is the first-order operator corresponding to (2.3),
the block system is expressed as
where
# .
Now, given a fine grid approximation level 1, the level 2 coarse grid
problem is to find a correction
Having obtained this coarse grid correction, the fine grid approximation is corrected according to
Applying this process recursively yields a multilevel scheme in the usual way.
Unfortunately, if the coarser level basis functions are chosen in the natural way,
then this coarse grid correction process may not be very effective for the FOSLS formulation of the Helmholtz equation. Difficulties can arise with error components
for elements in Z # . Specifically, trouble arises from error components that have the
approximate form
sin ## e k(x cos #+y sin #) ,
where # [0, 2#). The main problem is that, on a grid where
of this type are highly oscillatory and, hence, poorly approximated by standard
coarse grids, yet they produce small residuals and are therefore poorly reduced by
relaxation. A compounding problem is that there are, in principle, infinitely many
of these components since # can be virtually any angle in [0, 2#). More precisely,
as k increases, so does the dimension of Z # , which consists of elements that are approximately
of the form in (5.6). Fortunately, these problems can be circumvented
by introducing exponential interpolation and multiple coarsening into the multilevel
scheme ([10] and [23]).
Exponential interpolation. Exponential interpolation is used to approximate
the troublesome oscillatory error components of approximate form (5.6). But the ray
$e^{ik(x \cos\theta + y \sin\theta)}$ is the problematic factor of this form, so a more useful characterization of these error components is given by
$$s(x, y)\, e^{ik(x \cos\theta + y \sin\theta)}, \qquad (5.7)$$
where s(x, y) is a smooth function satisfying appropriate boundary conditions.
j be the components of the product space W j , which is assumed
for concreteness to consist only of continuous piecewise bilinear functions.
(These spaces generally differ over the domain only at the boundary.) Also, let level j correspond
to a grid that is fine enough relative to k that components (5.7) can be
adequately approximated with bilinear functions, although only marginally so in the
sense that this approximation just begins to deteriorate on level j + 1. Now, exact
coarse level correction from level j is assumed to be e#ective at eliminating the
smooth components left by relaxation on level j - 1. At level j + 1, however, the ability of bilinear functions to approximate components (5.7) deteriorates enough to contaminate two-level performance between levels j and j + 1. Some heuristics show
that this occurs when 2 To approximate these exponential components
from levels j basis functions are simply rescaled by the ray
elements sin #) . To be more specific, fix # [0, 2#). For each
l,# denote entry number # of element number # of the standard
basis B l and, on level l, define the level j ray element by
l,# ,
l is the element of W #
that agrees with e k(x cos #+y sin #) d at the
level j node values; that is, E #
(d) is the level j nodal interpolant of e k(x cos #+y sin #) d.
This has the e#ect of altering the usual bilinear interpolation formula that relates
coarse to fine node values so that the rays are better approximated. In fact, this
ensures that the level j interpolant of the given ray is in the range of interpolation on
all coarser levels determined by the corresponding #. Figure 5.1 sketches the real part
of a one-dimensional level 3 basis function d 3 and a typical ray basis element E #
determined by an oscillatory level 1 ray (assuming for simplicity that
Note that exponential interpolation is not much more costly than bilinear interpolation: a given function d on level l > j is bilinearly interpolated up to level j, then simply rescaled by $e^{ik(x \cos\theta + y \sin\theta)}$ at the level j nodes.
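To make the two-step procedure above concrete, the following minimal sketch (Python/NumPy) interpolates a coarse-level nodal function bilinearly and then rescales it by the ray factor at the fine-level nodes. The array shapes and the bilinear_prolongate helper are assumptions standing in for the standard multigrid prolongation; this is an illustration of the idea, not the authors' implementation.

import numpy as np

def exponential_interpolate(d_coarse, k, theta, xs, ys, bilinear_prolongate):
    """Interpolate a coarse-level nodal function up to level j, then rescale
    by the ray e^{ik(x cos(theta) + y sin(theta))} at the level-j nodes.

    d_coarse            -- nodal values on the coarse level (2-D array)
    k                   -- wavenumber
    theta               -- ray angle in [0, 2*pi)
    xs, ys              -- 1-D arrays of level-j node coordinates
    bilinear_prolongate -- standard bilinear prolongation (assumed provided)
    """
    d_fine = bilinear_prolongate(d_coarse)        # ordinary bilinear interpolation
    X, Y = np.meshgrid(xs, ys, indexing="ij")     # level-j node grid (shape must match d_fine)
    ray = np.exp(1j * k * (X * np.cos(theta) + Y * np.sin(theta)))
    return ray * d_fine                           # exponential rescaling at the level-j nodes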
Multiple coarsening. Consider now the case that level j is where approximation
of components (5.7) just begins to deteriorate. It may be enough to introduce one ray
(e.g., with in the sense that an exact correction from level
might then adequately correct "smooth" level j error. However, it is perhaps more effective here to introduce a few coarse grids, one for each of, say, four rays (e.g., with uniformly distributed angles).
Fig. 5.1. One-dimensional real parts of a typical standard $d_3$ basis function and a ray basis element.
Critically, as each level is coarsened, the scale $k 2^{j-1} h$ doubles and
essentially twice as many rays become oscillatory. To be more specific, suppose that
are angles corresponding to two neighboring level j rays in the sense that
these two rays were used to coarsen level perhaps others, but none were
needed with angles between # 1 and # 2 . Now, on any of the level
will be one for each ray, that is, for each coarsening), the ray with angle #1 +#2will
be poorly reduced in relaxation and poorly approximated by any of the level j
grids. This means that twice as many rays must somehow be used in coarsening to
level which in turn will again require ray doubling to level
This can be accomplished e#ectively by introducing two separate ray coarsenings
on a level that may itself have come from ray coarsening. Suppose that level
was created from a level j ray with angle #, and that the task is now to coarsen level
using rays of angles #. One way to accomplish this is to introduce the level
ray #
j+2,# . But now #
oscillates on level j scale, so applying
an operator to this function will generally require level j evaluations. Alternatively,
these ray coarsenings can be e#ciently introduced using intermediate level
Suppose the level j rays #
j+1,# have already been introduced on level j + 1. Since
e k[x cos(#)+y
then letting E #
j denote the level j interpolant of the ray perturbation
e k{x[cos(#)-cos #]+y[sin(#)-sin #]}
means that the perturbed rays
can be computed directly from level appealing to level j. In particular,
since
is a level
j+2,# can be constructed by exponentially
interpolating the level Figure 5.2 for a sketch
of this process for creating a level 3 ray basis function from levels 1 and 2.) Hence,
given an operator M and assuming that
has been constructed for the ray basis of level j +1, then M#
j+2,# can be constructed
by exponentially weighting the M#
's:
M#
c d #
c M#
which is a weighting of elements of (5.9) with exponential weights c . Recursively
applying this exponential weighting process, analogous coarser level calculations can
be computed without appealing to level j.
having solved a coarse grid problem, its solution can be exponentially interpolated
up to level j using the angle perturbations in a recursive way. For example, if
j+2 is the solution of the level problem corresponding to perturbation
#, then s #
j+2 can be interpolated up to level j by first exponentially interpolating
up to level
then exponentially interpolating this result to level j using $\theta$. Since exponential interpolation consists of a standard bilinear interpolation and an exponential scaling, this successive interpolation process is easy to implement. Moreover, since the exponential interpolation operator is the adjoint of the exponential weighting operator, exponential weighting is also easy to implement.
Angles ($\theta$). The angles cannot be arbitrary, but must be chosen such that
an optimal number of exponential components are adequately approximated by the
perturbed ray elements. To describe this selection procedure, assume that rays are
first introduced on level j + 1. Then, to obtain adequate approximation, a simple but
tedious analysis [21] shows that the perturbation angle # must be chosen so that the
following approximation is within the level j discretization error:
j+2,# .
For example, for
2 , the level 3 angle
perturbations must be #, as depicted in Figure 5.4. The angle perturbations
are simply halved at each successively finer level.
Nonstandard multigrid scheme and computational cost. Exponential interpolation/weighting and multiple coarsening are essentially all that is required to
Fig. 5.2. Level 3 ray element determined from a perturbation corresponding to a perturbation angle of the level 2 ray element.
Fig. 5.3. Multiple coarsening (levels j - 1 through j + 2).
develop an effective nonstandard multigrid scheme for the Helmholtz equation. With
these components, ray basis elements are naturally introduced, and this nonstandard
multigrid scheme has the same form as the standard multigrid scheme defined by (5.2)
and (5.5). To see this, again assume that rays are first introduced on level
that all perturbation angles are determined before the multigrid cycling begins. Here,
we choose j to be the largest integer satisfying
since this guarantees essentially 8 grid points per wavelength in the piecewise bilinear
approximation on level j. Now, for levels l # j, the nested spaces
still consist of products of piecewise bilinear functions, but, for levels l #
must introduce the product ray element spaces
l := span #r
Fig. 5.4. Angle choices (uniformly distributed) for level 2 and level 3 rays.
where t l is the total number of rays introduced on level l, # r is the angle of the
rth ray, and #r
is the corresponding perturbed vector ray basis
introduced on level l. We first introduce four rays on level double that
number for each successively coarser level, so we have t . Note that
l
only if # s is a perturbation of angle # r (that is, only if As before, for
levels l # j, relaxation is defined by (5.2) with (b l , c l ) # the standard basis for W l and,
for levels l < j, the level (l problem is to find (u l+1 , p l+1 ) # W l+1
such that
(v l+1 ,q l+1 )#W l+1
where l ) is the level l approximation. However, for the ray levels l > j, relaxation
is again defined by (5.2); but now with (b l , c l ) # the product ray basis element #r
l,# ,
and, for levels l # j, the two level (l problems corresponding to each
r , are to find (u l+1 ,
(v l+1 ,q l+1 )#W #s
where
l is now the approximation on level l with angle # r . Note that
each level l with angle # r is corrected by two level (l (actually, four
when This gives the structure of the multigrid process. The scheduling
that determines how the various levels are visited is best described by studying the
schematic in Figure 5.5. We will continue to refer to this process as a V-cycle, although it looks very different from a standard V-cycle below level j.
For this nonstandard multigrid scheme (which can be interpreted as a multilevel
multiplicative Schwarz method; cf. [28]), the level j coarse grid problem involves
subspace corrections from the standard finite element space W j and the ray subspaces
l
l > j. Also, relaxation on the ray spaces is simply a block Gauss-Seidel iteration
Fig. 5.5. V-cycle for the nonstandard multigrid scheme.
(5.3) with (b l , c l ) #r
l,#
. Since L#r
l,#
can be computed efficiently using exponential
weighting, so can the elemental block matrices or stencils
l, , L#r
l,# .
In fact, these stencils can be computed and stored prior to any multigrid cycling so
that the same relaxation routine can be used for all levels and angles.
Now, assuming that the stencils have been computed and stored before any multi-grid
cycling occurs, the computational cost of such a multiple coarsening algorithm
is generally not excessive. In fact, one such multilevel V-cycle requires only several
fine grid work units (that is, the cost of a fine grid relaxation). For example, consider
the worst case, when four rays are introduced on level 2. (The need for more rays
would signal a level h so coarse relative to k that the discretization accuracy would
be almost meaningless; in fact, the need for four rays on level 2 is already a sign that
level 1 accuracy is quite poor.) Then, for level j # 2, there would be a total of 2 j
level j problems. Also, for each j, if it is assumed that the stepsize is doubled at each
coarsening, then the number of fine grid work units required for relaxation is 2 -2j+2 .
Omitting the cost of ray interpolation (which is substantial but proportional to the
number of points) and other more minor operations, the total number of fine level
work units for a nonstandard V (1, 0)-cycle is then
which is only about double the cost of a standard V (1, 0)-cycle.
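As a quick bookkeeping check of this estimate, the short sketch below (Python) sums the series described above: $2^j$ ray problems on level $j \ge 2$, each relaxation costing $2^{-2j+2}$ fine-grid work units, compared with one problem per level for a standard V(1,0)-cycle. The number of levels used in the sum is an assumption; the series converges quickly.

# Rough work-unit bookkeeping for the V(1,0)-cycle cost estimate in the text (illustrative only).
levels = range(1, 12)   # assumed number of levels

standard = sum(2.0 ** (-2 * j + 2) for j in levels)                          # one problem per level
multiple = sum((2 ** j if j >= 2 else 1) * 2.0 ** (-2 * j + 2) for j in levels)  # 2^j problems on level j >= 2

print(f"standard V(1,0)-cycle     ~ {standard:.2f} fine-grid work units")
print(f"multiple-coarsening cycle ~ {multiple:.2f} fine-grid work units")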
6. Numerical examples. For simplicity and to facilitate comparisons, the examples
treated here are only for the unit square. The lack of a Dirichlet boundary is
in fact a more severe test of the methodology in some sense. The results will include
multigrid performance for both the standard and the nonstandard schemes described
Table 6.1. Standard coarsening: V(1,0)-cycles.
Table 6.2. Standard coarsening: two-level cycles.
in the previous section. We also assess accuracy of the discretization based on ray-type
elements.
To facilitate assessment of the iterative solver, we first restrict ourselves to the homogeneous boundary value problem (2.2) with f = 0. The exact solution is of course zero, but this is useful for measuring asymptotic algebraic convergence factors of
stationary linear iterative methods such as multigrid methods since it avoids the limiting
stagnation caused by machine representation. The unit square is partitioned
into rectangles, and the functional F is minimized over the space of piecewise continuous
bilinear functions (possibly involving rays) that satisfy the boundary condition
on #. In most cases, the finest grid is chosen to satisfy
8 , and
rays are first introduced on level
8 (that is, typically when
The relaxation scheme is nodal Gauss-Seidel relaxation (that is, block Gauss-Seidel
where a block consists of all of the unknowns (u h
# ) at each node #). To assess the
worst-case convergence factors, twenty multigrid V(1,0)-cycles (that is, one relaxation
before and none after coarsening) were performed starting from a random initial guess
). The convergence measure # is then defined as
where the superscript in parentheses denotes the iteration number. Tables 6.1 and 6.2,
respectively, summarize the results of twenty V (1, 0)- and two-level cycles for standard
bilinear elements. Table 6.1 confirms that convergence degrades as a function of k
due to the troublesome oscillatory error components present on the coarser levels.
Table 6.2 shows that Poisson-like convergence factors are obtained when kh is not too large and, comparing it with Table 6.1, indicates that rays should be introduced as soon as kh > 1/2.
Tables
6.3 and 6.4 show that multigrid convergence factors improve substantially
by introducing ray functions. Here, "exact" rays (that is, rays constructed directly on the given level rather than by the recursive perturbation scheme) are intro-
Table 6.3. Exact ray: V(1,0)-cycles.
Table 6.4. Exact ray: two-level cycles.
Table 6.5. Approximate ray: V(1,0)-cycles.
Table 6.6. Approximate ray (delayed until kh > 1): V(1,0)-cycles.
duced when kh > 1using the algorithm described in the previous section. These
factors are fairly uniform in k and compare favorably to the two-level factors for
standard coarsening. Moreover, they degrade only slightly when the exact rays are
approximated using the recursive scheme described in the previous section (see Table
6.5).
To test the e#ect of introducing the rays late in the multigrid solver, we reran
these examples but delayed the use of rays until kh > 1. As expected, the rates
degrade noticeably, as Tables 6.6 and 6.7 show.
As a measure of discretization accuracy, we ran tests to see if our FOSLS scheme
has any pollution e#ects. We specified the exact solution
5 +y sin #) , which
solves (2.2) with except that the boundary condition is inhomogeneous:
5 +y sin #)
5 +y sin #) on #.
Table 6.7. Approximate ray (delayed until kh > 1): V(1,0)-cycles.
Table 6.8. Discretization error in the least-squares functional norm.
Table 6.9. Relative discretization error in the scaled $H^{1,k}$ norm.
We then used our FOSLS scheme to approximate $p$ and the scaled gradient $u$ (a multiple of $\nabla p / k$). To isolate possible pollution, we chose various h, then varied k so that kh is held constant (e.g., kh = 1/64). The errors were measured by comparing the
discrete approximation (obtained after applying several multigrid cycles) to the exact
solution in a relative sense in the least-squares and product H 1,k norms. Tables 6.8
and 6.9 essentially show constant discretization errors for constant kh, with about a
factor of 2 decrease as kh is halved, which is consistent with the assertion that our
approach is pollution-free and O(kh), respectively.
--R
New York
Estimates near the boundary for solutions of elliptic partial differential equations
A generalized finite element method for solving the Helmholtz equation in two dimensions with minimal pollution
The partition of unity method
Is the pollution effect of the FEM avoidable for the Helmholtz equation considering high wave numbers?
Expansions in Eigenfunctions of Selfadjoint Operators
Finite Elements Theory
A least-squares approach based on a discrete minus one inner product for first order systems
Stages in developing multigrid solutions
Finite element method for the solution of Maxwell's equations in multiple media
Inverse Acoustic and Electromagnetic Scattering Theory
Partial Differential Equations
On numerical methods for acoustic problems
Finite Element Methods for Navier-Stokes Equations
Finite element solution of the Helmholtz equation with high wave number.
Multilevel first-order system least-squares (FOSLS) for Helmholtz equations
Multilevel Projection Methods for Partial Differential Equations
A mixed method for approximating Maxwell's equations
On electric and magnetic problems for vector fields in anisotropic nonhomogeneous media
Iterative methods by space decomposition and subspace correction
--TR
--CTR
Jan Mandel , Mirela O. Popa, Iterative solvers for coupled fluid-solid scattering, Applied Numerical Mathematics, v.54 n.2, p.194-207, July 2005
Jan Mandel, An iterative substructuring method for coupled fluid-solid acoustic problems, Journal of Computational Physics, v.177 n.1, p.95-116, March 20, 2002 | first-order system least-squares;helmholtz equation;nonstandard multigrid |
351604 | Robust Real-Time Periodic Motion Detection, Analysis, and Applications. | AbstractWe describe new techniques to detect and analyze periodic motion as seen from both a static and a moving camera. By tracking objects of interest, we compute an object's self-similarity as it evolves in time. For periodic motion, the self-similarity measure is also periodic and we apply Time-Frequency analysis to detect and characterize the periodic motion. The periodicity is also analyzed robustly using the 2D lattice structures inherent in similarity matrices. A real-time system has been implemented to track and classify objects using periodicity. Examples of object classification (people, running dogs, vehicles), person counting, and nonstationary periodicity are provided. | Introduction
Object motions that repeat are common in both nature and the man-made environment in which we
live. Perhaps the most prevalent periodic motions are the ambulatory motions made by humans and
animals in their gaits (commonly referred to as "biological motion" [16]). Other examples include
a person walking, a waving hand, a rotating wheel, ocean waves, and a flying bird. Knowing that
an object's motion is periodic is a strong cue for object and action recognition [16, 11]. In addition,
periodic motion can also aid in tracking objects. Furthermore, the periodic motion of people can be
used to recognize individuals [20].
1.1 Motivation
Our work is motivated by the ability of animals and insects to utilize oscillatory motion for action
and object recognition and navigation. There is behavioral evidence that pigeons are well adapted to
recognize the types of oscillatory movements that represent components of the motor behavior shown
by many living organisms [9]. There is also evidence that certain insects use oscillatory motion for
navigational purposes (hovering above flowers during feeding) [17]. Humans can recognize biological
motion from viewing lights placed on the joints of moving people [16]. Humans can also recognize
periodic movement of image sequences at very low resolutions, even when point correspondences are
not possible. For example, Figure 1 shows such a sequence. The effective resolution of this sequence
is 9x15 pixels (it was created by resampling a 140x218 (8-bit, 30fps) image sequence to 9x15 and back
to 140x218 using bicubic interpolation). In this sequence, note the similarity between frames 0 and 15.
We will use image similarity to detect and analyze periodic motion.
Figure 1. Low resolution image sequences of a periodic motion (a person walking on a treadmill). The effective resolution is 9x15 pixels.
1.2 Periodicity and motion symmetries
We define the motion of a point $\vec{X}(t)$, at time $t$, periodic if it repeats itself with a constant period $p$, i.e.:
$$\vec{X}(t + p) = \vec{X}(t) + \vec{T}(t), \qquad (1)$$
where $\vec{T}(t)$ is a translation of the point. The period $p$ is the smallest $p > 0$ that satisfies (1); the
frequency of the motion is 1/p. If p is not constant, then the motion is cyclic. In this work, we analyze
locally (in time) periodic motion, which approximates many natural forms of cyclic motion.
Periodic motion can also be defined in terms of symmetry. Informally, spatial symmetry is self-similarity
under a class of transformations, usually the group of Euclidean transformations in the plane
(translations, rotations, and reflections)[36]. Periodic motion has a temporal (and sometimes spatial)
symmetry. For example, Figures 3(a), 4(a), 5(a), and 6(a) show four simple dynamic systems (pendu-
lums). For each system, the motion is such that $\vec{X}(t + p) = \vec{X}(t)$ for a point $\vec{X}(t)$ on the pendulum.
However, each system exhibits qualitatively different types of periodic motion. Figure 5(a) is a simple
planar pendulum with a fixed rod under a gravitational field. The motion of this system gives it a temporal
mirror symmetry along the shown vertical axis. The system in Figure 4(a) is a similar pendulum, but
with a sufficient initial velocity such that it always travels in one angular direction. The motion of this
system gives it a temporal mirror symmetry along the shown vertical axis. The system in Figure 3(a)
is a similar pendulum, but in zero gravity; note it has an infinite number of axes of symmetry that pass
through the pivot of the pendulum. The system in Figure 6(a) consists of a pair of uncoupled and 180°
out of phase pendulums, a system which is often used to model the upper leg motion of humans [24].
This system has a temporal mirror symmetry along the shown vertical axis, as well as an approximate
spatial mirror symmetry along the same vertical axis (it is approximate because the pendulums are not
identical).
The above examples illustrate that while eq. 1 can be used to detect periodicity, it is not sufficient
to classify different types of periodic motion. For classification purposes, it is necessary to exploit the
dynamics of the system of interest, which we do in Section 3.4.
1.3 Assumptions
In this work, we make the following assumptions: (1) the orientation and apparent size of the segmented
objects do not change significantly during several periods (or do so periodically); (2) the frame
rate is sufficiently fast for capturing the periodic motion (at least double the highest frequency in the
periodic motion).
Contributions
The main contribution of this work is the introduction of novel techniques to robustly detect and
analyze periodic motion. We have demonstrated these techniques with video of the quality typically
found in both ground and airborne surveillance systems. Of particular interest is the utilization of the
symmetries of motion exhibited in nature, which we use for object classification. We also provide
several other novel applications of periodic motion, all related to automating a surveillance system.
Organization of the Paper
In Section 2, we review and critique the related work. The methodology is described in Section 3.
Examples and applications of periodic motion, particularly for the automated surveillance domain, are
given in Section 4. A real-time implementation of the methods is discussed in Section 5, followed by a
summary of the paper in Section 6.
Related Work
There has been recent interest in segmenting and analyzing periodic or cyclic motion. Existing methods
can be categorized as those requiring point correspondences [33, 35]; those analyzing periodicities
of pixels [21, 30]; those analyzing features of periodic motion [27, 10, 14]; and those analyzing the
periodicities of object similarities [6, 7, 33]. Related work has been done in analyzing the rigidity of
moving objects [34, 25]. Below we review and critique each of these methods. Due to some similarities
with the presented method, [33, 21, 30] are described in more detail than the other related work.
Seitz and Dyer [33] compute a temporal correlation plot for repeating motions using different image
comparison functions, dA and d I . The affine comparison function dA allows for view-invariant analysis
of image motion, but requires point correspondences (which are achieved by tracking reflectors on
the analyzed objects). The image comparison function d I computes the sum of absolute differences
between images. However, the objects are not tracked, and thus must have non-translational periodic
motion in order for periodic motion to be detected. Cyclic motion is analyzed by computing the period-
trace, which are curves that are fit to the surface d. Snakes are used to fit these curves, which assumes
that d is well-behaved near zeros so that near-matching configurations show up as local minima of d.
The K-S test is utilized to classify periodic and non-periodic motion. The samples used in the K-S
test are the correlation matrix M and the hypothesized period-trace PT . The null hypothesis is that
the motion is not periodic, i.e., the cumulative distribution functions of M and PT are not significantly
different. The K-S test rejects the null hypothesis when periodic motion is present. However, it also
rejects the null hypothesis if M is non-stationary. For example, when M has a trend, the cumulative
distribution function of M and PT can be significantly different, resulting in classifying the motion as
periodic (even if no periodic motion present). This can occur if the viewpoint of the object or lighting
changes significantly during evaluation of M (see Figure 19(a)). The basic weakness of this method
is it uses a one-sided hypothesis test which assumes stationarity. A stronger test is needed to detect
periodicity in non-stationary data, which we provide in Section 3.4.
Polana and Nelson [30] recognize periodic motions in an image sequence by first aligning the frames
with respect to the centroid of an object so that the object remains stationary in time. Reference curves,
which are lines parallel to the trajectory of the motion flow centroid, are extracted and the spectral
power is estimated for the image signals along these curves. The periodicity measure of each reference
curve is defined as the normalized difference between the sum of the spectral energy at the highest
amplitude frequency and its multiples, and the sum of the energy at the frequencies half way between.
The authors of [35] analyze the periodic motion of a person walking parallel to the image plane. Both
synthetic and real walking sequences are analyzed. For the real images, point correspondences were
achieved by manually tracking the joints of the body. Periodicity was detected using Fourier analysis
of the smoothed spatio-temporal curvature function of the trajectories created by specific points on the
body as it performs periodic motion. A motion based recognition application is described, in which one
complete cycle is stored as a model, and a matching process is performed using one cycle of an input
trajectory.
Allmen [1] used spatio-temporal flow curves of edge image sequences (with no background edges
present) to analyze cyclic motion. Repeating patterns in the ST flow curves are detected using curvature
scale-space. A potential problem with this technique is that the curvature of the ST flow curves is
sensitive to noise. Such a technique would likely fail on very noisy sequences, such as that shown in Figure 15.
Niyogi and Adelson [27] analyze human gait by first segmenting a person walking parallel to the
image plane using background subtraction. A spatio-temporal surface is fit to the XYT pattern created
by the walking person. This surface is approximately periodic, and reflects the periodicity of the gait.
Related work [26] used this surface (extracted differently) for gait recognition.
Liu and Picard [21] assume a static camera and use background subtraction to segment motion.
Foreground objects are tracked, and their path is fit to a line using a Hough transform (all examples
have motion parallel to the image plane). The power spectrum of the temporal histories of each pixel is
then analyzed using Fourier analysis, and the harmonic energy cause by periodic motion is estimated.
An implicit assumption in [21] is that the background is homogeneous (a sufficiently non-homogeneous
background will swamp the harmonic energy). Our work differs from [21] and [30] in that we analyze
the periodicities of the image similarities of large areas of an object, not just individual pixels aligned
with an object. Because of this difference (and the fact that we use a smooth image similarity metric)
our Fourier analysis is much simpler, since the signals we analyze do not have significant harmonics of
the fundamental frequency. The harmonics in [21] and [30] are due to the large discontinuities in the
signal of a single pixel; our self-similarity metric does not have such discontinuities.
Fujiyoshi and Lipton [10] segment moving objects from a static camera and extract the object bound-
aries. From the object boundary, a "star" skeleton is produced, which is then Fourier analyzed for
periodic motion. This method requires accurate motion segmentation, which is not always possible
(e.g., see Figure 16). Also, objects must be segmented individually; no partial occlusions are allowed
(as shown in Figure 21(a)). In addition, since only the boundary of the object is analyzed for periodic
change (and not the interior of the object), some periodic motions may not be detected (e.g., a textured
rolling ball, or a person walking directly toward the camera).
Selinger and Wixson [34] track objects and compute self-similarities of that object. A simple heuristic
using the peaks of the 1-D similarity measure is used to classify rigid and non-rigid moving objects,
which in our tests fails to classify correctly for noisy images (e.g., the sequence in Figure 15).
Heisele and Wohler [14] recognize pedestrians using color images from a moving camera. The
images are segmented using a color/position feature space, and the resulting clusters are tracked. A
quadratic polynomial classifier extracts those clusters which represent the legs of pedestrians. The
clusters are then classified by a time delay neural network, with spatio-temporal receptive fields. This
method requires accurate object segmentation. A 3-CCD color camera was used to facilitate the color
clustering, and pedestrians are approximately 100 pixels in height. These image qualities and resolutions
are typically not found in surveillance applications.
There has also been some work done in classifying periodic motion. Polana and Nelson [30] use
the dominant frequency of the detected periodicity to determine the temporal scale of the motion. A
temporally scaled XYT template, where XY is a feature based on optical flow, is used to match the
given motion. The periodic motions include walking, running, swinging, jumping, skiing, jumping
jacks, and a toy frog. This technique is view dependent, and has not been demonstrated to generalize
across different subjects and viewing conditions. Also, since optical flow is used, it will be highly
susceptible to image noise.
Cohen et al. [5] classify oscillatory gestures of a moving light by modeling the gestures as simple
one-dimensional ordinary differential equations. Six classes of gestures are considered (all circular
and linear paths). This technique requires point correspondences, and has not been shown to work on
arbitrary oscillatory motions.
Area-based techniques, such as the present method, have several advantages over pixel-based tech-
niques, such as [30, 21]. Specifically, area-based techniques allow the analysis of the dynamics of the
entire object, which is not achievable by pixel based techniques. This allows for classification of different
types of periodic motion, such as those given in Section 4.1 and Section 4.4. In addition, area-based
techniques allow detection and analysis of periodic motion that is not parallel to the image plane. All
examples given in [30, 21] have motion parallel to the image plane, which ensures there is sufficient
periodic pixel variation for the techniques to work. However, since area-based methods compute object
similarities which span many pixels, the individual pixel variations do not have to be large. For exam-
ple, our method can detect periodic motion from video sequences of people walking directly toward
the camera. A related benefit is that area-based techniques allow the analysis of low S/N images, such
as that shown in Figure 16, since the S/N of the object similarity measure (such as (5)) is higher than
that of a single pixel.
The algorithm for periodicity detection and analysis consists of two parts. First, we segment the
motion and track objects in the foreground. We then align each object along the temporal axis (using
the object's tracking results) and compute the object's self-similarity as it evolves in time. For periodic
motions, the self-similarity metric is periodic, and we apply Time-Frequency analysis to detect and
characterize the periodicity. The periodicity is also analyzed robustly using the 2-D lattice structures
inherent in similarity matrices.
3.1 Motion Segmentation and Tracking
Given an image sequence $I_t$ from a moving camera, we segment regions of independent motion. The images $I_t$ are first Gaussian filtered to reduce noise, resulting in $I'_t$. The image $I'_t$ is then stabilized [12] with respect to image $I'_{t-\tau}$, resulting in $V_{t,t-\tau}$. The images $V_{t,t-\tau}$ and $I'_t$ are differenced and thresholded to detect regions of motion, resulting in a binary motion image:
$$M_{t,\tau} = \left( |V_{t,t-\tau} - I'_t| > T_M \right), \qquad (2)$$
where $T_M$ is a threshold. In order to eliminate false motion at occlusion boundaries (and help filter spurious noise), the motion images $M_{t,\tau}$ and $M_{t,-\tau}$ are logically and'ed together:
$$M_t = M_{t,\tau} \wedge M_{t,-\tau}. \qquad (3)$$
An example of $M_t$ is shown in Figure 21(b). Note that for large values of $\tau$, motion parallax will cause false motion in $M_t$. In our examples (for a moving camera), $\tau = 300$ ms was used.
Note that in many surveillance applications, images are acquired using a camera with automatic gain,
shutter, and exposure. In these cases, normalizing the image mean before comparing images $I_{t_1}$ and $I_{t_2}$ will help minimize false motion due to a change in the gain, shutter, or exposure.
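A minimal sketch of this differencing step is given below (Python/NumPy with SciPy). The stabilization is assumed to have been done elsewhere (the three frames are assumed to be in a common coordinate frame), and the threshold value is illustrative; it is a sketch of (2) and (3), not the system's implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def motion_mask(I_prev, I_curr, I_next, T_M=15.0, sigma=1.0):
    """Binary motion image M_t from three stabilized grayscale frames at t-tau, t, t+tau."""
    # Gaussian filter to reduce noise, and equalize means to reduce the effect
    # of automatic gain/shutter/exposure changes (see the note above).
    frames = [gaussian_filter(f.astype(np.float64), sigma) for f in (I_prev, I_curr, I_next)]
    mean_c = frames[1].mean()
    frames = [f - f.mean() + mean_c for f in frames]

    M_fwd = np.abs(frames[1] - frames[0]) > T_M   # M_{t,tau}
    M_bwd = np.abs(frames[1] - frames[2]) > T_M   # M_{t,-tau}
    return M_fwd & M_bwd                          # M_t = M_{t,tau} AND M_{t,-tau}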
A morphological open operation is performed on $M_t$ (yielding $M'_t$), which reduces motion due to image noise. The connected components of $M'_t$ are computed, and small components are eliminated
(further reducing image noise). The connected components which are spatially similar (in distance)
are then merged, and the merged connected components are added to a list of objects O t to be tracked.
An object has the following attributes: area, centroid, bounding box, velocity, ID number, and age (in
frames). Objects in O t and O t+k , k > 0, are corresponded using spatial and temporal coherency.
It should be noted that the tracker is not required to be very accurate, as the self-similarity metric we
use is robust and can handle tracking errors of several pixels (as measured in our examples).
Also note that when the background of a tracked object is sufficiently homogeneous, and the tracked
object does not change size significantly during several periods, then accurate object segmentation is not
necessary. In these cases, we can allow O t to include both the foreground and background. Examples
of such backgrounds include grassy fields, dirt roads, or parking lots. An example of such a sequence
is given in Figure 15.
3.2 Periodicity Detection and Analysis
The output of the motion segmentation and tracking algorithm is a set of foreground objects, each
of which has a centroid and size. To detect periodicity for each object, we first align the segmented
object (for each frame) using the object's centroid, and resize the objects (using a Mitchell filter [32])
so that they all have the same dimensions. The scaling is required to account for apparent size change
due to change in distance from the object to the camera. Because the object segmentation can be noisy,
the object dimensions are estimated using the median of N frames (where N is the number of frames
we analyze the object over). The object $O_t$'s self-similarity is then computed at times $t_1$ and $t_2$. While many image similarity metrics can be defined (e.g., normalized cross-correlation, Hausdorff distance [15], color indexing [2]), perhaps the simplest is absolute correlation:
$$S(t_1, t_2) = \sum_{(x,y) \in B_{t_1}} |O_{t_1}(x, y) - O_{t_2}(x, y)|, \qquad (4)$$
where $B_{t_1}$ is the bounding box of object $O_{t_1}$. In order to account for tracking errors, the minimal $S$ is found by translating over a small search radius $r$:
$$S'(t_1, t_2) = \min_{|dx|,|dy| < r} \sum_{(x,y) \in B_{t_1}} |O_{t_1}(x + dx, y + dy) - O_{t_2}(x, y)|. \qquad (5)$$
For periodic motions, $S'$ will also be periodic. For example, Figure 8(a) shows a plot of $S'$ for all
combinations of t 1 and t 2 for a walking sequence (the similarity values have been linearly scaled to
the grayscale intensity range [0,255]; dark regions show more similarity). Note that a similarity plot
should be symmetric along the main diagonal; however, if substantial image scaling is required, this
will not be the case. In addition, there will always be a dark line on the main diagonal (since an object
is similar to itself at any given time), and periodic motions will have dark lines (or curves if the period
is not constant) parallel to the diagonal.
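The following sketch (Python/NumPy) computes the self-similarity matrix of (5) with a small translational search radius. The chips argument is a hypothetical list of tracked, centroid-aligned, resized object images; the wrap-around shift is a simplification of a true translation.

import numpy as np

def similarity_matrix(chips, r=2):
    """S'(t1, t2): minimal absolute difference over a small search radius r.

    chips -- list/array of N object images, already aligned on the object's
             centroid and resized to a common (H, W) size.
    """
    chips = np.asarray(chips, dtype=np.float64)
    N = chips.shape[0]
    S = np.zeros((N, N))
    for t1 in range(N):
        for t2 in range(N):
            best = np.inf
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    # np.roll wraps at the borders; a padded shift would be more faithful.
                    shifted = np.roll(np.roll(chips[t1], dx, axis=0), dy, axis=1)
                    best = min(best, np.abs(shifted - chips[t2]).sum())
            S[t1, t2] = best
    return S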
To determine if an object exhibits periodicity, we estimate the 1-D power spectrum of $S'(t_1, t_2)$ for a fixed $t_1$ and all values of $t_2$ (i.e., the columns of $S'$). In estimating the spectral power, the columns of $S'$ are linearly detrended and a Hanning filter is applied. A more accurate spectrum is estimated by averaging the spectra of multiple $t_1$'s [31] to get a final power estimate $P(f_i)$, where $f_i$ is the frequency. Periodic motion will show up as peaks in this spectrum at the motion's fundamental frequencies. A peak at frequency $f_i$ is significant if
$$P(f_i) - \bar{P} > K \sigma_P, \qquad (6)$$
where $K$ is a threshold value (typically 3), $\bar{P}$ is the mean of $P$, and $\sigma_P$ is the standard deviation of $P$.
Note that multiple peaks can be significant, as we will see in the examples.
In the above test, we assume that the period is locally constant. The locality is made precise using
Time-Frequency analysis given in Section 3.3. We also assume that there are only linear amplitude
modulations to the columns of S # (so that linear detrending is sufficient to make the data stationary),
and that any additive noise to $S'$ is Gaussian. Both of these assumptions are relaxed in the method given
in Section 3.4.
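Before turning to Fisher's test, here is a minimal sketch of the averaged-periodogram test of (6) (Python/NumPy). The frame rate and threshold K are illustrative placeholders.

import numpy as np

def significant_frequencies(S, K=3.0, fps=30.0):
    """Detrend and Hanning-window each column of S', average the periodograms,
    and return frequencies whose power exceeds the mean by K standard deviations."""
    S = np.asarray(S, dtype=np.float64)
    N = S.shape[0]
    n = np.arange(N)
    window = np.hanning(N)

    P = np.zeros(N // 2 + 1)
    for col in S.T:
        a, b = np.polyfit(n, col, 1)           # linear detrend
        x = (col - (a * n + b)) * window
        P += np.abs(np.fft.rfft(x)) ** 2       # periodogram of this column
    P /= S.shape[1]                            # average over the t1's

    freqs = np.fft.rfftfreq(N, d=1.0 / fps)
    thresh = P.mean() + K * P.std()
    return [(f, p) for f, p in zip(freqs, P) if p > thresh and f > 0]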
3.2.1 Fisher's Test
If we assume that the columns of $S'$ are stationary and contaminated with white noise, and that any periodicity present consists of a single fundamental frequency, then we can apply the well-known Fisher's test [29, 3]. Fisher's test will reject the null hypothesis (that $S'$ is only white noise) if $P(f_i)$ is substantially larger than the average value. Assuming $N$ is even, let $q = N/2 - 1$ and
$$E_q = \frac{\max_{1 \le i \le q} P(f_i)}{\sum_{j=1}^{q} P(f_j)}. \qquad (7)$$
To apply the test, we compute the realized value $x$ of $E_q$ from $S'$, and then compute the probability $\Pr(E_q > x \mid H_0)$. If this probability is less than $\alpha$, then we reject the null hypothesis at level $\alpha$ (in practice, a small significance level is used). This test is optimal if there exists a single periodic component at a
Fourier frequency f i in white noise stationary data [29]. To test for periodicities containing multiple
frequencies, Siegel's test [29] can be applied.
In practice, Fisher's test, like the K-S test used by [33], works well if the periodic data is stationary
with white noise. However, in most of our non-periodic test data (e.g., Figure 19(a)), which is not
stationary, both Fisher's and the K-S test yield false periodicities with high confidence.
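The following sketch (Python) implements the standard form of Fisher's test, the g statistic and its exact null distribution; it is a stand-in for the $E_q$ statistic above rather than a transcription of the paper's formulation, and is intended for moderate series lengths.

import numpy as np
from math import comb, floor

def fisher_g_test(x):
    """Standard Fisher test for a single periodicity in a white-noise series.

    Returns (g, p_value), where g is the largest periodogram ordinate divided
    by the sum of all ordinates, and p_value is its exact null tail probability.
    """
    x = np.asarray(x, dtype=np.float64)
    x = x - x.mean()
    P = np.abs(np.fft.rfft(x)) ** 2
    P = P[1:len(x) // 2]                 # drop the DC and Nyquist ordinates
    q = len(P)
    g = P.max() / P.sum()

    # Exact tail probability: Pr(g > x) = sum_j (-1)^(j-1) C(q, j) (1 - j x)^(q-1).
    p = sum((-1) ** (j - 1) * comb(q, j) * (1 - j * g) ** (q - 1)
            for j in range(1, floor(1 / g) + 1))
    return g, min(max(p, 0.0), 1.0)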
3.2.2 Recurrence Matrices
It is interesting to note that $S'$ is a recurrence matrix [8, 4], without using time-delayed embedded
dimensions. Recurrence matrices are a qualitative tool used to perform time series analysis of non-linear
dynamical systems (both periodic and non-periodic). Recurrence matrices make no assumptions
on the stationarity of the data, and do not require many data points to be used (a few cycles of periodic
data is sufficient). The input for a recurrence matrix is a multi-dimensional temporally sampled signal.
In our use, the input signal is the tracked object image sequence O t , and the distance measure is image
similarity. Given a recurrence matrix, the initial trajectory $\vec{X}(t)$ of a point on an object can be recovered
up to an isometry [23]. Therefore, the recurrence plot encodes the spatiotemporal dynamics of the
moving object. The similarity plot encodes a projection of the spatiotemporal dynamics of the moving
object.
3.3 Time-Frequency Analysis
For stationary periodicity (i.e., periodicity with statistics that don't change with time), the above analysis
is sufficient. However, for non-stationary periodicity, Fourier analysis is not appropriate. Instead,
we use Time-Frequency analysis and the Short-Time Fourier Transform (STFT) [28]:
$$\mathrm{STFT}_x(t, f) = \int_{-\infty}^{\infty} x(u)\, h(u - t)\, e^{-j 2 \pi f u}\, du,$$
where $h(u)$ is a short-time analysis window, and $x(u)$ is the signal to analyze ($S'$ in our case). The
short-time analysis window effectively suppresses the signal x(u) outside a neighborhood around the
analysis time point t. Therefore, the STFT is a "local" spectrum of the signal x(u) around t.
We use a Hanning windowing function as the short-time analysis window. The window length should
be chosen to be long enough to achieve a good power spectrum estimate, but short enough to capture
a local change in the periodicity. In practice, a window length equal to several periods works well for
typical human motions. An example of non-stationary periodicity is given in Section 4.7.
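A minimal sketch of this sliding-window analysis (Python/NumPy); the window length and hop size are illustrative and should be set to several expected periods, as discussed above.

import numpy as np

def stft_magnitude(x, win_len=64, hop=8):
    """Short-Time Fourier Transform magnitude of a 1-D signal (e.g., a column of S').

    A Hanning window of win_len samples slides along x with step `hop`; each
    windowed, mean-removed segment is Fourier transformed.
    Returns an array of shape (num_frames, win_len // 2 + 1).
    """
    x = np.asarray(x, dtype=np.float64)
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len]
        frames.append(np.abs(np.fft.rfft((seg - seg.mean()) * window)))
    return np.array(frames)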
3.4 Robust Periodicity Analysis
In Sections 3.2 and 3.3, we used a hypothesis test on the 1-D power spectrum of $S'$ to determine if it contained any periodic motion. The null hypothesis is that there is only white noise in the spectrum,
which is rejected by eq. 6 if significant periodic motion is present. However, the null hypothesis can
also be rejected if $S'$ contains significant non-Gaussian noise, or if the period is locally non-constant, or if $S'$ is amplitude modulated non-linearly. We seek a technique that minimizes the number of false
periodicities, while maximizing the number of true periodicities. Toward this end, we devise a test
that performs well when the assumptions stated in Section 3.2 are satisfied, but does not yield false
periodicities when these assumptions are violated.
An alternative technique to Fourier analysis of the 1-D columns of $S'$ is to analyze the 2-D power spectrum of $S'$. However, as noted in [19], the autocorrelation of $S'$ for regular textures has more prominent peaks than those in the 2-D Fourier spectrum. Let $A$ be the normalized autocorrelation of the similarity matrix $S'$:
$$A(d_x, d_y) = \frac{\sum_{(x,y) \in R} \left[ S'(x,y) - \bar{S'}_R \right] \left[ S'(x + d_x, y + d_y) - \bar{S'}_{R_L} \right]}{\sqrt{\sum_{(x,y) \in R} \left[ S'(x,y) - \bar{S'}_R \right]^2 \, \sum_{(x,y) \in R} \left[ S'(x + d_x, y + d_y) - \bar{S'}_{R_L} \right]^2}},$$
where $\bar{S'}_R$ is the mean of $S'$ over the region $R$, $\bar{S'}_{R_L}$ is the mean of $S'$ over the region $R$ shifted by the lag $(d_x, d_y)$, and the regions $R$ and $R_L$ cover $S'$ and the lagged $S'$. If $S'$ is periodic, then $A$ will have
peaks regularly spaced in a planar lattice M d , where d is the distance between the lattice points. In our
examples, we will consider two lattices, a square lattice M S,d (Figure 2(a)), and a 45 # rotated square
lattice $M_{R,d}$ (Figure 2(b)). The peaks $P$ in $A$ are matched to $M_d$ using a match error measure $e$, where $P_i$ is the closest peak to the lattice point $M_{d,i}$, $T_D$ ($T_D < d/2$) is the maximum distance $P_i$ can deviate from $M_{d,i}$, and a second threshold is the minimum autocorrelation value that the matched peak may have. $M_d$ matches $P$ if all of the following are satisfied: the match error $e$ (minimized over $d$) is below a match threshold $T_e$; $d$ lies in the range $[d_1, d_2]$; and at least $T_M$ of the points in $M_d$ are matched. In practice, we let $T_e = 0.25$. The range
determines the possible range of the expected period, with the requirement 0 < d 1 < d 2 < L,
where $L$ is the maximum lag used in computing $A$. The number of points in $M_R$ and $M_S$ can be based on the period of the expected periodicity and the frame rate of the camera. The period is determined by the lattice spacing $d$ and the sampling interval $\Delta t$ (approximately 33 ms for NTSC video).
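A sketch of the lattice matching (Python/NumPy). The lattice construction, the error measure (a plain average distance normalized by the spacing), and the thresholds are illustrative simplifications of the criteria above; in particular, this version requires every lattice point to be matched rather than a minimum number of points.

import numpy as np

def lattice_points(d, kind, extent):
    """Square ('S') or 45-degree rotated square ('R') lattice points with
    nearest-neighbor spacing d, inside [-extent, extent]^2 (origin excluded)."""
    pts, n = [], int(extent // d) + 2
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if i == 0 and j == 0:
                continue
            if kind == 'S':
                x, y = i * d, j * d
            else:                                 # rotate the square lattice by 45 degrees
                x, y = (i - j) * d / np.sqrt(2), (i + j) * d / np.sqrt(2)
            if abs(x) <= extent and abs(y) <= extent:
                pts.append((x, y))
    return np.array(pts)

def lattice_match_error(peaks, d, kind, extent, T_D=None):
    """Average distance (normalized by d) from each lattice point to its nearest
    detected peak; returns np.inf if any lattice point has no peak within T_D."""
    if T_D is None:
        T_D = d / 4.0
    lat = lattice_points(d, kind, extent)
    peaks = np.asarray(peaks, dtype=np.float64)
    errs = []
    for p in lat:
        dist = np.sqrt(((peaks - p) ** 2).sum(axis=1)).min()
        if dist > T_D:
            return np.inf
        errs.append(dist)
    return float(np.mean(errs)) / d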
Peaks in $A$ are determined by first smoothing $A$ with a Gaussian filter $G$, yielding $A'$. $A'(i, j)$ is a peak if $A'(i, j)$ is a strict maximum in a local neighborhood with radius $N$. In our examples, $G$ is a $5 \times 5$ filter. Lin et al. [19] provide an automatic method for determining the optimal size of $G$.
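The peak detection step can be sketched as follows (Python with SciPy); the smoothing width, neighborhood radius, and minimum value are illustrative, and plateaus count as maxima in this simplification.

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def autocorrelation_peaks(A, sigma=1.0, radius=5, min_value=0.1):
    """Detect peaks of an autocorrelation surface A: smooth with a Gaussian,
    then keep local maxima within `radius` that exceed `min_value`.
    Returns an array of (row, col) peak locations."""
    A_s = gaussian_filter(np.asarray(A, dtype=np.float64), sigma)
    local_max = (A_s == maximum_filter(A_s, size=2 * radius + 1))
    rows, cols = np.nonzero(local_max & (A_s > min_value))
    return np.stack([rows, cols], axis=1)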
Figure 2. Lattices used to match the peaks of the autocorrelation of $S'$. (a) Square lattice; (b) 45° rotated square lattice.
4 Examples and Applications
4.1 Synthetic Data
In this section, we demonstrate the methods on synthetic data examples. We generated images of a periodic planar pendulum, with different initial conditions, parameters, and configurations. Note that the equation of motion for a simple planar pendulum is
$$\ddot{\theta} = -\frac{g}{L} \sin \theta,$$
where $g$ is the gravitational acceleration, $L$ is the length of the rigid rod, and $\theta$ is the angle between the pendulum rod and the vertical axis [22]. In the first example (see Figure 3(a)), we set $g = 0$ so that the pendulum has a circular motion with a constant angular velocity. The diagonal lines in the similarity plot (Figure 3(b)) are formed due to the self-similarity of the pendulum at every complete cycle. The autocorrelation (Figure 3(c)) has no peaks.
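For reference, a minimal sketch of how such synthetic angle sequences can be generated (Python/NumPy). The integrator and parameters are illustrative; the rendering of the pendulum images is not shown.

import numpy as np

def simulate_pendulum(theta0, omega0, g=9.81, L=1.0, dt=1.0 / 30.0, steps=600):
    """Integrate theta'' = -(g / L) sin(theta) with a semi-implicit Euler step.
    Setting g = 0 gives constant angular velocity (the first synthetic example)."""
    theta, omega = theta0, omega0
    angles = np.empty(steps)
    for i in range(steps):
        omega += -(g / L) * np.sin(theta) * dt
        theta += omega * dt
        angles[i] = theta
    return angles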
Figure 3. (a) Pendulum in zero gravity with a constant angular velocity. The arrows denote the direction of motion. (b) Similarity plot for pendulum. Darker pixels are more similar. (c) Autocorrelation of similarity plot.
In the next example, we use the same configuration, but set g > 0 and the initial angular velocity
to be sufficient so that the pendulum still has a single angular direction. However, in this configuration
the angular velocity is not constant, which is reflected in the qualitatively different similarity plot (Figure 4(b)) and autocorrelation (Figure 4(c)). Note that the peaks in A match the lattice structure in Figure 2(a).
By decreasing the initial angular velocity, the pendulum will oscillate with a changing angular di-
rection, as shown in Figure 5(a). The similarity plot for this system is shown in Figure 5(b), and the
autocorrelation in Figure 5(c). Note that the peaks in A match the lattice structure in Figure 2(b).
Figure 4. (a) Pendulum in gravity with single angular direction. The arrows denote the direction and magnitude of motion; the pendulum travels faster at the bottom of its trajectory than at the top. (b) Similarity plot for pendulum. (c) Autocorrelation of similarity plot. The peaks are denoted by '+' symbols.
Figure 5. (a) Pendulum in gravity with an oscillating angular direction. The arrows denote the direction of motion. (b) Similarity plot for pendulum. (c) Autocorrelation of similarity plot. The peaks are denoted by '+' symbols.
Finally, for the system of two pendulums 180° out of phase shown in Figure 6(a), the similarity plot is shown in Figure 6(b), and the autocorrelation is shown in Figure 6(c). Note that the peaks in A match the lattice structure in Figure 2(b). Also note the lower measures of similarity for some of the diagonal and cross-diagonal lines of $S$, and the corresponding effect on $A$.
Figure 6. (a) Two pendulums 180° out of phase in gravity. The arrows denote the direction of motion. (b) Similarity plot for pendulums. (c) Autocorrelation of similarity plot. The peaks are denoted by '+' symbols.
4.2 The Symmetry of a Walking Person
In this example we first analyze periodic motion with little or no translational motion: a person walking
on a treadmill (Figure 7). This sequence was captured using a static JVC KY-F55B color camera
at 640x480 @ 30fps, deinterlaced, and scaled to 160x120. Since the camera is static and there is no
translational motion, background subtraction was used to segment the motion [6].
The similarity plot $S'$ for this sequence is shown in Figure 8(a). The dark lines correspond to two images in the sequence that are similar. The darkest line is the main diagonal, since $S'(t, t) = 0$. The dark lines parallel to the main diagonal are formed since $S'(t, kp/2 + t) \approx 0$, where $p$ is the period, and $k$ is an integer. The dark lines perpendicular to the main diagonal are formed since $S'(t, kp/2 - t) \approx 0$, which is due to the symmetry of human walking (see Figure 10).
It is interesting to note that at the intersections of these lines, these images are similar to either (a),
(b), or (c) in Figure 7 (see Figure 8(b)). That is, $S'$ encodes the phase of the person walking, not just
the period. This fact is exploited in the example in Section 4.5.
The autocorrelation $A$ of $S'$ is shown in Figure 9(b). The peaks in $A$ form a rotated square lattice (Figure 2(b)), which is used for object classification (Section 4.4). Note that the magnitudes of the peaks in $A$ (Figure 9(b)) have a pattern similar to that of $A$ in Figure 6(c).
Figure 7. Person walking on a treadmill.
Figure 8. (a) Similarity plot for the person walking in Figure 7. (b) Lattice structure for the upper left quadrant of (a). At the intersections of the diagonal and cross-diagonal lines are images similar to (a), (b), (c) in Figure 7. This can be used to determine the phase of the walking person.
Figure 9. (a) Power spectrum of similarity of a walking person. (b) Autocorrelation of the similarity of the walking person in Figure 7 (smoothed with a 5 × 5 filter). The peaks (shown with white '+' symbols) are used to fit the rotated square lattice in Figure 2(b).
Figure 10. Cycle of a person walking (p = 32). Note the similarity of frame t and p/2 − t, and the similarity of frame t and p/2 + t.
We next analyze the motion of a person who is walking at approximately a 25° offset to the camera's image plane, viewed from a static camera (Figure 11(a)). The segmented person is approximately 20 pixels in
height, and is shown in Figure 12(a). The similarity plot (Figure 11(b)) shows dark diagonal lines at a
period of approximately 1 second (32 frames), which correspond to the period of the person's walking.
The lighter diagonal lines shown with a period of approximately 0.5 seconds (16 frames) are explained
by first noting that the person's right arm swing is not fully visible (due to the 25° offset to the image
plane). Therefore, it takes two steps for the body to be maximally self-similar, while the legs become
very self-similar at every step. The effect of this is that the similarity measure $S'$ is the composition of
two periodic signals, with periods differing by a factor of two. This is shown in Figure 12(b), where
the aligned object image is partitioned into three segments (the upper 25%, next 25%, and lower 50% of the body), and $S'$ is computed for each segment. The upper 25%, which includes the head and
shoulders, shows no periodic motion; the next 25%, which includes the one visible arm, has a period
double that of the lower 50% (which includes the legs). Figure 12(c) shows the average power spectrum
for all the columns in $S'$.
Figure 11. (a) First image of a 100-image walking sequence (the subject is walking at approximately a 25° offset from the camera's image plane). (b) Walking sequence similarity plot, which shows the similarity of the object (person) at times t1 and t2. Dark regions show greater degrees of similarity.
Figure 12. (a) Column 1 of Figure 11(b), with the corresponding segmented object for the local minima. (b) Image similarity for the upper 25%, next 25%, and lower 50% of the body. (c) Average power spectrum of all columns of Figure 11(b).
4.3 The Symmetry of a Running Dog
In this example, we look at the periodicity of a running dog from a static camera. Figure 13 shows
a complete cycle of a dog (a Black Labrador). Unlike the symmetry of a walking/running person, a
running dog has a lack of similarity for S # (t, kp - t). This results in the similarity plot (Figure 14(a))
having dark lines parallel to the main diagonal, formed by S # (t, kp + t), but no lines perpendicular to
the main diagonal (as with a walking/running person). The similarity plot has peaks (Figure 14(a))
that correspond to poses of the dog at frame 0 in Figure 13. The autocorrelation A of S # is shown in Figure 14(b); the peaks of A form a square lattice (Figure 2(a)), which is used in Section 4.4 for object classification.
Figure 13. Cycle of a running dog (p = 12). Note the lack of similarity for any two frames t 1 and t 2 with t 1 + t 2 = kp.
4.4 Object Classification Using Periodicity
A common task in an automated surveillance system is to classify moving objects. In this example,
we classify three types of moving objects: people, dogs, and other. We use the lattice fitting method
described in Section 3.4 for the classification, which is motivated by texture classification methods.
Specifically, the square lattice M S (Figure 2(a)) is used to classify running dogs, and the 45-degree square lattice M R (Figure 2(b)) is used to classify walking or running people. Note that M R,d is a subset of M S,d , so if both M S,d and M R,d match, M R is declared the winner. If neither lattice provides a good match to A, then the moving object is classified as other.
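The lattice-fitting procedure of Section 3.4 is not reproduced in this excerpt, so the following is only a rough sketch of one way such a test could be carried out, assuming the peaks of A have already been extracted as (x, y) lag offsets from the center of A; the residual threshold good_fit and the nearest-neighbour spacing estimate are illustrative choices, not the paper's.

```python
import numpy as np

def lattice_residual(peaks, spacing, rotated=False):
    """Mean distance from each autocorrelation peak to the nearest point of a
    square lattice with the given spacing, optionally rotated by 45 degrees."""
    pts = np.asarray(peaks, dtype=float)
    if rotated:
        c = s = np.sqrt(0.5)                      # rotate by 45 degrees so the
        pts = pts @ np.array([[c, -s], [s, c]])   # rotated lattice is axis-aligned
    nearest = np.round(pts / spacing) * spacing
    return float(np.mean(np.linalg.norm(pts - nearest, axis=1)))

def classify_by_lattice(peaks, good_fit=2.0):
    """Classify periodicity by which lattice explains the peaks of A:
    rotated square (walking/running person), square (running dog), else other."""
    peaks = np.asarray(peaks, dtype=float)
    if len(peaks) < 4:
        return "other"
    # Estimate the lattice spacing as the median nearest-neighbour peak distance.
    d = np.linalg.norm(peaks[:, None, :] - peaks[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    spacing = float(np.median(d.min(axis=1)))
    if lattice_residual(peaks, spacing, rotated=True) <= good_fit:
        return "person"            # M_R wins whenever it matches, as in the text
    if lattice_residual(peaks, spacing, rotated=False) <= good_fit:
        return "dog"
    return "other"
```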
The video database used to test the classification consists of video from both airborne surveillance
(people and vehicles), and ground surveillance (people, vehicles, and dogs). The database consists of
vehicle sequences (25 from airborne video); 55 person sequences (50 from airborne video); and 4
Figure 14. (a) Similarity plot of the running dog in Figure 13. Note that there are no dark lines perpendicular to the main diagonal, unlike in Figure 8(a). (b) Autocorrelation of the similarity plot (smoothed with a 5x5 filter). The peaks (shown with white '+' symbols) are used to fit the square lattice in Figure 2(a).
dog sequences (all from ground video).
For the airborne video and dog sequences, the background was not segmented from the foreground
object. For these sequences, the background was sufficiently homogeneous (e.g., dirt roads, parking
lots, grassy fields) for this method to work. For the other sequences (taken with a static camera), the
background was segmented as described in [6].
The airborne video was recorded from a Sony XC-999 camera (640x240 @ 30fps) at an altitude of about 1500 feet. There is significant motion blur due to a slow shutter speed and fast camera motion.
Additional noise is induced by the analog capture of the video from duplicated SVHS tape. Figure 15
shows a person running across a parking lot. The person is approximately 12x7 pixels in size (Figure 16). The similarity plot in Figure 17(a) shows a clearly periodic motion, which corresponds to the
person running. Figure 18 shows that the person is running with a frequency of 1.3Hz; the second peak
at 2.6Hz is due to the symmetry of the human motion described in Section 4.2. The autocorrelation
of S # is shown in Figure 17(b). Figure 19(b) shows the similarity plot for the vehicle in Figure 19(a),
which has no periodicity. The spectral power for the vehicle (Figure 20(b)) is flat. The autocorrelation
of S # has only 2 peaks (Figure 20(a)).
The results of the classifications are shown in Table 1. The thresholds used for the lattice matching are those given in Section 3.4. Each sequence is 100 images (30 fps); a lag time of one second of images is used to compute A.
Table 1. Confusion matrix for person, dog, and other classification.
4.5 Counting People
Another common task in an automated surveillance system is to count the number of people entering and leaving an area. This task is difficult, since when people are close to each other, it is not always simple to distinguish the individuals.
Figure 15. Person running across a parking lot, viewed from a moving camera at an altitude of 1500'.
Figure 16. Zoomed images of the person in Figure 15, which correspond to the poses in Figure 7. The person is 12x7 pixels in size.
Figure 17. (a) Similarity plot of the running person in Figure 15. (b) Autocorrelation of the upper quadrant of S # . The peaks are used to fit the rotated square lattice in Figure 2(b).
Figure 18. Spectral power of the running person in Figure 15.
Figure 19. (a) Vehicle driving across a parking lot. (b) Similarity plot of the vehicle.
Figure 20. (a) Autocorrelation of S # of the vehicle in Figure 19(a) (smoothed with a 5x5 filter); the peaks are denoted by '+' symbols. (b) Spectral power of the vehicle in Figure 19(a).
For example, Figure 21(a) is a frame from an airborne video sequence that shows three people running along a road, and the result of the motion segmentation (Figure 21(b)). Simple motion blob counting will give an inaccurate estimate of the number of people.
However, if we know the approximate location of the airplane (via GPS) and have an approximate site model (a ground plane), we can estimate the expected image size of an "average" person. This size is used to window a region with motion for periodicity detection. In this example, three
non-overlapping windows were found to have periodic motion, each corresponding to a person. The
similarity plots and spectral powers are shown in Figure 22.
The similarity plots in Figure 22 can also be used to extract the phase angle of the running person.
The phase angle is encoded in the position of the cross diagonals of S # . In this example, the phase angles
are all significantly different from one another, giving further evidence that we have not over-counted
the number of people.
Figure 21. (a) Three people running, viewed from a moving camera at an altitude of 1500'. (b) Segmented motion.
Figure 22. Similarity plots and spectral power for the 3 people in Figure 21(a). Note that the frequency resolution is not as high as in Figure 18, since fewer frames are used to estimate the power.
4.6 Simple Event Detection
In this example, we show how periodicity can be used as input for event detection. Figure 23(a)
shows a person walking through a low contrast area (in a shadow) toward the camera; half way through
the 200 image sequence the person stops swinging his arms and puts them into his pockets. This
action is visible in the similarity plots for the upper and lower portions of the body. Specifically, in Figure 23(c), a periodic pattern for the upper part of the body is visible for the images [1,100], but not for [101,200]. This is further shown by the significant peak in the power spectrum for the images [1,100] (Figure 24(a)) and the lack of significant peaks in the power spectrum for the images [101,200] (Figure 24(b)). Thus, while the image of the person is only 37 pixels high in this sequence and we are
not tracking his body parts, we can deduce that he stopped swinging his arms at about frame 100. An
automated surveillance system can use this technique to help decide if someone is carrying an object.
In [13], we combine periodicity and shape analysis to detect if someone is carrying an object.
4.7 Non-Stationary Periodicity
In this example, a person is walking and, roughly halfway through the sequence, starts to run (see Figure 25(a)). The similarity plot (Figure 25(b)) clearly shows this transition. Using a short-time analysis with a Hanning window of length 3300 ms (100 frames), the power is estimated in the
Figure 23. (a) Frame 100 from a low-contrast 200-frame sequence; the subject (marked with a white arrow) puts his hands in his pockets halfway through the sequence. (b) Similarity plot of the lower 40% of the body. (c) Similarity plot of the upper 60% of the body. The periodicity ceases after the middle of the sequence.
Figure 24. (a) Power spectra of the upper left quadrant of Figure 23(c). (b) Power spectra of the lower right quadrant of Figure 23(c).
walking and running stages (Figure 26).
4.8 Estimating Human Stride Using Periodicity
In [26] and [20], human gait was used for person recognition. In this example, we do not analyze
the gait (which is how people walk or run), but rather estimate the stride length of a walking or running
person. The stride itself can be useful for person recognition, particularly during tracking. For example,
stride length can help object (person) correspondence after occlusions. Stride length can also be used as input to a surveillance system to detect auto-theft in a parking area (e.g., a person of a different size and stride length drives off with a car than the person who arrived with it). Assume the area of surveillance has a site model, and the camera is calibrated. The estimated stride length is then v_g * p, where v_g is the ground velocity of the person, and p is the period. For best results, v_g
and p should be filtered to reduce the inherent noise in the tracking and period estimation. For example,
in Figure 25(a), the estimated stride of the person is 22" when walking, and 42" when running, which is within 2" of the person's actual stride.
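As a concrete illustration of this estimate, the sketch below assumes per-frame ground-velocity values (in real-world units per second, from the calibrated site model) and per-frame period estimates (in frames) are available; the median is used as one simple choice for the filtering mentioned above, not necessarily the exact filter used here.

```python
import numpy as np

def estimate_stride(ground_velocity, period_frames, fps=30.0):
    """Stride length estimated as (filtered ground velocity) x (filtered period)."""
    v_g = float(np.median(ground_velocity))       # e.g. inches per second
    p = float(np.median(period_frames)) / fps     # period converted to seconds
    return v_g * p                                # stride in the units of v_g

# Usage: stride = estimate_stride(v_series, p_series)  # same length units as v_g
```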
Figure 25. (a) Person walking, then running. (b) Similarity plot of the walking/running sequence.
Figure 26. Spectral power for the walking/running sequence in Figure 25(a).
5 Real-Time System
A real-time system has been implemented to track and classify objects using periodicity. The system uses a dual-processor 550 MHz Pentium III Xeon-based PC, and runs at 15 Hz with 640x240 grayscale images captured from an airborne video camera. The system uses the real-time stabilization results from [12].
We will briefly discuss how the method can be efficiently implemented to run on a real-time system. In computing S # , for each new frame, only the single column that corresponds to the new frame needs to be recomputed; the remaining entries can be reused (shifted) for the updated S # . Therefore, for each new frame, only O(N) evaluations of S # (i, j) need to be done, where N is the number of rows and columns in S # .
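A minimal sketch of this incremental update, assuming the object images have already been segmented, aligned and scaled to a common size; mean absolute difference is used here only as a stand-in for the similarity measure defined earlier, and N is the number of frames kept.

```python
import numpy as np
from collections import deque

class SimilarityBuffer:
    """Keeps the last N aligned object images and the N x N similarity plot S,
    recomputing only the single new row/column for each incoming frame."""

    def __init__(self, n):
        self.n = n
        self.images = deque(maxlen=n)
        self.S = np.zeros((n, n))

    def update(self, img):
        img = np.asarray(img, dtype=float)
        # Shift existing entries one step toward the top-left (dropping the oldest).
        self.S[:-1, :-1] = self.S[1:, 1:]
        self.images.append(img)
        # O(N) work per frame: fill only the last column (mirrored into the last row).
        for i, past in enumerate(self.images):
            k = self.n - len(self.images) + i        # index of 'past' in S
            d = float(np.mean(np.abs(past - img)))   # stand-in similarity measure
            self.S[k, -1] = self.S[-1, k] = d
        return self.S
```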
For computing A, the 2D FFT can be utilized to greatly decrease the computational cost [18]. Finally, SIMD instructions, such as those available on the Pentium III, can be utilized for computing S # as well as A (either directly or using the FFT).
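For instance, the 2D autocorrelation A can be obtained through the FFT via the Wiener-Khinchin relation; the sketch below (with mean removal and zero padding) is one common way to do this and is not necessarily the exact variant used here.

```python
import numpy as np

def autocorrelation_2d(S):
    """2D autocorrelation of the similarity plot S computed with the FFT."""
    S = np.asarray(S, dtype=float)
    S = S - S.mean()                         # remove the mean before correlating
    h, w = S.shape
    F = np.fft.fft2(S, s=(2 * h, 2 * w))     # zero-pad to avoid wrap-around
    A = np.fft.ifft2(F * np.conj(F)).real    # inverse FFT of the power spectrum
    return np.fft.fftshift(A)                # put the zero-lag peak at the center
```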
6 Conclusions
We have described new techniques to detect and analyze periodic motion as seen from both a static
and moving camera. By tracking objects of interest, we compute an object's self-similarity as it evolves
in time. For periodic motion, the self-similarity measure is also periodic, and we apply Time-Frequency
analysis to detect and characterize the periodic motion. The periodicity is also analyzed robustly using
the 2-D lattice structures inherent in similarity matrices.
Future work includes using alternative independent-motion algorithms for moving-camera video, which could make the analysis more robust to non-homogeneous backgrounds in the case of a moving camera. Further use of the symmetries of motion for the classification of additional types of periodic motion is also being investigated.
Acknowledgments
The airborne video was provided by the DARPA Airborne Video Surveillance project. This paper
was written under the support of contract DAAL-01-97-K-0102 (ARPA Order E653), DAAB07-98-C-J019, and AASERT Grant DAAH-04-96-1-0221.
--R
Image sequence description using spatiotemporal flow curves: Toward Motion-Based Recog- nition
Color indexing.
Time Series: Theory and Methods.
Recurrence plots revisited.
Dynamic system representation
Recurrence plots of dynamical systems.
The pigeon's discrimination of movement patterns (lissajous figures) and contour-dependent rotational invariance
The interpretation of visual motion: recognizing moving light displays.
Backpack: Detection of people carrying objects using silhouettes.
Comparing images using the hausdorff distance.
Visual motion perception.
Visual position stabilization in the hummingbird hawk moth
Fast normalized cross-correlation
Extracting periodicity of a regular texture based on autocorrelation functions.
Recognizing people by their gait: the shape of motion.
Finding periodicity in space and time.
Classical dynamics of particles and systems.
Recurrence matrices and the preservation of dynamical properties.
Rigidity checking of 3D point correspondences under perspective projection.
Analyzing and recognizing walking figures in xyt.
Analyzing gait with spatiotemporal surfaces.
Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques
Detection and recognition of periodic
Numerical Recipes in C.
General filtered image rescaling.
Classifying moving objects as rigid or non-rigid without correspondences
Cyclic motion detection for motion based recognition.
Princeton University Press
--TR
--CTR
J. Janta , P. Kumsawat , K. Attakitmongkol , A. Srikaew, A pedestrian detection system using applied log-Gabor, Proceedings of the 7th WSEAS International Conference on Signal, Speech and Image Processing, p.55-60, September 15-17, 2007, Beijing, China
Paul Viola , Michael J. Jones , Daniel Snow, Detecting Pedestrians Using Patterns of Motion and Appearance, International Journal of Computer Vision, v.63 n.2, p.153-161, July 2005
Computational Model for Periodic Pattern Perception Based on Frieze and Wallpaper Groups, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.3, p.354-371, March 2004
G. Aliaga, Efficient multi-viewpoint acquisition of 3D objects undergoing repetitive motions, Proceedings of the 2007 symposium on Interactive 3D graphics and games, April 30-May 02, 2007, Seattle, Washington
Enrica Dente , Anil Anthony Bharath , Jeffrey Ng , Aldert Vrij , Samantha Mann , Anthony Bull, Tracking hand and finger movements for behaviour analysis, Pattern Recognition Letters, v.27 n.15, p.1797-1808, November, 2006
Tao Zhao , Ram Nevatia, Tracking Multiple Humans in Complex Situations, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.9, p.1208-1221, September 2004
S. Ioffe , D. A. Forsyth, Probabilistic Methods for Finding People, International Journal of Computer Vision, v.43 n.1, p.45-68, June 2001
Yang Ran , Isaac Weiss , Qinfen Zheng , Larry S. Davis, Pedestrian Detection via Periodic Motion Analysis, International Journal of Computer Vision, v.71 n.2, p.143-160, February 2007
D. M. Gavrila , S. Munder, Multi-cue Pedestrian Detection and Tracking from a Moving Vehicle, International Journal of Computer Vision, v.73 n.1, p.41-59, June 2007
Robert Pless, Spatio-temporal background models for outdoor surveillance, EURASIP Journal on Applied Signal Processing, v.2005 n.1, p.2281-2291, 1 January 2005
Guangyu Zhu , Changsheng Xu , Qingming Huang , Wen Gao , Liyuan Xing, Player action recognition in broadcast tennis video with applications to semantic analysis of sports game, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Josh Wills , Sameer Agarwal , Serge Belongie, A Feature-based Approach for Dense Segmentation and Estimation of Large Disparity Motion, International Journal of Computer Vision, v.68 n.2, p.125-143, June 2006
Zhongfei (Mark) Zhang , Stoyan Kurtev, Independent motion detection directly from compressed surveillance video, First ACM SIGMM international workshop on Video surveillance, November 02-08, 2003, Berkeley, California
Gary R. Bradski , James W. Davis, Motion segmentation and pose recognition with motion history gradients, Machine Vision and Applications, v.13 n.3, p.174-184, July 2002
Chiraz BenAbdelkader , Ross G. Cutler , Larry S. Davis, Gait recognition using image self-similarity, EURASIP Journal on Applied Signal Processing, v.2004 n.1, p.572-585, 1 January 2004
Congxia Dai , Yunfei Zheng , Xin Li, Pedestrian detection and tracking in infrared imagery using shape and appearance, Computer Vision and Image Understanding, v.106 n.2-3, p.288-299, May, 2007
Yingen Xiong , Francis Quek , David McNeill, Hand motion gestural oscillations and multimodal discourse, Proceedings of the 5th international conference on Multimodal interfaces, November 05-07, 2003, Vancouver, British Columbia, Canada
ChunMei Lu , Nicola J. Ferrier, Repetitive Motion Analysis: Segmentation and Event Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.2, p.258-263, January 2004
Chung-Lin Huang , Chia-Ying Chung, A real-time model-based human motion tracking and analysis for human computer interface systems, EURASIP Journal on Applied Signal Processing, v.2004 n.1, p.1648-1662, 1 January 2004
Yingen Xiong , Francis Quek, Hand Motion Gesture Frequency Properties and Multimodal Discourse Analysis, International Journal of Computer Vision, v.69 n.3, p.353-371, September 2006
M. Bertozzi , A. Broggi , C. Caraffi , M. Del Rose , M. Felisa , G. Vezzoni, Pedestrian detection by means of far-infrared stereo vision, Computer Vision and Image Understanding, v.106 n.2-3, p.194-204, May, 2007
Berna Erol , Faouzi Kossentini, Retrieval by local motion, EURASIP Journal on Applied Signal Processing, v.2003 n.1, p.41-47, January
David A. Forsyth , Okan Arikan , Leslie Ikemoto , James O'Brien , Deva Ramanan, Computational studies of human motion: part 1, tracking and motion synthesis, Foundations and Trends in Computer Graphics and Vision, v.1 n.2, p.77-254, July 2006 | motion-based recognition;periodic motion;motion segmention;object classification;person detection;motion symmetries |
351607 | A Bayesian Computer Vision System for Modeling Human Interactions. | AbstractWe describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task [1]. The system is particularly concerned with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach [2]. We propose and compare two different state-based learning architectures, namely, HMMs and CHMMs for modeling behaviors and interactions. The CHMM model is shown to work much more efficiently and accurately. Finally, to deal with the problem of limited training data, a synthetic Alife-style training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training. | Introduction
We describe a real-time computer vision and machine
learning system for modeling and recognizing human
behaviors in a visual surveillance task. The system
is particularly concerned with detecting when interactions
between people occur, and classifying the type
of interaction.
Over the last decade there has been growing interest
within the computer vision and machine learning
communities in the problem of analyzing human behavior
in video ([10],[3],[20], [8], [17], [14],[9], [11]).
Such systems typically consist of a low- or mid-level
computer vision system to detect and segment a moving
object - human or car, for example -, and a
higher level interpretation module that classifies the
motion into 'atomic' behaviors such as, for example, a
pointing gesture or a car turning left.
However, there have been relatively few efforts to understand human behaviors that have substantial extent in time, particularly when they involve interactions
between people. This level of interpretation
is the goal of this paper, with the intention of
building systems that can deal with the complexity of
multi-person pedestrian and highway scenes.
This computational task combines elements of
AI/machine learning and computer vision, and
presents challenging problems in both domains: from
a Computer Vision viewpoint, it requires real-time, accurate
and robust detection and tracking of the objects
of interest in an unconstrained environment; from a
Machine Learning and Artificial Intelligence perspective
behavior models for interacting agents are needed
to interpret the set of perceived actions and detect
eventual anomalous behaviors or potentially dangerous
situations. Moreover, all the processing modules
need to be integrated in a consistent way.
Our approach to modeling person-to-person interactions
is to use supervised statistical learning techniques
to teach the system to recognize normal single-person
behaviors and common person-to-person inter-
actions. A major problem with a data-driven statistical
approach, especially when modeling rare or
anomalous behaviors, is the limited number of examples
of those behaviors for training the models. A
major emphasis of our work, therefore, is on efficient
Bayesian integration of both prior knowledge (by the
use of synthetic prior models) with evidence from data
(by situation-specific parameter tuning). Our goal is
to be able to successfully apply the system to any normal
multi-person interaction situation without additional
training.
Another potential problem arises when a completely
new pattern of behavior is presented to the
system. After the system has been trained at a few
different sites, previously unobserved behaviors will be
(by definition) rare and unusual. To account for such
novel behaviors the system should be able to recognize such new behaviors, and to build models of the behavior from as little as a single example.
We have pursued a Bayesian approach to modeling
that includes both prior knowledge and evidence from
data, believing that the Bayesian approach provides
the best framework for coping with small data sets and
novel behaviors. Graphical models [6], such as Hidden
Markov Models (HMMs) [21] and Coupled Hidden
Markov Models (CHMMs) [5, 4], seem most appropriate
for modeling and classifying human behaviors
because they offer dynamic time warping, a well-understood
training algorithm, and a clear Bayesian
semantics for both individual (HMMs) and interacting
or coupled (CHMMs) generative processes.
To specify the priors in our system, we have found it
useful to develop a framework for building and training
models of the behaviors of interest using synthetic
agents. Simulation with the agents yields synthetic
data that is used to train prior models. These prior
models are then used recursively in a Bayesian frame-work
to fit real behavioral data. This approach provides
a rather straightforward and flexible technique
to the design of priors, one that does not require strong
analytical assumptions to be made about the form of
the priors 1 . In our experiments we have found that by
combining such synthetic priors with limited real data
we can easily achieve very high accuracies of recognition
of different human-to-human interactions. Thus,
our system is robust to cases in which there are only a few examples of a certain behavior (such as in the interactions described in Section 5.1) or even no examples except synthetically generated ones.
The paper is structured as follows: section 2
presents an overview of the system, section 3 describes
the computer vision techniques used for segmentation
and tracking of the pedestrians, and the statistical
models used for behavior modeling and recognition are
described in section 4. Section 5 contains experimental
results with both synthetic agent data and real video
data, and section 6 summarizes the main conclusions
and sketches our future directions of research. Finally
a summary of the CHMM formulation is presented in
the appendix.
System Overview
Our system employs a static camera with wide field-of-
view watching a dynamic outdoor scene (the extension
to an active camera [1] is straightforward and planned
for the next version). A real-time computer vision system
segments moving objects from the learned scene.
The scene description method allows variations in
lighting, weather, etc., to be learned and accurately
discounted.
1 Note that our priors have the same form as our posteriors,
namely they are Markov models.
For each moving object an appearance-based description
is generated, allowing it to be tracked though
temporary occlusions and multi-object meetings. An Extended Kalman filter tracks the object's location, coarse shape, color pattern, and velocity. This temporally
ordered stream of data is then used to obtain
a behavioral description of each object, and to detect
interactions between objects.
Figure 1 depicts the processing loop and main functional units of our ultimate system.
1. The real-time computer vision input module detects
and tracks moving objects in the scene, and
for each moving object outputs a feature vector
describing its motion and heading, and its spatial
relationship to all nearby moving objects.
2. These feature vectors constitute the input to
stochastic state-based behavior models. Both
HMMs and CHMMs, with varying structures depending
on the complexity of the behavior, are
then used for classifying the perceived behaviors.
Figure 1: Top-down and bottom-up processing loop.
Note that both top-down and bottom-up streams of
information are continuously managed and combined
for each moving object within the scene. Consequently
our Bayesian approach offers a mathematical frame-work
for both combining the observations (bottom-up)
with complex behavioral priors (top-down) to provide
expectations that will be fed back to the perceptual
system.
3 Segmentation and Tracking
The first step in the system is to reliably and robustly
detect and track the pedestrians in the scene.
We use 2-D blob features for modeling each pedes-
trian. The notion of "blobs" as a representation for
image features has a long history in computer vision
[19, 15, 2, 25, 18], and has had many different mathematical
definitions. In our usage it is a compact set of
pixels that share some visual properties that are not
shared by the surrounding pixels. These properties
could be color, texture, brightness, motion, shading,
a combination of these, or any other salient spatio-temporal
property derived from the signal (the image
sequence).
3.1 Segmentation by Eigenbackground Subtraction
In our system the main cue for clustering the pixels
into blobs is motion, because we have a static background
with moving objects. To detect these moving
objects we build an adaptive eigenspace that models
the background. This eigenspace model describes
the range of appearances (e.g., lighting variations over
the day, weather variations, etc.) that have been ob-
served. The eigenspace can also be generated from
a site model using standard computer graphics techniques.
The eigenspace model is formed by taking a sample of N images and computing both the mean background image \mu_b and its covariance matrix C_b. This covariance matrix can be diagonalized via an eigenvalue decomposition L_b = \Phi_b C_b \Phi_b^T, where \Phi_b is the eigenvector matrix of the covariance of the data and L_b is the corresponding diagonal matrix of its eigenvalues. In order to reduce the dimensionality of the space, in principal component analysis (PCA) only M eigenvectors (eigenbackgrounds) are kept, corresponding to the M largest eigenvalues, to give a \Phi_M matrix. A principal component feature vector \Phi_M^T X_i is then formed, where X_i = I_i - \mu_b is the mean-normalized image vector.
Note that moving objects, because they don't appear
in the same location in the N sample images and
they are typically small, do not have a significant contribution
to this model. Consequently the portions of
an image containing a moving object cannot be well
described by this eigenspace model (except in very unusual
cases), whereas the static portions of the image
can be accurately described as a sum of the various eigenbasis vectors. That is, the eigenspace provides a
robust model of the probability distribution function
of the background, but not for the moving objects.
Figure 2: Background mean image, blob segmentation image, and input image with blob bounding boxes.
Once the eigenbackground images (stored in a matrix called \Phi_M hereafter) are obtained, as well as their mean \mu_b, we can project each input image I_i onto the space spanned by the eigenbackground images, B_i = \Phi_M X_i, to model the static parts of the scene, pertaining to the background. Therefore, by computing and thresholding the Euclidean distance (distance from feature space, DFFS [16]) between the input image and the projected image we can detect the moving objects present in the scene: D_i = |I_i - B_i| > t, where t is a given threshold. Note that it is easy to
adaptively perform the eigenbackground subtraction,
in order to compensate for changes such as big shad-
ows. This motion mask is the input to a connected
component algorithm that produces blob descriptions
that characterize each person's shape. We have also
experimented with modeling the background by using
a mixture of Gaussian distributions at each pixel,
as in Pfinder [26]. However we finally opted for the
eigenbackground method because it offered good results
and less computational load.
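A minimal sketch of this eigenbackground procedure for grayscale frames flattened to vectors; the eigenvectors are obtained here with an SVD of the mean-normalized samples (equivalent to diagonalizing C_b), and the number of kept eigenbackgrounds M and the threshold t are illustrative values rather than those used in the system.

```python
import numpy as np

class EigenBackground:
    """Eigenspace background model: mu_b and Phi_M learned from N sample frames;
    moving pixels are flagged where the distance from feature space exceeds t."""

    def __init__(self, sample_frames, m=10):
        X = np.stack([np.asarray(f, dtype=float).ravel() for f in sample_frames])
        self.mu_b = X.mean(axis=0)
        # Rows of vt are the eigenvectors of the covariance of the samples.
        _, _, vt = np.linalg.svd(X - self.mu_b, full_matrices=False)
        self.phi_m = vt[:m]                                # M x P eigenbackgrounds

    def motion_mask(self, frame, t=30.0):
        x = np.asarray(frame, dtype=float).ravel() - self.mu_b   # mean-normalized image
        b = self.phi_m.T @ (self.phi_m @ x)                      # projection onto the eigenspace
        dffs = np.abs(x - b)                                     # per-pixel distance D_i
        return (dffs > t).reshape(np.asarray(frame).shape)       # input to connected components
```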
3.2 Tracking
The trajectories of each blob are computed and saved
into a dynamic track memory. Each trajectory has an associated first-order Extended Kalman filter that predicts the blob's position and velocity in the next frame. Recall that the Kalman filter is the 'best linear unbiased estimator' in a mean-squared sense and that, for Gaussian processes, the Kalman filter equations correspond to the optimal Bayes estimate.
In order to handle occlusions as well as to solve the
correspondence between blobs over time, the appearance
of each blob is also modeled by a Gaussian PDF
in RGB color space. When a new blob appears in
the scene, a new trajectory is associated to it. Thus
for each blob the Kalman-filter-generated spatial PDF
and the Gaussian color PDF are combined to form a
joint image space and color space PDF. In subsequent
frames the Mahalanobis distance is used to
determine the blob that is most likely to have the same
identity.
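A minimal sketch of this data-association step; it assumes each existing trajectory exposes the Kalman-predicted blob position with its covariance and the blob's Gaussian color model, and the gating threshold is illustrative.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(d @ np.linalg.solve(cov, d))

def match_blob(blob_xy, blob_rgb, trajectories, gate=16.0):
    """Assign a new blob to the trajectory with the smallest combined spatial
    plus color Mahalanobis distance; return None to start a new trajectory."""
    best, best_d = None, np.inf
    for tr in trajectories:
        d = (mahalanobis_sq(blob_xy, tr["pred_xy"], tr["pred_cov"]) +
             mahalanobis_sq(blob_rgb, tr["color_mean"], tr["color_cov"]))
        if d < best_d:
            best, best_d = tr, d
    return best if best_d < gate else None
```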
4 Behavior Models
In this section we develop our framework for building
and applying models of individual behaviors and
person-to-person interactions. In order to build effective
computer models of human behaviors we need to
address the question of how knowledge can be mapped
onto computation to dynamically deliver consistent interpretations
From a strict computational viewpoint there are
two key problems when processing the continuous
flow of feature data coming from a stream of input
video: (1) Managing the computational load imposed
by frame-by-frame examination of all of the agents
and their interactions. For example, the number of
possible interactions between any two agents of a set
of N agents is N (N \Gamma 1)=2. If naively managed this
load can easily become large for even moderate N ; (2)
Even when the frame-by-frame load is small and the
representation of each agent's instantaneous behavior
is compact, there is still the problem of managing all
this information over time.
Statistical directed acyclic graphs (DAGs) or probabilistic
inference networks (PINs) [7, 13] can provide
a computationally efficient solution to these problems.
HMMs and their extensions, such as CHMMs, can be
viewed as a particular, simple case of temporal PIN or
DAG. PINs consist of a set of random variables represented
as nodes as well as directed edges or links between
them. They define a mathematical form of the
joint or conditional PDF between the random vari-
ables. They constitute a simple graphical way of representing
causal dependencies between variables. The
absence of directed links between nodes implies a conditional
independence. Moreover there is a family of
transformations performed on the graphical structure
that has a direct translation in terms of mathematical
operations applied to the underlying PDF. Finally, they are modular, i.e., one can express the joint global PDF as the product of local conditional PDFs.
PINs present several important advantages that are
relevant to our problem: they can handle incomplete
data as well as uncertainty; they are trainable and make it easier to avoid overfitting; they encode causality in a
natural way; there are algorithms for both doing prediction
and probabilistic inference; they offer a frame-work
for combining prior knowledge and data; and
finally they are modular and parallelizable.
In this paper the behaviors we examine are generated
by pedestrians walking in an open outdoor envi-
ronment. Our goal is to develop a generic, compositional
analysis of the observed behaviors in terms of
states and transitions between states over time in such
a manner that (1) the states correspond to our common
sense notions of human behaviors, and (2) they
are immediately applicable to a wide range of sites and
viewing situations. Figure 3 shows a typical image for
our pedestrian scenario.
Figure 3: A typical image of a pedestrian plaza.
Figure 4: Graphical representation of HMM and CHMM rolled-out in time.
4.1 Visual Understanding via Graphical
Models: HMMs and CHMMs
Hidden Markov models (HMMs) are a popular probabilistic
framework for modeling processes that have
structure in time. They have a clear Bayesian seman-
tics, efficient algorithms for state and parameter esti-
mation, and they automatically perform dynamic time
warping. An HMM is essentially a quantization of a
system's configuration space into a small number of
discrete states, together with probabilities for transitions
between states. A single finite discrete variable
indexes the current state of the system. Any information
about the history of the process needed for future
inferences must be reflected in the current value of this
state variable. Graphically HMMs are often depicted
'rolled-out in time' as PINs, such as in figure 4.
However, many interesting systems are composed
of multiple interacting processes, and thus merit a
compositional representation of two or more variables.
This is typically the case for systems that have structure
both in time and space. With a single state vari-
able, Markov models are ill-suited to these problems.
In order to model these interactions a more complex
architecture is needed.
Extensions to the basic Markov model generally increase
the memory of the system (durational model-
ing), providing it with compositional state in time.
We are interested in systems that have compositional
state in space, e.g., more than one simultaneous state
variable. It is well known that the exact solution of
extensions of the basic HMM to 3 or more chains is intractable. In those cases approximation techniques are needed ([22, 12, 23, 24]). However, it is also known that there exists an exact solution for the case of 2 interacting chains, as is our case [22, 4].
We therefore use two Coupled Hidden Markov Models
(CHMMs) for modeling two interacting processes,
in our case they correspond to individual humans. In
this architecture state chains are coupled via matrices
of conditional probabilities modeling causal (tempo-
ral) influences between their hidden state variables.
The graphical representation of CHMMs is shown in
figure 4. From the graph it can be seen that for each chain, the state at time t depends on the state at time t-1 in both chains. The influence of one chain on the other is through a causal link. The appendix contains a summary of the CHMM formulation.
In this paper we compare performance of HMMs
and CHMMs for maximum a posteriori (MAP) state
estimation. We compute the most likely sequence of states \hat{S} within a model given the observation sequence O = \{o_1, \dots, o_n\}. This most likely sequence is obtained by \hat{S} = \arg\max_S P(S|O).
In the case of HMMs the posterior state sequence probability P(S|O) is given by

P(S|O) = \frac{P_{s_1} p_{s_1}(o_1) \prod_{t=2}^{T} P_{s_t|s_{t-1}} p_{s_t}(o_t)}{P(O)}     (1)

where S = \{a_1, \dots, a_N\} is the set of discrete states and s_t \in S corresponds to the state at time t. P_{i|j} \doteq P_{s_t = a_i | s_{t-1} = a_j} is the state-to-state transition probability (i.e. the probability of being in state a_i at time t given that the system was in state a_j at time t-1). In the following we will write them as P_{s_t|s_{t-1}}. The prior probabilities for the initial state are P_i \doteq P_{s_1 = a_i} = P_{s_1}, and p_i(o_t) \doteq p_{s_t = a_i}(o_t) = p_{s_t}(o_t) are the output probabilities for each state (i.e. the probability of observing o_t given state a_i at time t).
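As an illustration of equation (1), the sketch below evaluates its numerator in log space for one candidate state sequence, with Gaussian output densities as one possible choice for p_{s_t}(o_t); prior is the vector of initial-state probabilities and trans the row-stochastic transition matrix. In practice the maximization over sequences is carried out with the Viterbi algorithm rather than by enumerating candidates.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_joint_hmm(states, obs, prior, trans, means, covs):
    """log[ P_{s_1} p_{s_1}(o_1) * prod_{t>=2} P_{s_t|s_{t-1}} p_{s_t}(o_t) ],
    i.e. the numerator of equation (1), for one state sequence 'states'."""
    lp = np.log(prior[states[0]])
    lp += multivariate_normal.logpdf(obs[0], means[states[0]], covs[states[0]])
    for t in range(1, len(states)):
        lp += np.log(trans[states[t - 1], states[t]])   # P_{s_t | s_{t-1}}
        lp += multivariate_normal.logpdf(obs[t], means[states[t]], covs[states[t]])
    return lp
```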
In the case of CHMMs we need to introduce another set of probabilities, P_{s_t|s'_{t-1}}, which correspond to the probability of state s_t at time t in one chain given that the other chain (denoted hereafter by the prime superscript) was in state s'_{t-1} at time t-1. These new probabilities express the causal influence (coupling) of one chain on the other. The posterior state probability for CHMMs is given by

P(S|O) = \frac{P_{s_1} p_{s_1}(o_1) P_{s'_1} p_{s'_1}(o'_1)}{P(O)} \times \prod_{t=2}^{T} P_{s_t|s_{t-1}} P_{s'_t|s'_{t-1}} P_{s'_t|s_{t-1}} P_{s_t|s'_{t-1}} p_{s_t}(o_t) p_{s'_t}(o'_t)     (2)

where s_t, s'_t and o_t, o'_t denote states and observations for each of the Markov chains that compose the CHMMs.
We direct the reader to [4] for a more detailed description
of the MAP estimation in CHMMs.
Coming back to our problem of modeling human
behaviors, two persons (each modeled as a generative
process) may interact without wholly determining
each others' behavior. Instead, each of them has its
own internal dynamics and is influenced (either weakly
or strongly) by others. The coupling probabilities P_{s_t|s'_{t-1}} and P_{s'_t|s_{t-1}} describe this kind of interaction, and CHMMs are intended to model them in as efficient a manner as is possible.
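Analogously, a minimal sketch of the numerator of equation (2) for a given pair of state sequences s and s'; trans and trans_p are the within-chain transition matrices, couple and couple_p the coupling matrices P_{s_t|s'_{t-1}} and P_{s'_t|s_{t-1}}, and log_out(chain, state, obs) a user-supplied log output density. All of these names are illustrative.

```python
import numpy as np

def log_joint_chmm(s, sp, o, op, prior, prior_p,
                   trans, trans_p, couple, couple_p, log_out):
    """Log of the numerator of equation (2) for the state sequences s and sp."""
    lp = (np.log(prior[s[0]]) + log_out(0, s[0], o[0]) +
          np.log(prior_p[sp[0]]) + log_out(1, sp[0], op[0]))
    for t in range(1, len(s)):
        lp += np.log(trans[s[t - 1], s[t]])       # P_{s_t | s_{t-1}}
        lp += np.log(trans_p[sp[t - 1], sp[t]])   # P_{s'_t | s'_{t-1}}
        lp += np.log(couple[sp[t - 1], s[t]])     # P_{s_t | s'_{t-1}}  (coupling)
        lp += np.log(couple_p[s[t - 1], sp[t]])   # P_{s'_t | s_{t-1}}  (coupling)
        lp += log_out(0, s[t], o[t]) + log_out(1, sp[t], op[t])
    return lp
```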
5 Experimental Results
Our goal is to have a system that will accurately interpret
behaviors and interactions within almost any
pedestrian scene with little or no training. One critical
problem, therefore, is generation of models that
capture our prior knowledge about human behavior.
The selection of priors is one of the most controversial
and open issues in Bayesian inference. To address
this problem we have created a synthetic agents modeling
package which allows us to build flexible prior
behavior models.
5.1 Synthetic Agents Behaviors
We have developed a framework for creating synthetic
agents that mimic human behavior in a virtual en-
vironment. The agents can be assigned different behaviors
and they can interact with each other as well.
Currently they can generate 5 different interacting behaviors
and various kinds of individual behaviors (with
no interaction). The parameters of this virtual environment
are modeled on the basis of a real pedestrian
scene from which we obtained (by hand) measurements
of typical pedestrian movement.
One of the main motivations for constructing such
synthetic agents is the ability to generate synthetic
data which allows us to determine which Markov
model architecture will be best for recognizing a new
behavior (since it is difficult to collect real examples
of rare behaviors). By designing the synthetic agents
models such that they have the best generalization
and invariance properties possible, we can obtain flexible
prior models that are transferable to real human
behaviors with little or no need of additional training.
The use of synthetic agents to generate robust behavior
models from very few real behavior examples is of
special importance in a visual surveillance task, where
typically the behaviors of greatest interest are also the
most rare.
In the experiments reported here, we considered five
different interacting behaviors: (1) Follow, reach and
walk together (inter1), (2) Approach, meet and go on
separately (inter2), (3) Approach, meet and go on together
(inter3), (4) Change direction in order to meet,
approach, meet and continue together (inter4), and (5) Change direction in order to meet, approach, meet and go on separately (inter5).
Note that we assume that these interactions can
happen at any moment in time and at any location,
provided only that the preconditions for the interactions
are satisfied.
For each agent the position, orientation and velocity are measured, and from this data a feature vector is constructed which consists of: \dot{d}_{12}, the derivative of the relative distance between the two agents; \alpha, the degree of alignment of the agents; and |v_1| and |v_2|, the magnitudes of their velocities. Note that such a feature vector is invariant
to the absolute position and direction of the agents
and the particular environment they are in.
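A minimal sketch of computing such a feature vector from the two agents' positions and velocities; the alignment is taken here to be the normalized dot product of the velocity vectors (one reasonable reading of "degree of alignment") and the derivative of the relative distance is a simple finite difference, so both are assumptions rather than exact definitions from the paper.

```python
import numpy as np

def interaction_features(p1, v1, p2, v2, prev_d12, dt):
    """Relative feature vector for a pair of agents:
    [d_dot_12, alignment, |v_1|, |v_2|], plus the current distance d12."""
    d12 = float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))
    d12_dot = (d12 - prev_d12) / dt                 # finite-difference derivative
    speed1 = float(np.linalg.norm(v1))
    speed2 = float(np.linalg.norm(v2))
    denom = speed1 * speed2
    alignment = float(np.dot(v1, v2) / denom) if denom > 0 else 0.0
    return np.array([d12_dot, alignment, speed1, speed2]), d12
```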
Figure 5 illustrates the agents' trajectories and associated feature vector for an example of interaction 2, i.e. an 'approach, meet and continue separately' behavior.
Figure 5: Example trajectories and feature vector for interaction 2, or approach, meet and continue separately behavior.
5.1.1 Comparison of CHMM and HMM
architectures
We built models of the previously described interactions
with both CHMMs and HMMs. We used 2
or 3 states per chain in the case of CHMMs, and 3 to 5 states in the case of HMMs (according to the complexity of the various interactions). Each of these
architectures corresponds to a different physical hy-
pothesis: CHMMs encode a spatial coupling in time
between two agents (e.g., a non-stationary process)
whereas HMMs model the data as an isolated, stationary
process. We used from 11 to 75 sequences
for training each of the models, depending on their
complexity, such that we avoided overfitting. The optimal
number of training examples, of states for each
interaction as well as the optimal model parameters
were obtained by a 10% cross-validation process. In
all cases, the models were set up with a full state-to-
state connection topology, so that the training algorithm
was responsible for determining an appropriate
state structure for the training data. The feature vector
was 6-dimensional in the case of HMMs, whereas
in the case of CHMMs each agent was modeled by
a different chain, each of them with a 3-dimensional
feature vector.
To compare the performance of the two previously
described architectures we used the best trained models
to classify 20 unseen new sequences. In order
to find the most likely model, the Viterbi algorithm
was used for HMMs and the N-heads dynamic programming
forward-backward propagation algorithm
for CHMMs.
Table 1 illustrates the accuracy for each of the
two different architectures and interactions. Note the
superiority of CHMMs versus HMMs for classifying
the different interactions and, more significantly, identifying
the case in which there are no interactions
present in the testing data.
Table 1: Accuracy for HMMs and CHMMs on synthetic data. Accuracy at recognizing when no interaction occurs ('No inter'), and accuracy at classifying each type of interaction: 'Inter1' is follow, reach and walk together; 'Inter2' is approach, meet and go on; 'Inter3' is approach, meet and continue together; 'Inter4' is change direction to meet, approach, meet and go together; and 'Inter5' is change direction to meet, approach, meet and go on separately.
Complexity in time and space is an important issue
when modeling dynamic time series. The number
of degrees of freedom (state-to-state probabili-
ties+output means+output covariances) in the largest
best-scoring model was 85 for HMMs and 54 for
CHMMs. We also performed an analysis of the accuracies
of the models and architectures with respect
to the number of sequences used for training. Figure 6 illustrates the accuracies in the case of interaction 4 (change direction for meeting, stop and continue together). Efficiency in terms of training data is especially important in the case of on-line real-time learning systems (such as ours would ultimately be) and/or in domains in which collecting clean labeled
data may be difficult.
Figure 6: First figure: Accuracies of CHMMs (solid line) and HMMs (dotted line) for one particular interaction. The dashed line is the accuracy on testing without considering the case of no interaction, while the dark line includes this case. Second figure: ROC curve on synthetic data.
The cross-product HMMs that result from incorporating
both generative processes into the same joint-
product state space usually requires many more sequences
for training because of the larger number of
parameters. In our case, this appears to result in an accuracy ceiling of around 80% for any amount of training that was evaluated, whereas for CHMMs we were
able to reach approximately 100% accuracy with only
a small amount of training. From this result it seems
that the CHMMs architecture, with two coupled generative
processes, is more suited to the problem of the
behavior of interacting agents than a generative process
encoded by a single HMM.
In a visual surveillance system the false alarm rate
is often as important as the classification accuracy.
In an ideal automatic surveillance system, all the targeted
behaviors should be detected with a close-to-
zero false alarm rate, so that we can reasonably alert a
human operator to examine them further. To analyze
this aspect of our system's performance, we calculated the system's ROC curve. Figure 6 (second figure) shows that it is quite possible to achieve very low false alarm rates while still maintaining good classification accuracy.
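A minimal sketch of how such a curve can be generated, assuming each test sequence has been reduced to a scalar "interaction" score (e.g., the maximum normalized model log-likelihood) and a binary ground-truth label; the threshold sweep below is generic rather than the exact procedure used here.

```python
import numpy as np

def roc_curve(scores, labels):
    """(false alarm rate, detection rate) pairs as the decision threshold on
    the interaction score is swept over all observed score values."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    n_pos = max(int(labels.sum()), 1)
    n_neg = max(int((~labels).sum()), 1)
    points = []
    for thr in np.sort(scores)[::-1]:
        detected = scores >= thr
        points.append(((detected & ~labels).sum() / n_neg,   # false alarm rate
                       (detected & labels).sum() / n_pos))   # detection rate
    return np.array(points)
```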
5.2 Pedestrian Behaviors
Our goal is to develop a framework for detecting, classifying
and learning generic models of behavior in a
visual surveillance situation. It is important that the
models be generic, applicable to many different situa-
tions, rather than being tuned to the particular viewing
or site. This was one of our main motivations
for developing a virtual agent environment for modeling
behaviors. If the synthetic agents are 'similar'
enough in their behavior to humans, then the same
models that were trained with synthetic data should
be directly applicable to human data. This section
describes the experiments we have performed analyzing
real pedestrian data using both synthetic and site-specific
models (models trained on data from the site
being monitored).
5.2.1 Data collection and preprocessing
Using the person detection and tracking system described
in section 3 we obtained 2D blob features for
each person in several hours of video. Up to 20 examples
of following and various types of meeting behaviors
were detected and processed.
The feature vector \bar{x} coming from the computer vision processing module consisted of the 2D (x, y) centroid (mean position) of each person's blob, the Kalman Filter state for each instant of time, consisting of (\hat{x}, \hat{y}), where \hat{} represents the filter estimation, and the (r, g, b) components of the mean of the Gaussian fitted to each blob in color space. The frame rate of the vision system was about 20-30 Hz on an SGI R10000 O2 computer. We low-pass filtered the data with a 3 Hz cutoff filter and computed for every pair of nearby persons a feature vector consisting of: \dot{d}_{12}, the derivative of the relative distance between the two persons; |v_i|, i = 1, 2, the norm of the velocity vector for each person; and \alpha, the degree of alignment of the trajectories of the two persons. Typical trajectories
separately' behavior (interaction 2) are shown in
figure 7. This is the same type of behavior as the one
displayed in figure 5 for the synthetic agents. Note the
similarity of the feature vectors in both cases.
5.2.2 Behavior Models and Results
CHMMs were used for modeling three different be-
haviors: meet and continue together (interaction 3);
meet and split (interaction 2) and follow (interaction
1). In addition, an interaction versus no interaction
detection test was also performed. HMMs performed
Figure 7: Example trajectories and feature vector for interaction 2, or approach, meet and continue separately behavior.
much worse than CHMMs and therefore we omit reporting
their results.
We used models trained with two types of data:
1. Prior-only (synthetic data) models: that is, the
behavior models learned in our synthetic agent
environment and then directly applied to the
real data with no additional training or tuning
of the parameters.
2. Posterior (synthetic-plus-real data) models: new
behavior models trained by using as starting
points the best synthetic models. We used 8 examples of each interaction from the specific site.
Recognition accuracies for both these 'prior' and 'posterior' CHMMs are summarized in Table 2. It is noteworthy that with only 8 training examples, the recognition accuracy on the real data could be raised to 100%. This result demonstrates the ability to accomplish extremely rapid refinement of our behavior models from the initial prior models.
Finally the ROC curve for the posterior CHMMs is
displayed in figure 8.
One of the most interesting results from these experiments
is the high accuracy obtained when testing
the a priori models obtained from synthetic agent
simulations. The fact that a priori models transfer
so well to real data demonstrates the robustness of
the approach. It shows that with our synthetic agent
training system, we can develop models of many different
types of behavior - avoiding thus the problem
of limited amount of training data - and apply these
models to real human behaviors without additional
parameter tuning or training.
Parameter sensitivity. In order to evaluate the sensitivity of our classification accuracy to variations
in the model parameters, we trained a set of models
where we changed different parameters of the agents'
Testing on real pedestrian data
            Prior CHMMs   Posterior CHMMs
No-inter    90.9          100
Table 2: Accuracy for both untuned, a priori models and site-specific CHMMs tested on real pedestrian data. The first entry in each row is the interaction vs. no-interaction accuracy, the remaining entries are classification accuracies between the different interacting behaviors. Interactions are: 'Inter1' follow, reach and walk together; 'Inter2' approach, meet and go on; 'Inter3' approach, meet and continue together.
Figure 8: ROC curve for real pedestrian data.
dynamics by factors of 2.5 and 5. The performance of these altered models turned out to be virtually the
same in every case except for the 'inter1' (follow) inter-
action, which seems to be sensitive to people's relative
rates of movement.
6 Summary and Conclusions
In this paper we have described a computer vision system
and a mathematical modeling framework for recognizing
different human behaviors and interactions in
a visual surveillance task. Our system combines top-down
with bottom-up information in a closed feedback
loop, with both components employing a statistical
Bayesian approach.
Two different state-based statistical learning archi-
tectures, namely HMMs and CHMMs, have been proposed
and compared for modeling behaviors and in-
teractions. The superiority of the CHMM formulation
has been demonstrated in terms of both training efficiency
and classification accuracy. A synthetic agent
training system has been created in order to develop
flexible and interpretable prior behavior models, and
we have demonstrated the ability to use these a priori
models to accurately classify real behaviors with no additional tuning or training. This fact is especially important, given the limited amount of training data available.
Acknowledgments
We would like to sincerely thank Michael Jordan, Tony
Jebara and Matthew Brand for their inestimable help
and insightful comments.
Appendix A: Forward (α) and Backward (β) expressions for CHMMs
In [4] a deterministic approximation for maximum a posteriori (MAP) state estimation is introduced. It enables
fast classification and parameter estimation via
expectation maximization, and also obtains an upper
bound on the cross entropy with the full (combina-
toric) posterior which can be minimized using a sub-space
that is linear in the number of state variables.
An "N-heads" dynamic programming algorithm samples
from the O(N ) highest probability paths through
a compacted state trellis, with complexity O(T(CN)^2) for C chains of N states apiece observing T data
points. For interesting cases with limited couplings
the complexity falls further to O(TCN 2 ).
For HMMs the forward-backward or Baum-Welch algorithm provides expressions for the α and β variables, whose product leads to the likelihood of a sequence at each instant of time. In the case of CHMMs two state paths have to be followed over time for each chain: one path corresponds to the 'head' (represented with subscript 'h') and another corresponds to the 'sidekick' (indicated with subscript 'k') of this head. Therefore, in the new forward-backward algorithm the expressions for computing the α and β variables will incorporate the probabilities of the head and sidekick for each chain (the second chain is indicated with a prime).
As an illustration of the effect of maintaining multiple paths per chain, the traditional expression for the α variable in a single HMM,

α_{t+1}(i) = p_i(o_{t+1}) Σ_j α_t(j) P_{i|j},

will be transformed into a pair of equations, one for the full posterior α and another for the marginalized posterior α.
The β variable can be computed in a similar way by tracing back through the paths selected by the forward analysis. After collecting statistics using N-heads dynamic programming, transition matrices within chains are re-estimated according to the conventional HMM
expression. The coupling matrices are given by:
--R
Active perception vs. passive per- ception
The representation space paradigm of concurrent evolving object de- scriptions
Computers seeing action.
Coupled hidden markov models for modeling interacting processes.
Alex Pentland
Operations for learning with graphical models.
A guide to the literature on learning probabilistic networks from data.
Advanced visual surveillance using bayesian networks.
What is going on?
Active gesture recognition using partially observable markov decision processes.
Building qualitative event models automatically from visual input.
Factorial hidden Markov models.
A tutorial on learning with bayesian networks.
Automatic symbolic traffic scene analysis using belief networks.
An unsupervised clustering approach to spatial preprocessing of mss imagery.
Probabilistic visual learning for object detection.
From image sequences towards conceptual descriptions.
Lafter: Lips and face tracking.
Classification by clustering.
Modeling and prediction of human behavior.
A tutorial on hidden markov models and selected applications in speech recognition.
Boltzmann chains and hidden Markov models.
Probabilistic independence networks for hidden Markov probability models.
Mean field networks that learn to discriminate temporally distorted strings.
--TR
--CTR
Koichi Sato , Brian L. Evans , J. K. Aggarwal, Designing an Embedded Video Processing Camera Using a 16-bit Microprocessor for a Surveillance System, Journal of VLSI Signal Processing Systems, v.42 n.1, p.57-68, January 2006
Lang Congyan , Xu De, An event detection framework in video sequences based on hierarchic event structure perception, Proceedings of the 5th WSEAS International Conference on Signal Processing, Robotics and Automation, p.307-312, February 15-17, 2006, Madrid, Spain
A. Pentland, Learning Communities Understanding Information Flow in Human Networks, BT Technology Journal, v.22 n.4, p.62-70, October 2004
Michael Cheng , Binh Pham , Dian Tjondronegoro, Tracking and video surveillance activity analysis, Proceedings of the 4th international conference on Computer graphics and interactive techniques in Australasia and Southeast Asia, November 29-December 02, 2006, Kuala Lumpur, Malaysia
Iain McCowan , Daniel Gatica-Perez , Samy Bengio , Guillaume Lathoud , Mark Barnard , Dong Zhang, Automatic Analysis of Multimodal Group Actions in Meetings, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.3, p.305-317, March 2005
Alberto Avanzi , François Brémond , Christophe Tornieri , Monique Thonnat, Design and assessment of an intelligent activity monitoring platform, EURASIP Journal on Applied Signal Processing, v.2005 n.1, p.2359-2374, 1 January 2005
Longin Jan Latecki , Roland Miezianko , Dragoljub Pokrajac, Reliability of motion features in surveillance videos, Integrated Computer-Aided Engineering, v.12 n.3, p.279-290, July 2005
Lucjan Pelc , Bogdan Kwolek, Recognition of action meeting videos using timed automata, Machine Graphics & Vision International Journal, v.15 n.3, p.577-584, January 2006
Sangho Park , J. K. Aggarwal, Recognition of two-person interactions using a hierarchical Bayesian network, First ACM SIGMM international workshop on Video surveillance, November 02-08, 2003, Berkeley, California
Sangho Park , Mohan M. Trivedi, Analysis and query of person-vehicle interactions in homography domain, Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks, October 27-27, 2006, Santa Barbara, California, USA
Maja Matetić , Slobodan Ribarić , Ivo Ipšić, Qualitative Modelling and Analysis of Animal Behaviour, Applied Intelligence, v.21 n.1, p.25-44, July-August 2004
Antoine Manzanera , Julien C. Richefeu, A new motion detection algorithm based on Σ-Δ background estimation, Pattern Recognition Letters, v.28 n.3, p.320-328, February, 2007
Rita Cucchiara , Costantino Grana , Andrea Prati , Roberto Vezzani, Computer vision techniques for PDA accessibility of in-house video surveillance, First ACM SIGMM international workshop on Video surveillance, November 02-08, 2003, Berkeley, California
Donatello Conte , Pasquale Foggia , Jean-Michel Jolion , Mario Vento, A graph-based, multi-resolution algorithm for tracking objects in presence of occlusions, Pattern Recognition, v.39 n.4, p.562-572, April, 2006
Somboon Hongeng , Ram Nevatia , Francois Bremond, Video-based event recognition: activity representation and probabilistic recognition methods, Computer Vision and Image Understanding, v.96 n.2, p.129-162, November 2004
Amit Sethi , Mandar Rahurkar , Thomas S. Huang, Event detection using "variable module graphs" for home care applications, EURASIP Journal on Applied Signal Processing, v.2007 n.1, p.111-111, 1 January 2007
Alex Pentland , Tanzeem Choudhury , Nathan Eagle , Push Singh, Human dynamics: computation for organizations, Pattern Recognition Letters, v.26 n.4, p.503-511, March 2005
Albert Ali Salah , Ethem Alpaydin , Lale Akarun, A Selective Attention-Based Method for Visual Pattern Recognition with Application to Handwritten Digit Recognition and Face Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.3, p.420-425, March 2002
Dong Zhang , Daniel Gatica-Perez , Samy Bengio , Iain McCowan , Guillaume Lathoud, Multimodal group action clustering in meetings, Proceedings of the ACM 2nd international workshop on Video surveillance & sensor networks, October 15-15, 2004, New York, NY, USA
Maria Cecilla Mazzaro , Mario Sznaier , Octavia Camps, A Model (In)Validation Approach to Gait Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.11, p.1820-1825, November 2005
Daniel DeMenthon , David Doermann, Video retrieval using spatio-temporal descriptors, Proceedings of the eleventh ACM international conference on Multimedia, November 02-08, 2003, Berkeley, CA, USA
Ying Luo , Tzong-Der Wu , Jenq-Neng Hwang, Object-based analysis and interpretation of human motion in sports video sequences by dynamic Bayesian networks, Computer Vision and Image Understanding, v.92 n.2-3, p.196-216, November/December
Rokia Missaoui , Roman M. Palenichka, Effective image and video mining: an overview of model-based approaches, Proceedings of the 6th international workshop on Multimedia data mining: mining integrated media and complex data, p.43-52, August 21-21, 2005, Chicago, Illinois
Ruth Aguilar-Ponce , Ashok Kumar , J. Luis Tecpanecatl-Xihuitl , Magdy Bayoumi, A network of sensor-based framework for automated visual surveillance, Journal of Network and Computer Applications, v.30 n.3, p.1244-1271, August, 2007
Yaser Sheikh , Mubarak Shah, Bayesian Modeling of Dynamic Scenes for Object Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.11, p.1778-1792, November 2005
Youfu Wu , Jun Shen , Mo Dai, Traffic object detections and its action analysis, Pattern Recognition Letters, v.26 n.13, p.1963-1984, 1 October 2005
Datong Chen , Jie Yang , Howard D. Wactlar, Towards automatic analysis of social interaction patterns in a nursing home environment from video, Proceedings of the 6th ACM SIGMM international workshop on Multimedia information retrieval, October 15-16, 2004, New York, NY, USA
Cen Rao , Mubarak Shah , Tanveer Syeda-Mahmood, Invariance in motion analysis of videos, Proceedings of the eleventh ACM international conference on Multimedia, November 02-08, 2003, Berkeley, CA, USA
Tao Xiang , Shaogang Gong, Model Selection for Unsupervised Learning of Visual Context, International Journal of Computer Vision, v.69 n.2, p.181-201, August 2006
Tao Xiang , Shaogang Gong, Beyond Tracking: Modelling Activity and Understanding Behaviour, International Journal of Computer Vision, v.67 n.1, p.21-51, April 2006
Ulf Ekblad , Jason M. Kinser, Theoretical foundation of the intersecting cortical model and its use for change detection of aircraft, cars, and nuclear explosion tests, Signal Processing, v.84 n.7, p.1131-1146, July 2004
Rómer Rosales , Stan Sclaroff, A framework for heading-guided recognition of human activity, Computer Vision and Image Understanding, v.91 n.3, p.335-367, September
Gianluca Antonini , Santiago Venegas Martinez , Michel Bierlaire , Jean Philippe Thiran, Behavioral Priors for Detection and Tracking of Pedestrians in Video Sequences, International Journal of Computer Vision, v.69 n.2, p.159-180, August 2006
Datong Chen , Jie Yang , Robert Malkin , Howard D. Wactlar, Detecting social interactions of the elderly in a nursing home environment, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), v.3 n.1, p.6-es, February 2007
Sudeep Sarkar , Daniel Majchrzak , Kishore Korimilli, Perceptual organization based computational model for robust segmentation of moving objects, Computer Vision and Image Understanding, v.86 n.3, p.141-170, June 2002
Alper Yilmaz , Omar Javed , Mubarak Shah, Object tracking: A survey, ACM Computing Surveys (CSUR), v.38 n.4, p.13-es, 2006
Thomas B. Moeslund , Adrian Hilton , Volker Krüger, A survey of advances in vision-based human motion capture and analysis, Computer Vision and Image Understanding, v.104 n.2, p.90-126, November 2006 | Hidden Markov Models;tracking;visual surveillance;human behavior recognition;people detection
351611 | Normalized Cuts and Image Segmentation. | AbstractWe propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We have applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging. | Introduction
Nearly 75 years ago, Wertheimer launched the Gestalt approach which laid out the
importance of perceptual grouping and organization in visual perception. For our purposes,
the problem of grouping can be well motivated by considering the set of points shown in the
figure (1).
Figure
1: How many groups?
Typically a human observer will perceive four objects in the image-a circular ring with
a cloud of points inside it, and two loosely connected clumps of points on its right. However
this is not the unique partitioning of the scene. One can argue that there are three objects-
the two clumps on the right constitute one dumbbell shaped object. Or there are only two
objects, a dumb bell shaped object on the right, and a circular galaxy like structure on the
left. If one were perverse, one could argue that in fact every point was a distinct object.
This may seem to be an artificial example, but every attempt at image segmentation
ultimately has to confront a similar question-there are many possible partitions of the domain
D of an image into subsets D i (including the extreme one of every pixel being a separate
entity). How do we pick the "right" one? We believe the Bayesian view is appropriate- one
wants to find the most probable interpretation in the context of prior world knowledge. The
difficulty, of course, is in specifying the prior world knowledge-some of it is low level such
as coherence of brightness, color, texture, or motion, but equally important is mid- or high-level
knowledge about symmetries of objects or object models.
This suggests to us that image segmentation based on low level cues can not and should
not aim to produce a complete final "correct" segmentation. The objective should instead be
to use the low-level coherence of brightness, color, texture or motion attributes to sequentially
come up with hierarchical partitions. Mid and high level knowledge can be used to either
confirm these groups or select some for further attention. This attention could result in
further repartitioning or grouping. The key point is that image partitioning is to be done
from the big picture downwards, rather like a painter first marking out the major areas and
then filling in the details.
Prior literature on the related problems of clustering, grouping and image segmentation
is huge. The clustering community[12] has offered us agglomerative and divisive algorithms;
in image segmentation we have region-based merge and split algorithms. The hierarchical
divisive approach that we advocate produces a tree, the dendrogram. While most of these
ideas go back to the 70s (and earlier), the 1980s brought in the use of Markov Random
Fields[10] and variational formulations[17, 2, 14]. The MRF and variational formulations also
exposed two basic questions (1) What is the criterion that one wants to optimize? and (2) Is
there an efficient algorithm for carrying out the optimization? Many an attractive criterion
has been doomed by the inability to find an effective algorithm to find its minimum-greedy
or gradient descent type approaches fail to find global optima for these high dimensional,
nonlinear problems.
Our approach is most related to the graph theoretic formulation of grouping. The set
of points in an arbitrary feature space are represented as a weighted undirected graph G = (V, E), where the nodes of the graph are the points in the feature space, and an edge is
formed between every pair of nodes. The weight on each edge, w(i; j), is a function of the
similarity between nodes i and j.
In grouping, we seek to partition the set of vertices into disjoint sets V_1, V_2, ..., V_m, where by some measure the similarity among the vertices in a set V_i is high and across different sets V_i, V_j is low.
To partition a graph, we need to also ask the following questions:
1. What is the precise criterion for a good partition?
2. How can such a partition be computed efficiently?
In the image segmentation and data clustering community, there has been much previous
work using variations of the minimal spanning tree or limited neighborhood set approaches.
Although those use efficient computational methods, the segmentation criteria used in most
of them are based on local properties of the graph. Because perceptual grouping is about
extracting the global impressions of a scene, as we saw earlier, this partitioning criterion
often falls short of this main goal.
In this paper we propose a new graph-theoretic criterion for measuring the goodness of
an image partition- the normalized cut. We introduce and justify this criterion in section 2.
The minimization of this criterion can be formulated as a generalized eigenvalue problem;
the eigenvectors of this problem can be used to construct good partitions of the image and
the process can be continued recursively as desired (section 2.1). Section 3 gives a detailed
explanation of the steps of our grouping algorithm. In section 4 we show experimental results.
The formulation and minimization of the normalized cut criterion draws on a body of results
from the field of spectral graph theory(section 5). Relationship to work in computer vision is
discussed in section 6, and comparison with related eigenvector based segmentation methods
is represented in section 6.1. We conclude in section 7.
Grouping as graph partitioning
A graph G = (V, E) can be partitioned into two disjoint sets, A, B, with A ∪ B = V and A ∩ B = ∅, by simply removing edges connecting the two parts. The degree of dissimilarity between these two pieces can be computed as the total weight of the edges that have been removed. In graph theoretic language, it is called the cut:

cut(A, B) = Σ_{u∈A, v∈B} w(u, v).   (1)
The optimal bi-partitioning of a graph is the one that minimizes this cut value. Although
there are exponential number of such partitions, finding the minimum cut of a graph is a
well studied problem, and there exist efficient algorithms for solving it.
Wu and Leahy[25] proposed a clustering method based on this minimum cut criterion.
In particular, they seek to partition a graph into k-subgraphs, such that the maximum cut
across the subgroups is minimized. This problem can be efficiently solved by recursively
finding the minimum cuts that bisect the existing segments. As shown in Wu & Leahy's
work, this globally optimal criterion can be used to produce good segmentation on some of
the images.
However, as Wu and Leahy also noticed in their work, the minimum cut criteria favors
cutting small sets of isolated nodes in the graph. This is not surprising since the cut defined
in (1) increases with the number of edges going across the two partitioned parts. Figure
(2) illustrates one such case. Assuming the edge weights are inversely proportional to the
Figure
2: A case where minimum cut gives a bad partition.
distance between the two nodes, we see the cut that partitions out node n 1 or n 2 will have a
very small value. In fact, any cut that partitions out individual nodes on the right half will
have smaller cut value than the cut that partitions the nodes into the left and right halves.
To avoid this unnatural bias for partitioning out small sets of points, we propose a new
measure of disassociation between two groups. Instead of looking at the value of total edge
weight connecting the two partitions, our measure computes the cut cost as a fraction of the
total edge connections to all the nodes in the graph. We call this disassociation measure the
normalized cut (Ncut):

Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V),   (2)

where assoc(A, V) = Σ_{u∈A, t∈V} w(u, t) is the total connection from nodes in A to all nodes in the graph, and assoc(B, V) is similarly defined. With this definition of the disassociation
between the groups, the cut that partitions out small isolated points will no longer have
small Ncut value, since the cut value will almost certainly be a large percentage of the total
connection from that small set to all other nodes. In the case illustrated in figure 2, we see
that the cut 1 value across node n 1 will be 100% of the total connection from that node.
In the same spirit, we can define a measure for total normalized association within groups for a given partition:

Nassoc(A, B) = assoc(A, A)/assoc(A, V) + assoc(B, B)/assoc(B, V),

where assoc(A, A) and assoc(B, B) are total weights of edges connecting nodes within A
average nodes within the group are connected to each other.
Another important property of this definition of association and disassociation of a partition is that they are naturally related:

Ncut(A, B) = 2 − Nassoc(A, B).
Hence the two partition criteria that we seek in our grouping algorithm, minimizing the
disassociation between the groups and maximizing the association within the group, are
in fact identical, and can be satisfied simultaneously. In our algorithm, we will use this
normalized cut as the partition criterion.
Unfortunately minimizing normalized cut exactly is NP-complete, even for the special
case of graphs on grids. The proof, due to C. Papadimitriou, can be found in appendix A.
However, we will show that when we embed the normalized cut problem in the real value
domain, an approximate solution can be found efficiently.
2.1 Computing the optimal partition
Given a partition of nodes of a graph V into two sets A and B, let x be an N = |V| dimensional indicator vector, with x_i = 1 if node i is in A and −1 otherwise. Let d(i) = Σ_j w(i, j) be the total connection from node i to all other nodes. With the definitions x and d we can rewrite Ncut(A, B) as:

Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(B, A)/assoc(B, V)
           = (Σ_{x_i>0, x_j<0} −w_{ij} x_i x_j) / (Σ_{x_i>0} d_i) + (Σ_{x_i<0, x_j>0} −w_{ij} x_i x_j) / (Σ_{x_i<0} d_i).

Let D be an N × N diagonal matrix with d on its diagonal, W be an N × N symmetrical matrix with W(i, j) = w_{ij}, k = Σ_{x_i>0} d_i / Σ_i d_i, and 1 be an N × 1 vector of all ones. Using the fact that (1 + x)/2 and (1 − x)/2 are indicator vectors for x_i > 0 and x_i < 0 respectively, we can rewrite 4[Ncut(A, B)] as:

4 Ncut(A, B) = ((1 + x)^T (D − W)(1 + x)) / (k 1^T D 1) + ((1 − x)^T (D − W)(1 − x)) / ((1 − k) 1^T D 1).

Letting b = k/(1 − k) and y = (1 + x) − b(1 − x), and expanding the expression above (the constant term that appears equals 0 and can be dropped), it is easy to see that y^T D 1 = 0 and y^T D y = b 1^T D 1. Putting everything together we have

min_x Ncut(x) = min_y (y^T (D − W) y) / (y^T D y),   (5)

with the condition y(i) ∈ {1, −b} and y^T D 1 = 0.
Note that the above expression is the Rayleigh quotient[11]. If y is relaxed to take on
real values, we can minimize equation (5) by solving the generalized eigenvalue system,
(D − W) y = λ D y.   (6)
However, we have two constraints on y, which come from the condition on the corresponding indicator vector x. First consider the constraint y^T D 1 = 0. We can show this constraint on y is automatically satisfied by the solution of the generalized eigensystem. We will do so by first transforming equation (6) into a standard eigensystem, and showing that the corresponding condition is satisfied there. Rewrite equation (6) as

D^{-1/2}(D − W)D^{-1/2} z = λ z,   (7)

where z = D^{1/2} y. One can easily verify that z_0 = D^{1/2} 1 is an eigenvector of equation (7) with eigenvalue of 0. Furthermore, D^{-1/2}(D − W)D^{-1/2} is symmetric positive semidefinite, since (D − W), also called the Laplacian matrix, is known to be positive semidefinite[18]. Hence z_0 is in fact the smallest eigenvector of equation (7), and all eigenvectors of equation (7) are perpendicular to each other. In particular, z_1, the second smallest eigenvector, is perpendicular to z_0. Translating this statement back into the general eigensystem (6), we have: (1) y_0 = 1 is the smallest eigenvector with eigenvalue of 0, and (2) 0 = z_1^T z_0 = y_1^T D 1, where y_1 is the second smallest eigenvector of (6).
Now recall a simple fact about the Rayleigh quotient[11]:
Let A be a real symmetric matrix. Under the constraint that x is orthogonal to the j − 1 smallest eigenvectors x_1, ..., x_{j−1}, the quotient x^T A x / x^T x is minimized by the next smallest eigenvector x_j, and its minimum value is the corresponding eigenvalue λ_j.
As a result, we obtain:

z_1 = arg min_{z^T z_0 = 0} (z^T D^{-1/2}(D − W)D^{-1/2} z) / (z^T z),

and consequently,

y_1 = arg min_{y^T D 1 = 0} (y^T (D − W) y) / (y^T D y).
Thus the second smallest eigenvector of the generalized eigensystem (6) is the real valued
solution to our normalized cut problem. The only reason that it is not necessarily the
solution to our original problem is that the second constraint on y that y(i) takes on two
discrete values is not automatically satisfied. In fact relaxing this constraint is what makes
this optimization problem tractable in the first place. We will show in section (3) how this
real valued solution can be transformed into a discrete form.
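These properties are easy to check numerically on a small graph. The sketch below (with an arbitrary toy affinity matrix, chosen here only for illustration) uses a dense generalized symmetric eigensolver and verifies that the smallest eigenvector of (6) is constant with eigenvalue 0 and that the second smallest satisfies y^T D 1 = 0.

    import numpy as np
    from scipy.linalg import eigh

    # A toy symmetric affinity matrix with two loosely coupled pairs of nodes.
    W = np.array([[0., 1., .1, 0.],
                  [1., 0., 0., .1],
                  [.1, 0., 0., 1.],
                  [0., .1, 1., 0.]])
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W, D)                  # solves (D - W) y = lambda D y
    y0, y1 = vecs[:, 0], vecs[:, 1]
    print(np.isclose(vals[0], 0.0))              # smallest eigenvalue is 0
    print(np.allclose(y0 / y0[0], 1.0))          # its eigenvector is constant
    print(np.isclose(y1 @ D @ np.ones(4), 0.0))  # y1^T D 1 = 0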
A similar argument can also be made to show that the eigenvector with the third smallest
eigenvalue is the real valued solution that optimally sub-partitions the first two parts. In fact
this line of argument can be extended to show that one can sub-divide the existing graphs,
each time using the eigenvector with the next smallest eigenvalue. However, in practice
because the approximation error from the real valued solution to the discrete valued solution
accumulates with every eigenvector taken, and all eigenvectors have to satisfy a global mutual
orthogonality constraint, solutions based on higher eigenvectors become unreliable. It is best
to restart solving the partitioning problem on each subgraph individually.
It is interesting to note that, while the second smallest eigenvector y of (6) only approximates
the optimal normalized cut solution, it exactly minimizes the following problem:

inf_{y^T D 1 = 0}  (Σ_i Σ_j (y(i) − y(j))² w_{ij}) / (Σ_i y(i)² d(i)),

in the real-valued domain, where d(i) = Σ_j w(i, j). Roughly speaking, this forces the indicator vector y to take similar values for nodes i and j that are tightly coupled (large w_{ij}).
In summary, we propose using the normalized cut criteria for graph partitioning, and we
have shown how this criteria can be computed efficiently by solving a generalized eigenvalue
problem.
3 The grouping algorithm
Our grouping algorithm consists of the following steps:
1. Given an image or image sequence, set up a weighted graph G = (V, E), and set the weight on the edge connecting two nodes to be a measure of the similarity between
the two nodes.
2. Solve (D − W)x = λDx for eigenvectors with the smallest eigenvalues.
3. Use the eigenvector with second smallest eigenvalue to bipartition the graph.
4. Decide if the current partition should be sub-divided, and recursively repartition the
segmented parts if necessary.
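A minimal sketch of steps 2 and 3 is given below, assuming a sparse symmetric affinity matrix W with no isolated nodes is already available; the function name ncut_bipartition, the use of scipy's eigsh, and the choice of zero as the splitting point are our own choices, not the paper's implementation.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    def ncut_bipartition(W):
        """One bipartition step of normalized cut.

        W : (n, n) symmetric affinity matrix (sparse or dense); every node is
            assumed to have at least one nonzero connection.
        Returns a boolean mask selecting one side of the partition.
        """
        d = np.asarray(W.sum(axis=1)).ravel()
        D = diags(d)
        d_inv_sqrt = diags(1.0 / np.sqrt(d))
        # Symmetric form D^{-1/2} (D - W) D^{-1/2} z = lambda z, with z = D^{1/2} y,
        # so that a standard symmetric eigensolver can be used.
        L_sym = d_inv_sqrt @ (D - W) @ d_inv_sqrt
        vals, vecs = eigsh(L_sym, k=2, which='SM')   # two smallest eigenpairs
        z = vecs[:, np.argsort(vals)[1]]             # second smallest eigenvector
        y = z / np.sqrt(d)                           # map back: y = D^{-1/2} z
        return y > 0                                 # simplest splitting point: zero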
The grouping algorithm as well as its computational complexity can be best illustrated
by using the following two examples.
3.1 Example 1: Point Set Case
Take the example illustrated in figure 3. Suppose we would like to group points on the 2D
plane based purely on their spatial proximity. This can be done through the following steps:
1. Define a weighted graph G = (V, E) by taking each point as a node in the graph, and connecting each pair of nodes by a graph edge. The weight on the graph edge connecting nodes i and j is set to be w(i, j) = e^{−d(i,j)²/σ_X²}, where d(i, j) is the Euclidean distance between the two nodes, and σ_X controls the scale of the spatial proximity measure. σ_X is set to be 2.0, which is 10% of the height of the point set layout. Figure 4 shows the weight matrix for the weighted graph constructed.
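A small sketch of this construction; the squared-exponential form written above and the helper name point_set_weights are assumptions made here for illustration.

    import numpy as np

    def point_set_weights(X, sigma_x=2.0):
        """Affinity matrix for a 2-D point set using spatial proximity only:
        w(i, j) = exp(-d(i, j)^2 / sigma_x^2), with d the Euclidean distance."""
        diff = X[:, None, :] - X[None, :, :]     # pairwise coordinate differences
        d2 = (diff ** 2).sum(axis=-1)            # squared Euclidean distances
        return np.exp(-d2 / sigma_x ** 2)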
Figure
3: A point set in the plane.
Figure
4: The weight matrix constructed for the point set in (3), using spatial proximity as
the similarity measure. The points in figure (3), are numbered as: 1-90, points in the circular
ring in counter-clockwise order, 90-100, points in the cluster inside the ring, 100-120, and
120-140, the upper and lower clusters on the right respectively. Notice that the non-zero
weights are mostly concentrated in a few blocks around the diagonal. The entries in those
square blocks are the connections within each of the clusters.
2. Solve for the eigenvectors with the smallest eigenvalues of the system

(D − W)y = λDy,   (11)

or equivalently the eigenvectors with the largest eigenvalues of the system

Wy = (1 − λ)Dy.   (12)

This can be done by using a generalized eigenvector solver, or by first transforming the generalized eigenvalue system of (11) or (12) into the standard eigenvector problems

D^{-1/2}(D − W)D^{-1/2} z = λ z,  or  D^{-1/2} W D^{-1/2} z = (1 − λ) z,

with z = D^{1/2} y, and solving them with a standard eigenvector solver.
Either way, we will obtain the solution of the eigenvector problem as shown in figure 5.
3. Partition the point set using the eigenvectors computed. As we have shown, the eigenvector
with the second smallest eigenvalue is the continuous approximation to the discrete
bi-partitioning indicator vector that we seek. For the case that we have constructed, the
eigenvector with the second smallest eigenvalue(figure 5.3) is indeed very close to a discrete
one. One can just partition the nodes in the graph based on the sign of their values in the
eigenvector. Using this rule, we can partition the point set into two sets as shown in figure
6.
To recursively subdivide each of the two groups, we can either 1) rerun the above procedure
on each of the individual groups, or 2) using the the eigenvectors with the next smallest
eigenvalues as approximation to the sub-partitioning indicator vectors for each of the groups.
We can see that in this case, the eigenvector with the third and the fourth smallest eigenvalue
are also good partitioning indicator vectors. Using zero as the splitting point, one can partition
the nodes based on their values in the eigenvector into two halves. The sub-partition
of the existing groups based on those two subsequent eigenvectors are shown in figure 7.
In this case, we see that the eigenvectors computed from system (11) are very close to the
discrete solution that we seek. The second smallest eigenvalue is very close to the optimal
Figure
5: Subplot (1) plots the smallest 10 eigenvalues of the generalized eigenvalue system
(11). Subplot (2) - (6) shows the eigenvectors corresponding to the 5 smallest eigenvalues
of the system. Note the eigenvector with the smallest eigenvalue (2) is a constant as shown
in section 2, and eigenvector with the second smallest eigenvalue (3) is an indicator vector: it
takes on only positive values for points in the two clusters to the right. Therefore, using this
eigenvector, we can find the first partition of the point set into two clusters: the points on
the left with the ring and the cluster inside, and points on the right with the two dumb bell
shaped clusters. Furthermore, note that the eigenvectors with the third smallest eigenvalue,
subplot (3) and the fourth smallest eigenvalue, subplot(4), are indicator vectors which can
be used to partition apart the ring set with the cluster inside of it, and the two clusters on
the right.
Figure
Partition of the point set using the eigenvector with the second smallest eigenvalue.
Figure
7: Sub-partitioning of the point sets using the eigenvectors with the third and fourth
smallest eigenvalues.
normalized cut value. For the general case, although we are not necessarily this lucky,
there is a bound on how far the second smallest eigenvalue can deviate from the optimal
normalized cut value, as we shall see in section 5. However, there is little theory on how
close the eigenvector is to the discrete form that the normalized cut algorithm seeks. In our
experience, the eigenvector computed is quite close to the desired discrete solution.
3.2 Example 2: Brightness Images
Having studied an example of point set grouping, we turn our attention to the case of static
image segmentation based on brightness and spatial features. Figure 8 shows an image that
we would like to segment.
Figure
8: A gray level image of a baseball game.
Just as in the point set grouping case, we have the following steps for image segmentation:
1. Construct a weighted graph, G = (V, E), by taking each pixel as a node, and connecting
each pair of pixels by an edge. The weight on that edge should reflect the likelihood
of the two pixels belong to one object. Using just the brightness value of the pixels and their
spatial location, we can define the graph edge weight connecting two nodes i and j as:

w_{ij} = e^{−||F(i)−F(j)||²/σ_I²} · e^{−||X(i)−X(j)||²/σ_X²}.
Figure
9 shows the weight matrix W associated with this weighted graph.
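One way to build such a graph is sketched below; the naive double loop, the radius-r cutoff, and the default parameter values are our choices for illustration, not the paper's implementation.

    import numpy as np
    from scipy.sparse import lil_matrix

    def image_weights(img, r=5, sigma_i=0.1, sigma_x=4.0):
        """Sparse affinity matrix for a gray-level image: pixels closer than r
        are connected with a weight combining brightness and spatial proximity."""
        h, w = img.shape
        n = h * w
        W = lil_matrix((n, n))
        for y in range(h):
            for x in range(w):
                i = y * w + x
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            dist2 = dy * dy + dx * dx
                            if dist2 < r * r:
                                j = yy * w + xx
                                wb = np.exp(-(img[y, x] - img[yy, xx]) ** 2 / sigma_i ** 2)
                                ws = np.exp(-dist2 / sigma_x ** 2)
                                W[i, j] = wb * ws
        return W.tocsr()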
Figure
9: The similarity measure between each pair of pixels in (a) can be summarized in an n × n weight matrix W, shown in (b), where n is the number of pixels in the image. Instead of displaying W itself, which is very large, two particular rows, i_1 and i_2, of W are shown in (c) and (d). Each of the rows is the connection weights from a pixel to all other pixels in the image. The two rows, i_1 and i_2, are reshaped into the size of the image, and displayed.
The brightness value in (c) and (d) reflects the connection weights. Note that W contains
large number of zeros or near zeros, due to the spatial proximity factor.
2. Solve for the eigenvectors with the smallest eigenvalues of the system
(D − W)y = λDy.   (16)
As we saw above, the generalized eigensystem in (16) can be transformed into a standard
eigenvalue problem of D^{-1/2}(D − W)D^{-1/2} x = λx. Solving a standard eigenvalue problem for all eigenvectors takes O(n³) operations, where n is
the number of nodes in the graph. This becomes impractical for image segmentation applications
where n is the number of pixels in an image. Fortunately, our graph partitioning has the
following properties: 1) the graphs often are only locally connected and the resulting eigensystem
are very sparse, 2) only the top few eigenvectors are needed for graph partitioning,
and 3) the precision requirement for the eigenvectors is low, often only the right sign bit is re-
quired. These special properties of our problem can be fully exploited by an eigensolver called
the Lanczos method. The running time of a Lanczos algorithm is O(mn) +O(mM(n))[11],
where m is the maximum number of matrix-vector computations required, and M(n) is the
cost of a matrix-vector computation of Ax, where A = D^{-1/2}(D − W)D^{-1/2}. Note that the sparse structure in A is identical to that of the weight matrix W. Due to the sparse structure in the
weight matrix W, and therefore A, the matrix-vector computation is only of O(n), where
n is the number of the nodes.
To see why this is the case, we will look at the cost of the inner product of one row of A
with a vector x. Let y = Ax, with y_i = Σ_j A_{ij} x_j. For a fixed i, A_{ij} is only nonzero if node j is in
a spatial neighborhood of i. Hence there are only a fixed number of operations required for
each A i \Delta x, and the total cost of computing Ax is O(n). Figure 10 is graphical illustration
of this special inner product operation for the case of the image segmentation.
Furthermore, it turns out that we can substantially cut down additional connections from
each node to its neighbors by randomly selecting the connections within the neighborhood
for the weighted graph as shown in figure 11. Empirically, we have found that one can remove
up to 90% of the total connections within each of the neighborhoods when the neighborhoods are large, without affecting the eigenvector solution to the system.
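A sketch of this random sparsification on a sparse symmetric W; working on the upper triangle and re-symmetrizing is one simple way to keep the matrix symmetric (diagonal entries, if any, are dropped in this simplified version).

    import numpy as np
    from scipy.sparse import csr_matrix

    def sparsify(W, keep=0.1, seed=0):
        """Randomly keep only a fraction `keep` of the neighborhood connections."""
        rng = np.random.default_rng(seed)
        coo = W.tocoo()
        upper = coo.row < coo.col                      # one entry per undirected edge
        rows, cols, vals = coo.row[upper], coo.col[upper], coo.data[upper]
        sel = rng.random(len(vals)) < keep             # random subset of edges
        r, c, v = rows[sel], cols[sel], vals[sel]
        return csr_matrix((np.concatenate([v, v]),     # mirror to keep W symmetric
                           (np.concatenate([r, c]), np.concatenate([c, r]))),
                          shape=W.shape)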
Figure
10: The inner product operation between A(i, :) and x, of dimension n, is a convolution
operation in the case of image segmentation.
Figure
11: Instead of connecting a node i to all the nodes in its neighborhood (indicated by
the shaded area in (a)), we will only connect i to randomly selected nodes(indicated by the
open circles in (b)).
Putting everything together, each of the matrix-vector computations cost O(n) operations
with a small constant factor. The number m depends on many factors[11]. In our experiments
on image segmentation, we observed that m is typically less than O(n^{1/2}).
Figure
12 shows the smallest eigenvectors computed for the generalized eigensystem with
the weight matrix defined above.
Figure
12: Subplot (1) plots the smallest eigenvalues of the generalized eigenvalue system (11). Subplot (2) - (9) shows the eigenvectors corresponding to the 2nd smallest to the 9th
smallest eigenvalues of the system. The eigenvectors are reshaped to be the size of the
image.
3. Once the eigenvectors are computed, we can partition the graph into two pieces using
the second smallest eigenvector. In the ideal case, the eigenvector should only take on two
discrete values, and the signs of the values can tell us exactly how to partition the graph.
However, our eigenvectors can take on continuous values, and we need to choose a splitting
point to partition it into two parts. There are many different ways of choosing such splitting
point. One can take 0 or the median value as the splitting point, or one can search for the
splitting point such that the resulting partition has the best Ncut(A; B) value. We take
the latter approach in our work. Currently, the search is done by checking l evenly spaced
possible splitting points, and computing the best Ncut among them. In our experiments, the
values in the eigenvectors are usually well separated, and this method of choosing a splitting
point is very reliable even with a small l. Figure 13 shows this process.
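A sketch of this search, given a dense affinity matrix W and the real-valued eigenvector y; the helper names and the number of candidate splitting points l are arbitrary choices made here.

    import numpy as np

    def ncut_value(W, mask):
        """Ncut(A, B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V).
        W is a dense (n, n) affinity matrix; use .toarray() on a sparse one."""
        W = np.asarray(W)
        a, b = mask, ~mask
        cut = W[a][:, b].sum()
        return cut / W[a].sum() + cut / W[b].sum()

    def best_split(W, y, l=20):
        """Try l evenly spaced splitting points on y and keep the smallest Ncut."""
        candidates = np.linspace(y.min(), y.max(), l + 2)[1:-1]  # skip the endpoints
        best_mask, best_score = None, np.inf
        for t in candidates:
            mask = y > t
            if mask.all() or not mask.any():
                continue                                         # skip degenerate splits
            score = ncut_value(W, mask)
            if score < best_score:
                best_mask, best_score = mask, score
        return best_mask, best_score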
4. After the graph is broken into two pieces, we can recursively run our algorithm on the
two partitioned parts. Or equivalently, we could take advantage of the special properties of
the other top eigenvectors as explained in previous section to subdivide the graph based on
those eigenvectors. The recursion stops once the Ncut value exceeds certain limit.
We also impose a stability criterion on the partition. As we saw earlier, and as we see in
the eigenvectors with the 7-9th smallest eigenvalues(figure(12.7-9)), sometimes an eigenvector
can take on the shape of a continuous function rather that the discrete indicator function that
we seek. From the view of segmentation, such an eigenvector is attempting to subdivide an
image region where there is no sure way of breaking it. In fact, if we are forced to partition
the image based on this eigenvector, we will see there are many different splitting points
which have similar Ncut values. Hence the partition will be highly uncertain and unstable.
In our current segmentation scheme, we simply choose to ignore all those eigenvectors which
have smoothly varying eigenvector values. We achieve this by imposing a stability criterion
which measures the degree of smoothness in the eigenvector values. The simplest measure is
based on first computing the histogram of the eigenvector values, and then computing the
ratio between the minimum and maximum values in the bins. When the eigenvector values
are continuously varying, the values in the histogram bins will stay relatively the same, and
the ratio will be relatively high. In our experiments, we find that simple thresholding on the
ratio described above can be used to exclude unstable eigenvectors. We have set that value
to be 0.06 in all our experiments.
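The stability test can be sketched as follows; only the 0.06 threshold comes from the text, while the number of histogram bins is an assumption made here.

    import numpy as np

    def is_stable(y, n_bins=10, threshold=0.06):
        """Histogram-based stability test for an eigenvector: a smoothly varying
        eigenvector gives roughly uniform bin counts, so the min/max bin ratio
        stays high and the vector is rejected as unstable."""
        counts, _ = np.histogram(y, bins=n_bins)
        return counts.min() / counts.max() < threshold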
Figure
14 shows the final segmentation for the image shown in figure 8.
Figure
13: The eigenvector in (a) is a close approximation to a discrete partitioning indicator
vector. Its histogram, shown in (b), indicates that the values in the eigenvector cluster around
two extreme values. (c) and (d) shows the partitioning results with different splitting points
indicated by the arrows in (b). The partition with the best normalized cut value is chosen.
Figure
14: (a) shows the original image of size 80 × 100. Image intensity is normalized to lie within 0 and 1. Subplot (b) - (h) shows the components of the partition with Ncut value less than 0.04. Parameter settings: σ_I = 0.1.
3.3 Recursive 2-way Ncut
In summary, our grouping algorithm consists of the following steps:
1. Given a set of features, set up a weighted graph G = (V, E), compute the weight on
each edge, and summarize the information into W, and D.
2. Solve (D − W)x = λDx for eigenvectors with the smallest eigenvalues.
3. Use the eigenvector with second smallest eigenvalue to bipartition the graph by finding
the splitting point such that Ncut is minimized,
4. Decide if the current partition should be sub-divided by checking the stability of the
cut, and make sure Ncut is below pre-specified value. Recursively repartition the
segmented parts if necessary.
The number of groups segmented by this method is controlled directly by the maximum
allowed Ncut.
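Putting the pieces together, a recursive driver might look like the sketch below; it reuses ncut_bipartition and ncut_value from the earlier sketches, and the dense-matrix assumption and the min_size guard are our own simplifications.

    import numpy as np

    def recursive_ncut(W, ids, max_ncut=0.04, min_size=10):
        """Recursively bipartition the nodes listed in `ids` (indices into the
        dense affinity matrix W) and return a list of segments."""
        if len(ids) < min_size:
            return [ids]
        sub = W[np.ix_(ids, ids)]
        mask = ncut_bipartition(sub)        # zero split; best_split could be used instead
        if mask.all() or not mask.any():
            return [ids]
        if ncut_value(sub, mask) > max_ncut:
            return [ids]                    # stop: this cut is too expensive
        a = [ids[k] for k in range(len(ids)) if mask[k]]
        b = [ids[k] for k in range(len(ids)) if not mask[k]]
        return recursive_ncut(W, a, max_ncut, min_size) + \
               recursive_ncut(W, b, max_ncut, min_size)

    # usage: segments = recursive_ncut(W, list(range(W.shape[0])))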
3.4 Simultaneous K-way cut with multiple eigenvectors
One drawback of the recursive 2-way cut is its treatment of the oscillatory eigenvectors. The
stability criteria provides us from cutting oscillatory eigenvectors, but it also prevents us
cutting the subsequent eigenvectors which might be perfect partitioning vectors. Also the
approach is computationally wasteful; only the second eigenvector is used whereas the next
few small eigenvectors also contain useful partitioning information.
Instead of finding the partition using recursive 2-way cut as described above, one can use all the top eigenvectors to simultaneously obtain a K-way partition. In this method,
the n top eigenvectors are used as n dimensional indicator vectors for each pixel. In the
first step, a simple clustering algorithm, such as the k-means algorithm, is used to obtain an
over-segmentation of the image into k_0 groups. No attempt is made to identify and exclude oscillatory eigenvectors-they exacerbate the oversegmentation, but that will be dealt with subsequently.
In the second step, one can proceed in the following two ways:
1. Greedy pruning: iteratively merge two segments at a time until only k segments are
left. At each merge step, those two segments are merged that minimize the k-way
Ncut criterion defined as:

Ncut_k = cut(A_1, V − A_1)/assoc(A_1, V) + ... + cut(A_k, V − A_k)/assoc(A_k, V),

where A_i is the ith subset of the whole set V.
This computation can be efficiently carried out by iteratively updating the compacted weight matrix W_c, with W_c(i, j) = Σ_{u∈A_i, v∈A_j} w(u, v).
2. Global recursive cut. From the initial k_0 segments we can build a condensed graph G_c = (V_c, E_c), in which each segment A_i corresponds to a node V_c^i of the graph. The weight on each graph edge W_c(i, j) is defined to be the total edge weight from elements in A_i to elements in A_j. From this condensed graph, we then recursively bi-partition the graph according to the Ncut criteria. This can be carried
out either with the generalized eigenvalue system as in section 3.3, or with exhaustive
search in the discrete domain. Exhaustive search is possible in this case since k_0 is small, typically on the order of 100.
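Both options rely on the condensed weight matrix W_c; a direct (unoptimized) sketch of its computation from an oversegmentation label array is given below.

    import numpy as np

    def condensed_weights(W, labels):
        """W_c[a, b] = total edge weight between segment a and segment b,
        given a dense affinity matrix W and integer segment labels per node."""
        W = np.asarray(W)
        k = labels.max() + 1
        Wc = np.zeros((k, k))
        for a in range(k):
            ma = labels == a
            for b in range(k):
                mb = labels == b
                Wc[a, b] = W[np.ix_(ma, mb)].sum()
        return Wc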
We have experimented with this simultaneous k-way cut method on our recent test images.
However, the results presented in this paper are all based on the recursive 2-way partitioning
algorithm outlined in the previous subsection 3.3.
4 Experiments
We have applied our grouping algorithm to image segmentation based on brightness, color,
texture, or motion information. In the monocular case, we construct the graph G = (V, E)
by taking each pixel as a node, and define the edge weight w ij between node i and j as the
product of a feature similarity term and spatial proximity term:
w_{ij} = e^{−||F(i)−F(j)||²/σ_I²} · e^{−||X(i)−X(j)||²/σ_X²}  if ||X(i) − X(j)|| < r, and w_{ij} = 0 otherwise,
where X(i) is the spatial location of node i, and F (i) is a feature vector based on intensity,
color, or texture information at that node, defined as:
F(i) = 1, in the case of segmenting point sets,
F(i) = I(i), the intensity value, for segmenting brightness images,
F(i) = [v, v·s·sin(h), v·s·cos(h)](i), where h, s, and v are the HSV values, for color segmentation,
F(i) = [|I ∗ f_1|, ..., |I ∗ f_n|](i), where the f_i are DOOG filters at various scales and orientations as used in [16], in the case of texture segmentation.
Note that the weight w_{ij} = 0 for any pair of nodes i and j that are more than r pixels apart.
We first tested our grouping algorithm on spatial point sets similar to the one shown in
figure (2). Figure (15) shows a point set and the segmentation result. As we can see from
the figure, the normalized cut criterion is indeed able to partition the point set in a desirable
way as we have argued in section (2).
Figure
15: (a) Point set generated by two Poisson processes, with densities of 2.5 and 1.0 on the left and right clusters respectively; (b) the two marker symbols indicate the partition of the point set in (a).
Figures
(16), (17), (18), and (19) shows the result of our segmentation algorithm on various
brightness images. Figure (16), (17) are synthetic images with added noise. Figure (18)
and (19) are natural images. Note that the "objects" in figure (19) have rather ill-defined
Figure
16: (a) A synthetic image showing a noisy "step" image. Intensity varies from 0 to 1, and Gaussian noise is added. Subplot (b) shows the eigenvector with the second smallest eigenvalue, and subplot (c) shows the resulting partition.
Figure
17: (a) A synthetic image showing three image patches forming a junction. Image
intensity varies from 0 to 1, and Gaussian noise with σ = 0.1 is added. (b)-(d) shows the
top three components of the partition.
Figure
(a) shows a 80x100 baseball scene, image intensity is normalized to lie within 0 and 1. (b)-(h) shows the components of the partition with Ncut value less than 0.04. Parameter settings: σ_I = 0.01, r = 5.
Figure
19: (a) shows a 126x106 weather radar image. (b)-(g) show the components of the
partition with Ncut value less than 0.08. Parameter settings: σ_I = 0.005.
boundaries which would make edge detection perform poorly. Figure (20) shows the segmentation
on a color image, reproduced in gray scale in these transactions. The original image
and many other examples can be found at web site http://www.cs.berkeley.edu/~jshi/Grouping.
Note that in all these examples the algorithm is able to extract the major components
of scene, while ignoring small intra-component variations. As desired, recursive partitioning
can be used to further decompose each piece.
Figure
20: (a) shows a 77x107 color image. (b)-(e) show the components of the partition
with Ncut value less than 0.04. Parameter settings: σ_I = 0.01, r = 5.
Figure
shows preliminary results on texture segmentation for a natural image of
a zebra against a background. Note that the measure we have used is orientation-variant,
and therefore parts of the zebra skin with different stripe orientation should be marked as
separate regions.
In the motion case, we will treat the image sequence as a spatiotemporal data set. Given
an image sequence, a weighted graph is constructed by taking each pixel in the image sequence
as a node, and connecting pixels that are in the spatiotemporal neighborhood of each
other. The weight on each graph edge is defined as:

w_{ij} = e^{−d(i,j)²/σ²},
Figure
21: (a) shows an image of a zebra. The remaining images show the major components
of the partition. The texture features used correspond to convolutions with DOOG
filters[16] at 6 orientations and 5 scales.
where d(i, j) is the "motion distance" between two pixels i and j. Note that X_i in this case
represents the spatial-temporal position of pixel i.
To compute this "motion distance", we will use a motion feature called motion profile.
By motion profile we seek to estimate the probability distribution of image velocity at each
pixel. Let I_t(X) denote an image window centered at the pixel at location X ∈ R² at time t. We denote by P_i(dx) the motion profile of an image patch at node i, I_t(X_i), at time t, corresponding to another image patch I_{t+1}(X_i + dx) at time t + 1. P_i(dx) can be estimated by first computing the similarity S_i(dx) between I_t(X_i) and I_{t+1}(X_i + dx), and normalizing it to get a probability distribution:

P_i(dx) = S_i(dx) / Σ_{dx} S_i(dx).

There are many ways one can compute similarity between two image patches; we will use a measure that is based on the sum of squared differences (SSD) computed over a local neighborhood of image patch I_t(X_i). The "motion distance" between two image pixels is then defined as one minus the cross-correlation of the motion profiles:

d(i, j) = 1 − Σ_{dx} P_i(dx) P_j(dx).
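A sketch of the motion-profile machinery; the exponential conversion of SSD scores into similarities is one reasonable choice made here, not necessarily the one used in the paper.

    import numpy as np

    def motion_profile(ssd_scores):
        """Turn SSD matching scores over candidate displacements dx into a
        probability distribution P_i(dx). Lower SSD means higher similarity;
        the exponential conversion below is an illustrative choice."""
        s = np.exp(-ssd_scores / (ssd_scores.mean() + 1e-9))
        return s / s.sum()

    def motion_distance(profile_i, profile_j):
        """d(i, j) = 1 - cross-correlation of the two motion profiles."""
        return 1.0 - np.dot(profile_i, profile_j)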
In figure (22) and (23) we show results of the normalized cut algorithm on a synthetic
random dot motion sequence and an indoor motion sequence respectively. For more elaborate
discussion on motion segmentation using normalized cut, as well as how to segment and
track over long image sequences, readers might want to refer to our paper[21].
4.1 Computation time
As we saw from section 3.2, the running time of the normalized cut algorithm is O(mn),
where n is the number of pixels, and m is the number of steps Lanczos takes to converge. On
the 100 × 120 test images shown here, the normalized cut algorithm takes about 2 minutes on an Intel Pentium 200MHz machine.
Figure
22: Row (a) of this plot shows the six frames of a synthetic random dot image
sequence. Row (b) shows the outlines of the two moving patches in this image sequence.
The outlines shown here are for illustration purposes only. Row (1)-(3) shows the top three
partitions of this image sequence that have Ncut values less than 0.05. The segmentation
algorithm produces 3D space-time partitions of the image sequence. Cross-sections of those
partitions are shown here. The original image size is 100 × 100, and the segmentation is computed using image patches (superpixels) of size 3 × 3. Each image patch is connected to
other image patches that are less than 5 superpixels away in spatial distance, and 3 frames
away in temporal distance.
Figure
23: Subimage (a) and (b) shows two frames of an image sequence. Segmentation
results on this two frame image sequence are shown in subimage (1) to (5). Segments in (1)
and (2) correspond to the person in the foreground, and segments in (3) to (5) correspond to
the background. The reason that the head of the person is segmented away from the body is
that although they have similar motion, their motion profiles are different. The head region
contains 2D textures and the motion profiles are more peaked, while in the body region the
motion profiles are more spread out. Segment (3) is broken away from (4) and (5) for the
same reason.
A multi-resolution implementation can be used to reduce this running time further on larger images. In our current experiments with this implementation, the running time on a 300 × 400 image can be reduced to about 20 seconds on an Intel Pentium 300MHz machine.
Furthermore, the bottleneck of the computation, a sparse matrix-vector multiplication step,
can be easily parallelized taking advantage of future computer chip designs.
In our current implementation, the sparse eigenvalue decomposition is computed using
the LASO2 numerical package developed by D. Scott.
4.2 Choice of graph edge weight
In the examples shown here, we used an exponential weighting function of the form w = e^{−d(x)²/σ²} on the weighted graph edge with feature similarity d(x). The value of σ is typically set to 10 to 20 percent of the total range of the feature distance function d(x). The exponential weighting function is chosen here for its relative simplicity as well as neutrality, since the focus of this paper is on developing a general segmentation procedure, given a feature similarity measure.
We found this choice of weight function is quite adequate for typical image and feature
spaces. Section 6.1 shows the effect of using different weighting functions and parameters on
the output of the normalized cut algorithm.
However, the general problem of defining feature similarity incorporating a variety of cues
is not a trivial one. The grouping cues could be of different abstraction levels and types, and
they could be in conflict with each other. Furthermore, the weighting function could vary
from image region to image region, particularly in a textured image. Some of these issues
are addressed in [15].
5 Relationship to Spectral Graph Theory
The computational approach that we have developed for image segmentation is based on
concepts from spectral graph theory. The core idea to use matrix theory and linear algebra
to study properties of the incidence matrix, W, and the Laplacian matrix, D − W, of the
graph and relate them back to various properties of the original graph. This is a rich area
of mathematics, and the idea of using eigenvectors of the Laplacian for finding partitions of
graphs can be traced back to Cheeger[4], Donath & Hoffman[7], and Fiedler[9]. This area has
also seen contributions by theoretical computer scientists[1, 3, 22, 23]. It can be shown that
our notion of normalized cut is related by a constant factor to the concept of conductance
in[22].
For a tutorial introduction to spectral graph theory, we recommend the recent monograph
by Fan Chung[5]. In this monograph, Chung[5] proposes a "normalized" definition of the
Laplacian, as D^{-1/2}(D − W)D^{-1/2}. The eigenvectors for this "normalized" Laplacian, when multiplied by D^{-1/2}, are exactly the generalized eigenvectors we used to compute normalized
cut. Chung points out that the eigenvalues of this "normalized" Laplacian relate well to
graph invariants for general graph in ways that eigenvalues of the standard Laplacian has
failed to do.
Spectral graph theory provides us some guidance on the goodness of the approximation
to the normalized cut provided by the second eigenvalue of the normalized Laplacian. One
way is through bounds on the normalized Cheeger constant[5] which in our terminology can
be defined as

h_G = min_{A⊂V} cut(A, V − A) / min(assoc(A, V), assoc(V − A, V)).

The eigenvalues of (6) are related to the Cheeger constant by the inequality[5]:

2 h_G ≥ λ_1 ≥ h_G² / 2.
Earlier work on spectral partitioning used the second eigenvectors of the Laplacian of the
graph defined as D − W to partition a graph. The second smallest eigenvalue of D − W is
sometimes known as the Fiedler value. Several results have been derived relating the ratio
cut, and the Fiedler value. A ratio cut of a partition of V, which in fact is the standard definition of the Cheeger constant, is defined as cut(A, V − A)/min(|A|, |V − A|). It was shown that
if the Fiedler value is small, partitioning graph based on the Fiedler vector will lead to good
ratio cut[1][23]. Our derivation in section 2.1 can be adapted (by replacing the matrix D in
the denominators by the identity matrix I) to show that the Fiedler vector is a real valued solution to the problem of min_{A⊂V} [cut(A, V − A)/|A| + cut(A, V − A)/|V − A|], which we can call the average cut.
Although average cut looks similar to the normalized cut, average cut does not have the
important property of having a simple relationship to the average association, which can be
analogously defined as assoc(A, A)/|A| + assoc(B, B)/|B|. Consequently, one cannot simultaneously
minimize the disassociation across the partitions, while maximizing the association within
the groups. When we applied both techniques to the image segmentation problem, we found
that the normalized cut produces better results in practice. There are also other explanations
why the normalized cut has better behavior from graph theoretical point of view, as pointed
out by Chung[5].
As far as we are aware, our work, first presented in[20], represents the first application
of spectral partitioning to computer vision or image analysis. There is however one application
area that has seen substantial application of spectral partitioning-the area of parallel
scientific computing. The problem there is to balance the workload over multiple processors
taking into account communication needs. One of the early papers is [18]. The generalized
eigenvalue approach was first applied to graph partitioning by [8] for dynamically balancing
computational load in a parallel computer. Their algorithm is motivated by [13]'s paper on
representing a hypergraph in a Euclidean Space.
5.1 A physical interpretation
As one might expect, a physical analogy can be set up for the generalized eigenvalue system
(6) that we used to approximate the solution of normalized cut. We can construct a spring-mass
system from the weighted graph by taking graph nodes as physical nodes and graph
edges as springs connecting each pair of nodes. Furthermore, we will define the graph edge
weight as the spring stiffness, and the total edge weights connecting to a node as its mass.
Imagine what would happen if we were to give a hard shake to this spring-mass system
forcing the nodes to oscillate in the direction perpendicular to the image plane. Nodes that
have stronger spring connections among them will likely oscillate together. As the shaking
become more violent, weaker springs connecting to this group of node will be over-stretched.
Eventually the group will "pop" off from the image plane. The overall steady state behavior
of the nodes can be described by its fundamental mode of oscillation. In fact, it can be
shown that the fundamental modes of oscillation of this spring mass system are exactly the
generalized eigenvectors of (6).
Let k_ij be the spring stiffness connecting nodes i and j. Define K to be the n × n stiffness matrix, with K(i, i) = Σ_j k_ij and K(i, j) = −k_ij. Define the diagonal n × n mass matrix M as M(i, i) = Σ_j k_ij. Let x(t) be the n × 1 vector describing the motion of each node. This spring-mass dynamic system can be described by:

M x''(t) + K x(t) = 0.

Assuming the solutions take the form x(t) = v cos(ωt + φ), the steady state solutions of this spring-mass system satisfy:

K v = ω² M v,

analogous to equation (6) for normalized cut.
Each solution pair (ω_k, v_k) describes a fundamental mode of the spring-mass system. The eigenvectors v_k give the steady state displacement of the oscillation in each mode, and the eigenvalues ω_k² give the energy required to sustain each mode of oscillation.
Therefore, finding graph partitions that have small normalized cut values is, in effect, the
same as finding a way to "pop" off image regions with minimal effort.
6 Relationship to other graph theoretic approaches to
image segmentation
In the computer vision community, there has been some previous work on image segmentation
formulated as a graph partition problem. Wu&Leahy[25] use the minimum cut
criterion for their segmentation. As mentioned earlier, our criticism of this criterion is that it
tends to favor cutting off small regions which is undesirable in the context of image segmen-
tation. In an attempt to get more balanced partitions, Cox et.al. [6] seek to minimize the
ratio cut(A, V − A)/weight(A), for some function weight(A) of the set A. When weight(A) is
taken to be the sum of the elements in A, we see that this criterion becomes one of the
terms in the definition of average cut above. Cox et. al. use an efficient discrete algorithm
to solve their optimization problem assuming the graph is planar.
Sarkar & Boyer[19] use the eigenvector with the largest eigenvalue of the system W x = λx for finding the most coherent region in an edge map. Using a similar derivation as in section (2.1), we can see that the first largest eigenvector of their system approximates the optimum of assoc(A, A)/|A| over A ⊂ V, and the second largest eigenvector approximates the optimum of assoc(A, A)/|A| + assoc(B, B)/|B| over A, B ⊂ V. However, the approximation is not tight, and there is no guarantee that the two sets found in this way form a valid partition.
As we should see later in the section , this situation can happen quite often in practice. Since
this algorithm is essentially looking for clusters that have tight within-grouping similarity,
we will call this criteria average association.
6.1 Comparison with related eigenvector based methods
The normalized cut formulation has certain resemblance to the average cut, the standard
spectral graph partitioning, as well as average association formulation. All these three algorithms
can be reduced to solving certain eigenvalue systems. How are they related to each
other?
Figure
24 summarizes the relationship between these three algorithms. On one hand, both
the normalized cut and the average cut algorithm are trying to find a "balanced partition"
of a weighted graph, while on the other hand, the normalized association and the average
association are trying to find "tight" clusters in the graph. Since the normalized association
is exactly the normalized cut value, the normalized cut formulation seeks a balance
between the goal of clustering and segmentation. It is, therefore, not too surprising to see
that the normalized cut vector can be approximated with the generalized eigenvector of
(D − W)y = λDy, as well as that of Wy = (1 − λ)Dy.
Judging from the discrete formulations of these three grouping criteria, it can be seen that the average association, assoc(A, A)/|A| + assoc(B, B)/|B|, has a bias for finding tight clusters. Therefore
it runs the risk of becoming too greedy in finding small but tight clusters in the data.
This might be perfect for data that are Gaussian distributed. However for typical data in
the real world that are more likely to be made up of a mixture of various different types of
Figure
24: Relationship between normalized cut and other eigenvector based partitioning
techniques. Compared to the average cut and average association formulation, normalized
cut seeks a balance between the goal of finding clumps and finding splits.
distributions, this bias in grouping will have undesired consequences, as we shall illustrate
in the examples below.
For average cut, cut(A, B)/|A| + cut(A, B)/|B|, the opposite problem arises - one cannot ensure the
two partitions computed will have tight within-group similarity. This becomes particularly
problematic if the dissimilarity among the different groups varies from one to another, or if
there are several possible partitions all with similar average cut values.
To illustrate these points, let us first consider the set of randomly distributed 1D data
points shown in figure 25. The data are made up of two subsets of points, one randomly
distributed from 0 to 0.5 and the other from 0.65 to 1.0. Each data point is taken as a node
in the graph, and the weight of the graph edge connecting two points is defined to be a
monotonically decreasing function of the distance between the two nodes. We will use three
monotonically decreasing weighting functions, defined on the distance d(x), with different
rates of fall-off. The three weighting functions are plotted in figures 26(a), 27(a), and 28(a).
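The exact form of the three weighting functions is not fully recoverable from the text; the sketch below therefore assumes an exponential fall-off exp(-d(x)^2 / sigma), with sigma set to 0.1 and 0.2 for the fast and medium cases as in figures 26 and 28, and to an assumed value of 1.0 for the slow case. The point counts follow figure 25.

```python
import numpy as np

rng = np.random.default_rng(0)
# 20 points in [0, 0.5] and 12 points in [0.65, 1.0], as in figure 25.
x = np.concatenate([rng.uniform(0.0, 0.5, 20), rng.uniform(0.65, 1.0, 12)])

def weight_matrix(points, sigma):
    """Edge weight decreases monotonically with 1D distance; an
    exponential fall-off with scale sigma is assumed for illustration."""
    d = np.abs(points[:, None] - points[None, :])
    W = np.exp(-d ** 2 / sigma)
    np.fill_diagonal(W, 0.0)       # no self-loops (an assumption of this sketch)
    return W

W_fast   = weight_matrix(x, 0.1)   # fast fall-off, cf. figure 26
W_slow   = weight_matrix(x, 1.0)   # slow fall-off (assumed scale), cf. figure 27
W_medium = weight_matrix(x, 0.2)   # medium fall-off, cf. figure 28
```

Each of these matrices can be passed to the grouping_eigenvectors sketch above to reproduce the qualitative behavior described next.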
The first function, with fall-off parameter 0.1 and plotted in figure 26(a), has the fastest decreasing rate
among the three. With this weighting function, only close-by points are connected, as shown
in the graph weight matrix W plotted in figure 26(b). In this case, average association fails
to find the right partition. Instead it focuses on finding small clusters in each of the two
main subgroups.
The second function, plotted in figure 27(a), has the slowest decreasing
rate among the three. With this weighting function, most points have some non-trivial
connection to the rest. To find a cut of the graph, a number of edges with heavy weights
have to be removed. In addition, the cluster on the right has less within-group similarity
compared with the cluster on the left. In this case, average cut has trouble deciding
where to cut.
The third function, with fall-off parameter 0.2 and plotted in figure 28(a), has a moderate decreasing
rate. With this weighting function, the nearby point connections are balanced against far-away
point connections. In this case, all three algorithms perform well, with normalized cut
producing a clearer solution than the other two methods.
These problems, illustrated in figures 26, 27, and 28, are in fact quite typical in segmenting
real natural images. This is particularly true in the case of texture segmentation. Different
texture regions often have very different within-group similarity, or coherence. It is very
difficult to pre-determine the right weighting function for each image region. Therefore it is
important to design a grouping algorithm that is more tolerant to a wide range of weighting
functions. The advantage of using normalized cut becomes more evident in this case. Figure
29 illustrates this point on a natural texture image shown previously in figure 21.

Figure 25: A set of randomly distributed points in 1D. The first 20 points are randomly
distributed from 0.0 to 0.5, and the remaining 12 points are randomly distributed from 0.65
to 1.0. Segmentation results of these points with different weighting functions are shown in
figures 26, 27, and 28.
7 Conclusion
In this paper, we developed a grouping algorithm based on the view that perceptual grouping
should be a process that aims to extract global impressions of a scene and to provide
a hierarchical description of it. By treating the grouping problem as a graph partitioning
problem, we proposed the normalized cut criterion for segmenting the graph. Normalized
cut is an unbiased measure of disassociation between sub-groups of a graph, and it has the
nice property that minimizing the normalized cut leads directly to maximizing the normalized
association, which is an unbiased measure of the total association within the sub-groups. In
finding an efficient algorithm for computing the minimum normalized cut, we showed that
a generalized eigenvalue system provides a real-valued solution to our problem.
Figure 26: A weighting function with a fast rate of fall-off (parameter 0.1), shown in subplot
(a) as a solid line. The dotted lines show the two alternative weighting functions used in
figures 27 and 28. Subplot (b) shows the corresponding graph weight matrix W. The two
columns (1) and (2) below show the first and second extreme eigenvectors for the normalized
cut (row 1), average cut (row 2), and average association (row 3). For both normalized cut
and average cut, the smallest eigenvector is a constant vector, as predicted. In this case, both
normalized cut and average cut perform well, while the average association fails to do the
right thing; instead it tries to pick out isolated small clusters.
Figure 27: A weighting function with a slow rate of fall-off, shown in subplot
(a) as a solid line. The dotted lines show the two alternative weighting functions used in
figures 26 and 28. Subplot (b) shows the corresponding graph weight matrix W. The two columns
(1) and (2) below show the first and second extreme eigenvectors for the normalized cut (row
1), average cut (row 2), and average association (row 3). In this case, both normalized cut
and average association give the right partition, while the average cut has trouble deciding
where to cut.
Figure 28: A weighting function with a medium rate of fall-off (parameter 0.2), shown in
subplot (a) as a solid line. The dotted lines show the two alternative weighting functions used
in figures 26 and 27. Subplot (b) shows the corresponding graph weight matrix W. The two
columns (1) and (2) below show the first and second extreme eigenvectors for the normalized
cut (row 1), average cut (row 2), and average association (row 3). All three algorithms
perform satisfactorily in this case, with normalized cut producing a clearer solution than the
other two cuts.
Figure 29: Normalized cut and average association results on the zebra image in figure
21. Subplot (a) shows the second largest eigenvector of Wx = λDx, approximating the
normalized cut vector. Subplots (b)-(e) show the first to fourth largest eigenvectors of
Wx = λx, approximating the average association vector, using the same graph weight
matrix. In this image, pixels on the zebra body have, on average, a lower degree of coherence
than the pixels in the background. The average association, with its tendency to find tight
clusters, partitions out only small clusters in the background. The normalized cut algorithm,
having to balance the goal of clustering and segmentation, finds the better partition in this
case.
A computational method based on this idea has been developed, and applied to segmentation
of brightness, color, and texture images. Results of experiments on real and synthetic
images are very encouraging, and illustrate that the normalized cut criterion does indeed
satisfy our initial goal of extracting the "big picture" of a scene.
Acknowledgment
This research was supported by ARO grant DAAH04-96-1-0341 and an NSF Graduate Fellowship
to J. Shi. We thank Christos Papadimitriou for supplying the proof of NP-completeness for
normalized cuts on a grid. In addition, we wish to acknowledge Umesh Vazirani and Alistair
Sinclair for discussions on graph theoretic algorithms, and Inderjit Dhillon and Mark Adams
for useful pointers to numerical packages. Thomas Leung, Serge Belongie, Yair Weiss, and
other members of the computer vision group at U.C. Berkeley provided much useful feedback
on our algorithm.
--R
Visual Reconstruction.
Eigenvalues and graph bisection: an average-case analysis
A lower bound for the smallest eigenvalue of the laplacian.
Spectral Graph Theory.
Ratio regions: a technique for image segmentation.
Lower bounds for the partitioning of graphs.
An improved spectral bisection algorithm and its application to dynamic load balancing.
A property of eigenvectors of nonnegative symmetric matrices and its applications to graph theory.
Stochastic relaxation
Matrix computations.
Algorithms for Clustering Data.
A representation of hypergraphs in the euclidean space.
Constructing simple stable descriptions for image partitioning.
Textons, contours and regions: Cue integration in image segmentation.
Preattentive texture discrimination with early vision mechanisms
Optimal approximations by piecewise smooth functions
Partitioning sparse matrices with eigenvectors of graphs.
Quantitative measures of change based on feature organization: Eigenvalues and eigenvectors
Normalized cuts and image segmentation.
Motion segmentation and tracking using normalized cuts.
Approximative counting
Disk packings and planar separators.
Laws of organization in perceptual forms (partial translation).
An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation.
--TR
--CTR
Jaehwan Kim , Seungjin Choi, Semidefinite spectral clustering, Pattern Recognition, v.39 n.11, p.2025-2035, November, 2006
Lakshman Prasad , Alexei N. Skourikhine, Vectorized image segmentation via trixel agglomeration, Pattern Recognition, v.39 n.4, p.501-514, April, 2006
Segmentation of Vector Field Using Green Function and Normalized Cut, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.106, October 22-24,
Fei Ma , Mariusz Bajger , John P. Slavotinek , Murk J. Bottema, Two graph theory based methods for identifying the pectoral muscle in mammograms, Pattern Recognition, v.40 n.9, p.2592-2602, September, 2007
Qiankun Zhao , Prasenjit Mitra , C. Lee Giles, Image annotation by hierarchical mapping of features, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Dan Kushnir , Meirav Galun , Achi Brandt, Fast multiscale clustering and manifold identification, Pattern Recognition, v.39 n.10, p.1876-1891, October, 2006
Xiaofeng Zhang , William K. Cheung , C. H. Li, Graph-Based Abstraction for Privacy Preserving Manifold Visualization, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.94-97, December 18-22, 2006
Stella X. Yu , Jianbo Shi, Segmentation Given Partial Grouping Constraints, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.2, p.173-183, January 2004
Richard Nock , Frank Nielsen, Statistical Region Merging, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.11, p.1452-1458, November 2004
Mikhail Belkin , John Goldsmith, Using eigenvectors of the bigram graph to infer morpheme identity, Proceedings of the ACL-02 workshop on Morphological and phonological learning, p.41-47, July 11, 2002
Arik Azran , Zoubin Ghahramani, A new approach to data driven clustering, Proceedings of the 23rd international conference on Machine learning, p.57-64, June 25-29, 2006, Pittsburgh, Pennsylvania
Robert Jenssen , Deniz Erdogmus , Kenneth E. Hild, II , Jose C. Principe , Torbjrn Eltoft, Information cut for clustering using a gradient descent approach, Pattern Recognition, v.40 n.3, p.796-806, March, 2007
Steven J. Simske , Jordi Arnabat, User-directed analysis of scanned images, Proceedings of the ACM symposium on Document engineering, November 20-22, 2003, Grenoble, France
S. H. Srinivasan, Spectral matching of bipartite graphs, Design and application of hybrid intelligent systems, IOS Press, Amsterdam, The Netherlands,
Laurent Favreau , Lionel Reveret , Christine Depraz , Marie-Paule Cani, Animal gaits from video: comparative studies, Graphical Models, v.68 n.2, p.212-234, March 2006
Volker Roth , Julian Laub , Motoaki Kawanabe , Joachim M. Buhmann, Optimal Cluster Preserving Embedding of Nonmetric Proximity Data, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.12, p.1540-1551, December
Dengyong Zhou , Jiayuan Huang , Bernhard Schlkopf, Learning from labeled and unlabeled data on a directed graph, Proceedings of the 22nd international conference on Machine learning, p.1036-1043, August 07-11, 2005, Bonn, Germany
Huan Wang , Shuicheng Yan , Thomas Huang , Xiaoou Tang, Maximum unfolded embedding: formulation, solution, and application for image clustering, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Inderjit Dhillon , Yuqiang Guan , Brian Kulis, A fast kernel-based multilevel algorithm for graph clustering, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Laurent Favreau , Lionel Reveret , Christine Depraz , Marie-Paule Cani, Animal gaits from video, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Vincent. S. Tseng , Ja-Hwung Su , Bo-Wen Wang , Yu-Ming Lin, Web image annotation by fusing visual features and textual information, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Xiaofei He , Deng Cai , Wanli Min, Statistical and computational analysis of locality preserving projection, Proceedings of the 22nd international conference on Machine learning, p.281-288, August 07-11, 2005, Bonn, Germany
Qiankun Zhao , Tie-Yan Liu , Sourav S. Bhowmick , Wei-Ying Ma, Event detection from evolution of click-through data, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Wen Wu , Jie Yang, SmartLabel: an object labeling tool using iterated harmonic energy minimization, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Kai Zhang , Ivor W. Tsang , James T. Kwok, Maximum margin clustering made practical, Proceedings of the 24th international conference on Machine learning, p.1119-1126, June 20-24, 2007, Corvalis, Oregon
Laurent Guigues , Herv Le Men , Jean-Pierre Cocquerez, The hierarchy of the cocoons of a graph and its application to image segmentation, Pattern Recognition Letters, v.24 n.8, p.1059-1066, May
Bernd Fischer , Joachim M. Buhmann, Bagging for Path-Based Clustering, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.11, p.1411-1415, November
Chris Ding , Xiaofeng He, Linearized cluster assignment via spectral ordering, Proceedings of the twenty-first international conference on Machine learning, p.30, July 04-08, 2004, Banff, Alberta, Canada
Wei Xu , Xin Liu , Yihong Gong, Document clustering based on non-negative matrix factorization, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Peter Gorniak , Deb Roy, Understanding complex visually referring utterances, Proceedings of the HLT-NAACL workshop on Learning word meaning from non-linguistic data, p.14-21, May 31,
Yang , Da-You Liu, A heuristic clustering algorithm for mining communities in signed networks, Journal of Computer Science and Technology, v.22 n.2, p.320-328, March 2007
Bernd Fischer , Joachim M. Buhmann, Path-Based Clustering for Grouping of Smooth Curves and Texture Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.513-518, April
David M. Blei , Michael I. Jordan, Modeling annotated data, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Wei Xu , Yihong Gong, Document clustering by concept factorization, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Inderjit S. Dhillon , Yuqiang Guan , Brian Kulis, Kernel k-means: spectral clustering and normalized cuts, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Huaijun Qiu , Edwin R. Hancock, Graph matching and clustering using spectral partitions, Pattern Recognition, v.39 n.1, p.22-34, January, 2006
Dianxun Shuai , Yumin Dong , Qing Shuai, A new data clustering Generalized cellular automata, Information Systems, v.32 n.7, p.968-977, November, 2007
Unsupervised relation disambiguation using spectral clustering, Proceedings of the COLING/ACL on Main conference poster sessions, p.89-96, July 17-18, 2006, Sydney, Australia
Amy McGovern , Lisa Friedland , Michael Hay , Brian Gallagher , Andrew Fast , Jennifer Neville , David Jensen, Exploiting relational structure to understand publication patterns in high-energy physics, ACM SIGKDD Explorations Newsletter, v.5 n.2, December
Rmer Rosales , Kannan Achan , Brendan Frey, Learning to cluster using local neighborhood structure, Proceedings of the twenty-first international conference on Machine learning, p.87, July 04-08, 2004, Banff, Alberta, Canada
Dirk Walther , Ueli Rutishauser , Christof Koch , Pietro Perona, Selective visual attention enables learning and recognition of multiple objects in cluttered scenes, Computer Vision and Image Understanding, v.100 n.1-2, p.41-63, October 2005
Brian Kulis , Sugato Basu , Inderjit Dhillon , Raymond Mooney, Semi-supervised graph clustering: a kernel approach, Proceedings of the 22nd international conference on Machine learning, p.457-464, August 07-11, 2005, Bonn, Germany
Xifeng Yan , X. Jasmine Zhou , Jiawei Han, Mining closed relational graphs with connectivity constraints, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Arik Azran, The rendezvous algorithm: multiclass semi-supervised learning with Markov random walks, Proceedings of the 24th international conference on Machine learning, p.49-56, June 20-24, 2007, Corvalis, Oregon
Long , Zhongfei (Mark) Zhang , Philip S. Yu, Co-clustering by block value decomposition, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
zgr imek , Alicia P. Wolfe , Andrew G. Barto, Identifying useful subgoals in reinforcement learning by local graph partitioning, Proceedings of the 22nd international conference on Machine learning, p.816-823, August 07-11, 2005, Bonn, Germany
Kai Zhang , James T. Kwok, Block-quantized kernel matrix for fast spectral embedding, Proceedings of the 23rd international conference on Machine learning, p.1097-1104, June 25-29, 2006, Pittsburgh, Pennsylvania
Gina-Anne Levow, Unsupervised and semi-supervised learning of tone and pitch accent, Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, p.224-231, June 04-09, 2006, New York, New York
P. Merchn , A. Adn, Exploration trees on highly complex scenes: A new approach for 3D segmentation, Pattern Recognition, v.40 n.7, p.1879-1898, July, 2007
Sridhar Mahadevan, Adaptive mesh compression in 3D computer graphics using multiscale manifold learning, Proceedings of the 24th international conference on Machine learning, p.585-592, June 20-24, 2007, Corvalis, Oregon
Steven J. Simske , Scott C. Baggs, Digital capture for automated scanner workflows, Proceedings of the 2004 ACM symposium on Document engineering, October 28-30, 2004, Milwaukee, Wisconsin, USA
Yuxin Peng , Chong-Wah Ngo, Clip-based similarity measure for hierarchical video retrieval, Proceedings of the 6th ACM SIGMM international workshop on Multimedia information retrieval, October 15-16, 2004, New York, NY, USA
Hany Farid, Exposing digital forgeries in scientific images, Proceeding of the 8th workshop on Multimedia and security, September 26-27, 2006, Geneva, Switzerland
Mustafa Ozden , Ediz Polat, A color image segmentation approach for content-based image retrieval, Pattern Recognition, v.40 n.4, p.1318-1325, April, 2007
Xiaoli Zhang Fern , Carla E. Brodley, Solving cluster ensemble problems by bipartite graph partitioning, Proceedings of the twenty-first international conference on Machine learning, p.36, July 04-08, 2004, Banff, Alberta, Canada
Juliana F. Camapum Wanderley , Mark H. Fisher, Spatial-feature parametric clustering applied to motion-based segmentation in camouflage, Computer Vision and Image Understanding, v.85 n.2, p.144-157, February 2002
Nathan A. Carr , John C. Hart, Two algorithms for fast reclustering of dynamic meshed surfaces, Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, July 08-10, 2004, Nice, France
Xin-Jing Wang , Wei-Ying Ma , Lei Zhang , Xing Li, Iteratively clustering web images based on link and attribute reinforcements, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Tatsuya Ishihara , Hironobu Takagi , Takashi Itoh , Chieko Asakawa, Analyzing visual layout for a non-visual presentation-document interface, Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility, October 23-25, 2006, Portland, Oregon, USA
Andrea Torsello , Edwin R. Hancock, Graph embedding using tree edit-union, Pattern Recognition, v.40 n.5, p.1393-1405, May, 2007
Cheng Bing , Zheng Nanning , Wang Ying , Zhang Yongping , Zhang Zhihua, Color image segmentation based on edge-preservation smoothing and soft C-means clustering, Machine Graphics & Vision International Journal, v.11 n.2/3, p.183-194, 2002
Peter Gorniak , Deb Roy, A visually grounded natural language interface for reference to spatial scenes, Proceedings of the 5th international conference on Multimodal interfaces, November 05-07, 2003, Vancouver, British Columbia, Canada
J. Jeon , V. Lavrenko , R. Manmatha, Automatic image annotation and retrieval using cross-media relevance models, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Kannan , Santosh Vempala , Adrian Vetta, On clusterings: Good, bad and spectral, Journal of the ACM (JACM), v.51 n.3, p.497-515, May 2004
Xiaofei He , Deng Cai , Haifeng Liu , Jiawei Han, Image clustering with tensor representation, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Xiang Ji , Wei Xu, Document clustering with prior knowledge, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Miguel . Carreira-Perpin, Fast nonparametric clustering with Gaussian blurring mean-shift, Proceedings of the 23rd international conference on Machine learning, p.153-160, June 25-29, 2006, Pittsburgh, Pennsylvania
Igor Malioutov , Regina Barzilay, Minimum cut model for spoken lecture segmentation, Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL, p.25-32, July 17-18, 2006, Sydney, Australia
Dengyong Zhou , Christopher J. C. Burges, Spectral clustering and transductive learning with multiple views, Proceedings of the 24th international conference on Machine learning, p.1159-1166, June 20-24, 2007, Corvalis, Oregon
J. P. Lewis , Nickson Fong , Xie XueXiang , Seah Hock Soon , Tian Feng, More optimal strokes for NPR sketching, Proceedings of the 3rd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, November 29-December 02, 2005, Dunedin, New Zealand
Xin Zheng , Deng Cai , Xiaofei He , Wei-Ying Ma , Xueyin Lin, Locality preserving clustering for image database, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Tomer Hertz , Aharon Bar-Hillel , Daphna Weinshall, Boosting margin based distance functions for clustering, Proceedings of the twenty-first international conference on Machine learning, p.50, July 04-08, 2004, Banff, Alberta, Canada
Natasha Gelfand , Leonidas J. Guibas, Shape segmentation using local slippage analysis, Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, July 08-10, 2004, Nice, France
Munirathnam Srikanth , Joshua Varner , Mitchell Bowden , Dan Moldovan, Exploiting ontologies for automatic image annotation, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Jingrui He , Hanghang Tong , Mingjing Li , Wei-Ying Ma , Changshui Zhang, Multiple random walk and its application in content-based image retrieval, Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval, November 10-11, 2005, Hilton, Singapore
Sameer Agarwal , Kristin Branson , Serge Belongie, Higher order learning with graphs, Proceedings of the 23rd international conference on Machine learning, p.17-24, June 25-29, 2006, Pittsburgh, Pennsylvania
Jian Yao , Zhonefei Zhang, Hierarchical shadow detection for color aerial images, Computer Vision and Image Understanding, v.102 n.1, p.60-69, April 2006
Long Quan , Ping Tan , Gang Zeng , Lu Yuan , Jingdong Wang , Sing Bing Kang, Image-based plant modeling, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Nicolas Loeff , Cecilia Ovesdotter Alm , David A. Forsyth, Discriminating image senses by clustering with multimodal features, Proceedings of the COLING/ACL on Main conference poster sessions, p.547-554, July 17-18, 2006, Sydney, Australia
Yining Deng , b. s. Manjunath, Unsupervised Segmentation of Color-Texture Regions in Images and Video, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.8, p.800-810, August 2001
Greg Hamerly , Charles Elkan, Alternatives to the k-means algorithm that find better clusterings, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
Jia Li , James Z. Wang, Automatic Linguistic Indexing of Pictures by a Statistical Modeling Approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.9, p.1075-1088, September
Yixin Chen , James Z. Wang , Robert Krovetz, Content-based image retrieval by clustering, Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, November 07-07, 2003, Berkeley, California
Bin Gao , Tie-Yan Liu , Guang Feng , Tao Qin , Qian-Sheng Cheng , Wei-Ying Ma, Hierarchical Taxonomy Preparation for Text Categorization Using Consistent Bipartite Spectral Graph Copartitioning, IEEE Transactions on Knowledge and Data Engineering, v.17 n.9, p.1263-1273, September 2005
Jennifer Neville , David Jensen, Leveraging relational autocorrelation with latent group models, Proceedings of the 4th international workshop on Multi-relational mining, p.49-55, August 21-21, 2005, Chicago, Illinois
Deng Cai , Zheng Shao , Xiaofei He , Xifeng Yan , Jiawei Han, Mining hidden community in heterogeneous social networks, Proceedings of the 3rd international workshop on Link discovery, p.58-65, August 21-25, 2005, Chicago, Illinois
Ying Liu , Dengsheng Zhang , Guojun Lu , Wei-Ying Ma, A survey of content-based image retrieval with high-level semantics, Pattern Recognition, v.40 n.1, p.262-282, January, 2007
Xiaobin Li , Zheng Tian, Optimum cut-based clustering, Signal Processing, v.87 n.11, p.2491-2502, November, 2007
Mezaris , Ioannis Kompatsiaris , Michael G. Strintzis, Region-based image retrieval using an object ontology and relevance feedback, EURASIP Journal on Applied Signal Processing, v.2004 n.1, p.886-901, 1 January 2004
Tijl De Bie , Nello Cristianini, Fast SDP Relaxations of Graph Cut Clustering, Transduction, and Other Combinatorial Problems, The Journal of Machine Learning Research, 7, p.1409-1436, 12/1/2006
Sagi Katz , Ayellet Tal, Hierarchical mesh decomposition using fuzzy clustering and cuts, ACM Transactions on Graphics (TOG), v.22 n.3, July
Cem Unsalan , Kim L. Boyer, A Theoretical and Experimental Investigation of Graph Theoretical Measures for Land Development in Satellite Imagery, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.4, p.575-589, April 2005
Shigeru Owada , Frank Nielsen , Takeo Igarashi, Volume catcher, Proceedings of the 2005 symposium on Interactive 3D graphics and games, April 03-06, 2005, Washington, District of Columbia
Vincent S. Tseng , Chon-Jei Lee , Ja-Hwung Su, Classify By Representative Or Associations (CBROA): a hybrid approach for image classification, Proceedings of the 6th international workshop on Multimedia data mining: mining integrated media and complex data, p.61-69, August 21-21, 2005, Chicago, Illinois
James Z. Wang , Jia Li, Learning-based linguistic indexing of pictures with 2--d MHMMs, Proceedings of the tenth ACM international conference on Multimedia, December 01-06, 2002, Juan-les-Pins, France
Aleix M. Martnez , Pradit Mittrapiyanuruk , Avinash C. Kak, On combining graph-partitioning with non-parametric clustering for image segmentation, Computer Vision and Image Understanding, v.95 n.1, p.72-85, July 2004
Mikhail Belkin , Partha Niyogi, Semi-Supervised Learning on Riemannian Manifolds, Machine Learning, v.56 n.1-3, p.209-239
Jingrui He , Mingjing Li , Hong-Jiang Zhang , Hanghang Tong , Changshui Zhang, Manifold-ranking based image retrieval, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Fernando De la Torre , Takeo Kanade, Discriminative cluster analysis, Proceedings of the 23rd international conference on Machine learning, p.241-248, June 25-29, 2006, Pittsburgh, Pennsylvania
Desmond J. Higham , Gabriela Kalna , Milla Kibble, Spectral clustering and its use in bioinformatics, Journal of Computational and Applied Mathematics, v.204 n.1, p.25-37, July, 2007
Inderjit S. Dhillon, Co-clustering documents and words using bipartite spectral graph partitioning, Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, p.269-274, August 26-29, 2001, San Francisco, California
Jia-Yu Pan , Hyung-Jeong Yang , Christos Faloutsos , Pinar Duygulu, Automatic multimedia cross-modal correlation discovery, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Bin Gao , Tie-Yan Liu , Xin Zheng , Qian-Sheng Cheng , Wei-Ying Ma, Consistent bipartite graph co-partitioning for star-structured high-order heterogeneous data co-clustering, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Tilman Lange , Joachim M. Buhmann, Combining partitions by probabilistic label aggregation, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Long , Zhongfei (Mark) Zhang , Xiaoyun W , Philip S. Yu, Spectral clustering for multi-type relational data, Proceedings of the 23rd international conference on Machine learning, p.585-592, June 25-29, 2006, Pittsburgh, Pennsylvania
Chris H. Q. Ding, Analysis of gene expression profiles: class discovery and leaf ordering, Proceedings of the sixth annual international conference on Computational biology, p.127-136, April 18-21, 2002, Washington, DC, USA
Eyal Amir , Robert Krauthgamer , Satish Rao, Constant factor approximation of vertex-cuts in planar graphs, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Desmond J. Higham, Unravelling small world networks, Journal of Computational and Applied Mathematics, v.158 n.1, p.61-74, 1 September
Gang Wu , Edward Y. Chang , Navneet Panda, Formulating context-dependent similarity functions, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Deng Cai , Xiaofei He , Zhiwei Li , Wei-Ying Ma , Ji-Rong Wen, Hierarchical clustering of WWW image search results using visual, textual and link information, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Xiaofei He , Deng Cai , Ji-Rong Wen , Wei-Ying Ma , Hong-Jiang Zhang, Clustering and searching WWW images using link and page layout analysis, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), v.3 n.2, p.10-es, May 2007
Andrea Torsello , Dzena Hidovic-Rowe , Marcello Pelillo, Polynomial-Time Metrics for Attributed Trees, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1087-1099, July 2005
Ravikrishna Kolluri , Jonathan Richard Shewchuk , James F. O'Brien, Spectral surface reconstruction from noisy point clouds, Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, July 08-10, 2004, Nice, France
Xiaofei He , Shuicheng Yan , Yuxiao Hu , Partha Niyogi , Hong-Jiang Zhang, Face Recognition Using Laplacianfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.3, p.328-340, March 2005
Adrian Barbu , Song-Chun Zhu, Generalizing Swendsen-Wang to Sampling Arbitrary Posterior Probabilities, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.8, p.1239-1253, August 2005
Gl Nildem Demir , A. Sima Uyar , Sule Gndz gdc, Graph-based sequence clustering through multiobjective evolutionary algorithms for web recommender systems, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Samuel Gerber , Tolga Tasdizen , Ross Whitaker, Robust non-linear dimensionality reduction using successive 1-dimensional Laplacian Eigenmaps, Proceedings of the 24th international conference on Machine learning, p.281-288, June 20-24, 2007, Corvalis, Oregon
Long , Zhongfei (Mark) Zhang , Xiaoyun Wu , Philip S. Yu, Relational clustering by symmetric convex coding, Proceedings of the 24th international conference on Machine learning, p.569-576, June 20-24, 2007, Corvalis, Oregon
J. J. Steil , M. Gtting , H. Wersing , E. Krner , H. Ritter, Adaptive scene dependent filters for segmentation and online learning of visual objects, Neurocomputing, v.70 n.7-9, p.1235-1246, March, 2007
Zhuowen Tu , Song-Chun Zhu, Image Segmentation by Data-Driven Markov Chain Monte Carlo, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.5, p.657-673, May 2002
Yixin Chen , James Z. Wang, A Region-Based Fuzzy Feature Matching Approach to Content-Based Image Retrieval, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.9, p.1252-1267, September 2002
Long , Xiaoyun Wu , Zhongfei (Mark) Zhang , Philip S. Yu, Unsupervised learning on k-partite graphs, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Byoung-Ki Jeon , Yun-Beom Jung , Ki-Sang Hong, Image segmentation by unsupervised sparse clustering, Pattern Recognition Letters, v.27 n.14, p.1650-1664, 15 October 2006
Xiaojun Qi , Yutao Han, Incorporating multiple SVMs for automatic image annotation, Pattern Recognition, v.40 n.2, p.728-741, February, 2007
Chris H. Q. Ding , Xiaofeng He , Hongyuan Zha, A spectral method to separate disconnected and nearly-disconnected web graph components, Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, p.275-280, August 26-29, 2001, San Francisco, California
Anthony Hoogs , Roderic Collins , Robert Kaucic , Joseph Mundy, A Common Set of Perceptual Observables for Grouping, Figure-Ground Discrimination, and Texture Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.458-474, April
Bin Gao , Tie-Yan Liu , Tao Qin , Xin Zheng , Qian-Sheng Cheng , Wei-Ying Ma, Web image clustering by consistent utilization of visual features and surrounding texts, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Ian H. Jermyn , Hiroshi Ishikawa, Globally Optimal Regions and Boundaries as Minimum Ratio Weight Cycles, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.10, p.1075-1088, October 2001
Joes Staal , Stiliyan N. Kalitzin , Max A. Viergever, A Trained Spin-Glass Model for Grouping of Image Primitives, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1172-1182, July 2005
Antonio Robles-Kelly , Edwin R. Hancock, A Riemannian approach to graph embedding, Pattern Recognition, v.40 n.3, p.1042-1056, March, 2007
Felipe P. Bergo , Alexandre X. Falco , Paulo A. Miranda , Leonardo M. Rocha, Automatic Image Segmentation by Tree Pruning, Journal of Mathematical Imaging and Vision, v.29 n.2-3, p.141-162, November 2007
Yi Liu , Rong Jin , Joyce Y. Chai, A statistical framework for query translation disambiguation, ACM Transactions on Asian Language Information Processing (TALIP), v.5 n.4, p.360-387, December 2006
Charless Fowlkes , Serge Belongie , Fan Chung , Jitendra Malik, Spectral Grouping Using the Nystrm Method, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.2, p.214-225, January 2004
Exploiting relationships for object consolidation, Proceedings of the 2nd international workshop on Information quality in information systems, June 17-17, 2005, Baltimore, Maryland
Yixin Chen , James Z. Wang, Image Categorization by Learning and Reasoning with Regions, The Journal of Machine Learning Research, 5, p.913-939, 12/1/2004
Yizhou Yu , Johnny T. Chang, Shadow Graphs and 3D Texture Reconstruction, International Journal of Computer Vision, v.62 n.1-2, p.35-60, April-May 2005
Neil Lawrence, Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models, The Journal of Machine Learning Research, 6, p.1783-1816, 12/1/2005
Richard C. Wilson , Edwin R. Hancock , Bin Luo, Pattern Vectors from Algebraic Graph Theory, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1112-1124, July 2005
Qi-Xing Huang , Simon Flry , Natasha Gelfand , Michael Hofer , Helmut Pottmann, Reassembling fractured objects by geometric matching, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Alper Yilmaz , Mubarak Shah, Matching actions in presence of camera motion, Computer Vision and Image Understanding, v.104 n.2, p.221-231, November 2006
Peter Orbanz , Joachim M. Buhmann, Nonparametric Bayesian Image Segmentation, International Journal of Computer Vision, v.77 n.1-3, p.25-45, May 2008
Kobus Barnard , Quanfu Fan , Ranjini Swaminathan , Anthony Hoogs , Roderic Collins , Pascale Rondot , John Kaufhold, Evaluation of Localized Semantics: Data, Methodology, and Experiments, International Journal of Computer Vision, v.77 n.1-3, p.199-217, May 2008
Matthias Heiler , Christoph Schnrr, Natural Image Statistics for Natural Image Segmentation, International Journal of Computer Vision, v.63 n.1, p.5-19, June 2005
Kilian Q. Weinberger , Lawrence K. Saul, Unsupervised Learning of Image Manifolds by Semidefinite Programming, International Journal of Computer Vision, v.70 n.1, p.77-90, October 2006
Okan Arikan, Compression of motion capture databases, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Ying Zhao , George Karypis , Usama Fayyad, Hierarchical Clustering Algorithms for Document Datasets, Data Mining and Knowledge Discovery, v.10 n.2, p.141-168, March 2005
Kobus Barnard , Pinar Duygulu , David Forsyth , Nando de Freitas , David M. Blei , Michael I. Jordan, Matching words and pictures, The Journal of Machine Learning Research, 3, 3/1/2003
Gaurav Harit , Santanu Chaudhury, Clustering in video data: Dealing with heterogeneous semantics of features, Pattern Recognition, v.39 n.5, p.789-811, May, 2006
Jitendra Malik , Serge Belongie , Thomas Leung , Jianbo Shi, Contour and Texture Analysis for Image Segmentation, International Journal of Computer Vision, v.43 n.1, p.7-27, June 2001
Fast Approximate Energy Minimization via Graph Cuts, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.11, p.1222-1239, November 2001
Tao Li , Sheng Ma , Mitsunori Ogihara, Document clustering via adaptive subspace iteration, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Zhuowen Tu , Song-Chun Zhu, Parsing Images into Regions, Curves, and Curve Groups, International Journal of Computer Vision, v.69 n.2, p.223-249, August 2006
Long Quan , Jingdong Wang , Ping Tan , Lu Yuan, Image-Based Modeling by Joint Segmentation, International Journal of Computer Vision, v.75 n.1, p.135-150, October 2007
Frdric Cao , Pablo Mus , Frdric Sur, Extracting Meaningful Curves from Images, Journal of Mathematical Imaging and Vision, v.22 n.2-3, p.159-181, May 2005
Lior Wolf , Amnon Shashua, Feature Selection for Unsupervised and Supervised Inference: The Emergence of Sparsity in a Weight-Based Approach, The Journal of Machine Learning Research, 6, p.1855-1887, 12/1/2005
Mohan Sridharan , Peter Stone, Structure-based color learning on a mobile robot under changing illumination, Autonomous Robots, v.23 n.3, p.161-182, October 2007
Ying Zhao , George Karypis, Empirical and Theoretical Comparisons of Selected Criterion Functions for Document Clustering, Machine Learning, v.55 n.3, p.311-331, June 2004
Han , Hongyuan Zha , C. Lee Giles, Name disambiguation in author citations using a K-way spectral clustering method, Proceedings of the 5th ACM/IEEE-CS joint conference on Digital libraries, June 07-11, 2005, Denver, CO, USA
Keiji Yanai , Nikhil V. Shirahatti , Prasad Gabbur , Kobus Barnard, Evaluation strategies for image understanding and retrieval, Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval, November 10-11, 2005, Hilton, Singapore
Asaad Hakeem , Mubarak Shah, Learning, detection and representation of multi-agent events in videos, Artificial Intelligence, v.171 n.8-9, p.586-605, June, 2007
Francis R. Bach , Michael I. Jordan, Learning Spectral Clustering, With Application To Speech Separation, The Journal of Machine Learning Research, 7, p.1963-2001, 12/1/2006
Andrea Torsello , Antonio Robles-Kelly , Edwin R. Hancock, Discovering Shape Classes using Tree Edit-Distance and Pairwise Clustering, International Journal of Computer Vision, v.72 n.3, p.259-285, May 2007
Zhuowen Tu , Xiangrong Chen , Alan L. Yuille , Song-Chun Zhu, Image Parsing: Unifying Segmentation, Detection, and Recognition, International Journal of Computer Vision, v.63 n.2, p.113-140, July 2005
Ana L. N. Fred , Anil K. Jain, Combining Multiple Clusterings Using Evidence Accumulation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.6, p.835-850, June 2005
Kobus Barnard , Matthew Johnson, Word sense disambiguation with pictures, Artificial Intelligence, v.167 n.1-2, p.13-30, September 2005
Hoiem , Alexei A. Efros , Martial Hebert, Recovering Surface Layout from an Image, International Journal of Computer Vision, v.75 n.1, p.151-172, October 2007
Ali Shokoufandeh , Spiros Mancoridis , Trip Denton , Matthew Maycock, Spectral and meta-heuristic algorithms for software clustering, Journal of Systems and Software, v.77 n.3, p.213-223, September 2005
Laurent Guigues , Jean Pierre Cocquerez , Herv Men, Scale-Sets Image Analysis, International Journal of Computer Vision, v.68 n.3, p.289-317, July 2006
Song Wang , Toshiro Kubota , Jeffrey Mark Siskind , Jun Wang, Salient Closed Boundary Extraction with Ratio Contour, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.4, p.546-561, April 2005
Martin H. C. Law , Mario A. T. Figueiredo , Anil K. Jain, Simultaneous Feature Selection and Clustering Using Mixture Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.9, p.1154-1166, September 2004
Ulrike Luxburg, A tutorial on spectral clustering, Statistics and Computing, v.17 n.4, p.395-416, December 2007
Luminita A. Vese , Tony F. Chan, A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model, International Journal of Computer Vision, v.50 n.3, p.271-293, December 2002
Robert Hanek , Michael Beetz, The Contracting Curve Density Algorithm: Fitting Parametric Curve Models to Images Using Local Self-Adapting Separation Criteria, International Journal of Computer Vision, v.59 n.3, p.233-258, September-October 2004
Jacob Feldman, Perceptual Grouping by Selection of a Logically Minimal Model, International Journal of Computer Vision, v.55 n.1, p.5-25, October
Jens Keuchel , Christoph Schnrr , Christian Schellewald , Daniel Cremers, Binary Partitioning, Perceptual Grouping, and Restoration with Semidefinite Programming, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.11, p.1364-1379, November
Dirk Walther , Christof Koch, 2006 Special Issue: Modeling attention to salient proto-objects, Neural Networks, v.19 n.9, p.1395-1407, November, 2006
Fernando De La Torre , Michael J. Black, A Framework for Robust Subspace Learning, International Journal of Computer Vision, v.54 n.1-3, p.117-142, August-September
Bodo Rosenhahn , Thomas Brox , Joachim Weickert, Three-Dimensional Shape Knowledge for Joint Image Segmentation and Pose Tracking, International Journal of Computer Vision, v.73 n.3, p.243-262, July 2007
Graph Cuts and Efficient N-D Image Segmentation, International Journal of Computer Vision, v.70 n.2, p.109-131, November 2006
Lawrence K. Saul , Sam T. Roweis, Think globally, fit locally: unsupervised learning of low dimensional manifolds, The Journal of Machine Learning Research, 4, p.119-155, 12/1/2003
John P. Eakins, Retrieval of still images by content, Lectures on information retrieval, Springer-Verlag New York, Inc., New York, NY, 2001
Daniel Cremers , Mikael Rousson , Rachid Deriche, A Review of Statistical Approaches to Level Set Segmentation: Integrating Color, Texture, Motion and Shape, International Journal of Computer Vision, v.72 n.2, p.195-215, April 2007
Ritendra Datta , Jia Li , James Z. Wang, Content-based image retrieval: approaches and trends of the new age, Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval, November 10-11, 2005, Hilton, Singapore
Alper Yilmaz , Omar Javed , Mubarak Shah, Object tracking: A survey, ACM Computing Surveys (CSUR), v.38 n.4, p.13-es, 2006
Axel Pinz, Object categorization, Foundations and Trends in Computer Graphics and Vision, v.1 n.4, p.255-353, December 2005 | graph partitioning;image segmentation;grouping |
352648 | A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-Three Old and New Classification Algorithms. | Twenty-two decision tree, nine statistical, and two neural network algorithms are compared on thirty-two datasets in terms of classification accuracy, training time, and (in the case of trees) number of leaves. Classification accuracy is measured by mean error rate and mean rank of error rate. Both criteria place a statistical, spline-based, algorithm called POLYCLSSS at the top, although it is not statistically significantly different from twenty other algorithms. Another statistical algorithm, logistic regression, is second with respect to the two accuracy criteria. The most accurate decision tree algorithm is QUEST with linear splits, which ranks fourth and fifth, respectively. Although spline-based statistical algorithms tend to have good accuracy, they also require relatively long training times. POLYCLASS, for example, is third last in terms of median training time. It often requires hours of training compared to seconds for other algorithms. The QUEST and logistic regression algorithms are substantially faster. Among decision tree algorithms with univariate splits, C4.5, IND-CART, and QUEST have the best combinations of error rate and speed. But C4.5 tends to produce trees with twice as many leaves as those from IND-CART and QUEST. | Introduction
There is much current research in the machine learning and statistics communities
on algorithms for decision tree classifiers. Often the emphasis is on the accuracy of
the algorithms. One study, called the StatLog Project (Michie, Spiegelhalter and
Taylor, 1994), compares the accuracy of several decision tree algorithms against
some non-decision tree algorithms on a large number of datasets. Other studies that
are smaller in scale include Brodley and Utgoff (1992), Brown, Corruble and Pittard
(1993), Curram and Mingers (1994), and Shavlik, Mooney and Towell (1991).
Recently, comprehensibility of the tree structures has received some attention.
Comprehensibility typically decreases with increase in tree size and complexity. If
two trees employ the same kind of tests and have the same prediction accuracy, the
one with fewer leaves is usually preferred. Breslow and Aha (1997) survey methods
of tree simplification to improve their comprehensibility.
A third criterion that has been largely ignored is the relative training time of
the algorithms. The StatLog Project finds that no algorithm is uniformly most
accurate over the datasets studied. Instead, many algorithms possess comparable
accuracy. For such algorithms, excessive training times may be undesirable (Hand,
1997).
The purpose of our paper is to extend the results of the StatLog Project in the
following ways:
1. In addition to classification accuracy and size of trees, we compare the training
times of the algorithms. Although training time depends somewhat on imple-
mentation, it turns out that there are such large differences in times (seconds
versus days) that the differences cannot be attributed to implementation alone.
2. We include some decision tree algorithms that are not included in the StatLog
Project, namely, S-Plus tree (Clark and Pregibon, 1993), T1 (Holte, 1993;
Auer, Holte and Maass, 1995), OC1 (Murthy, Kasif and Salzberg, 1994), LMDT
(Brodley and Utgoff, 1995), and QUEST (Loh and Shih, 1997).
3. We also include several of the newest spline-based statistical algorithms. Their
classification accuracy may be used as benchmarks for comparison with other
algorithms in the future.
4. We study the effect of adding independent noise attributes on the classification
accuracy and (where appropriate) tree size of each algorithm. It turns out that
except possibly for three algorithms, all the others adapt to noise quite well.
5. We examine the scalability of some of the more promising algorithms as the
sample size is increased.
Our experiment compares twenty-two decision tree algorithms, nine classical and
modern statistical algorithms, and two neural network algorithms. Many datasets
are taken from the University of California, Irvine (UCI), Repository of Machine
Learning Databases (Merz and Murphy, 1996). Fourteen of the datasets are from
real-life domains and two are artificially constructed. Five of the datasets were used
in the StatLog Project. To increase the probability of finding statistically significant
differences between algorithms, the number of datasets is doubled by the addition
of noise attributes. The resulting total number of datasets is thirty-two.
Section 2 briefly describes the algorithms and Section 3 gives some background
to the datasets. Section 4 explains the experimental setup used in this study and
Section 5 analyzes the results. The issue of scalability is studied in Section 6 and
conclusions and recommendations are given in Section 7.
2. The algorithms
Only a short description of each algorithm is given. Details may be found in the
cited references. If an algorithm requires class prior probabilities, they are made
proportional to the training sample sizes.
2.1. Trees and rules
CART: We use the version of CART (Breiman, Friedman, Olshen and Stone,
1984) implemented in the cart style of the IND package (Buntine and Caruana, 1992)
with the Gini index of diversity as the splitting criterion. The trees
based on the 0-SE and 1-SE pruning rules are denoted by IC0 and IC1,
respectively. The software is obtained from
http://ic-www.arc.nasa.gov/ic/projects/bayes-group/ind/IND-program.html.
S-Plus tree: This is a variant of the CART algorithm written in the S language
(Becker, Chambers and Wilks, 1988). It is described in Clark and Pregibon
(1993). It employs deviance as the splitting criterion. The best tree is chosen
by ten-fold cross-validation. Pruning is performed with the p.tree() function
in the treefix library (Venables and Ripley, 1997) from the StatLib S Archive
at http://lib.stat.cmu.edu/S/. The 0-SE and 1-SE trees are denoted by
ST0 and ST1 respectively.
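As a point of reference, the two node-impurity measures mentioned above, the Gini index of diversity used by CART/IND and the deviance used by the S-Plus tree, can be written down in a few lines. The Python sketch below only illustrates the formulas; it is not code from either package.

```python
import numpy as np

def gini(counts):
    """Gini index of diversity of a node: 1 - sum_j p_j^2, where p_j is
    the proportion of class j among the cases in the node."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

def deviance(counts):
    """Node deviance: -2 * sum_j n_j * log(p_j), with empty classes
    contributing zero."""
    n = np.asarray(counts, dtype=float)
    p = n / n.sum()
    nz = n > 0
    return -2.0 * np.sum(n[nz] * np.log(p[nz]))

# A candidate split is scored by the resulting decrease in impurity, e.g.,
# gini(parent) - (n_left/n) * gini(left) - (n_right/n) * gini(right).
```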
C4.5: We use Release 8 (Quinlan, 1993; Quinlan, 1996) with the default settings
including pruning (http://www.cse.unsw.edu.au/~quinlan/). After a tree is
constructed, the C4.5 rule induction program is used to produce a set of rules.
The trees are denoted by C4T and the rules by C4R.
FACT: This fast classification tree algorithm is described in Loh and Vanichsetakul
(1988). It employs statistical tests to select an attribute for splitting each
node and then uses discriminant analysis to find the split point. The size of
the tree is determined by a set of stopping rules. The trees based on univariate
splits (splits on a single attribute) are denoted by FTU and those based on linear
combination splits (splits on linear functions of attributes) are denoted by FTL.
The Fortran 77 program is obtained from http://www.stat.wisc.edu/~loh/.
QUEST: This new classification tree algorithm is described in Loh and Shih
(1997). QUEST can be used with univariate or linear combination splits. A
unique feature is that its attribute selection method has negligible bias. If all
the attributes are uninformative with respect to the class attribute, then each
has approximately the same chance of being selected to split a node. Ten-fold
cross-validation is used to prune the trees. The univariate 0-SE and 1-SE trees
are denoted by QU0 and QU1, respectively. The corresponding trees with linear
combination splits are denoted by QL0 and QL1, respectively. The results in
this paper are based on version 1.7.10 of the program. The software is obtained
from http://www.stat.wisc.edu/~loh/quest.html.
IND: This is due to Buntine (1992). We use version 2.1 with the default settings.
IND comes with several standard predefined styles. We compare four Bayesian
styles in this paper: bayes, bayes opt, mml, and mml opt (denoted by IB,
IBO, IM, and IMO, respectively). The opt methods extend the non-opt methods
by growing several different trees and storing them in a compact graph structure.
Although more time and memory intensive, the opt styles can increase
classification accuracy.
OC1: This algorithm is described in Murthy et al. (1994). We use version 3
(http://www.cs.jhu.edu/~salzberg/announce-oc1.html) and compare three
styles. The first one (denoted by OCM) is the default that uses a mixture of univariate
and linear combination splits. The second one (option -a; denoted by
OCU) uses only univariate splits. The third one (option -o; denoted by OCL) uses
only linear combination splits. Other options are kept at their default values.
LMDT: The algorithm is described in Brodley and Utgoff (1995). It constructs
a decision tree based on multivariate tests that are linear combinations of the
attributes. The tree is denoted by LMT. We use the default values in the software
from http://yake.ecn.purdue.edu/~brodley/software/lmdt.html.
CAL5: This is from the Fraunhofer Society, Institute for Information and Data
Processing, Germany (Müller and Wysotzki, 1994; Müller and Wysotzki, 1997).
We use version 2. CAL5 is designed specifically for numerical-valued attributes.
However, it has a procedure to handle categorical attributes so that mixed attributes
(numerical and categorical) can be included. In this study we optimize
the two parameters which control tree construction. They are the predefined
threshold S and significance level α. We randomly split the training set into
two parts, stratified by the classes: two-thirds are used to construct the tree and
one-third is used as a validation set to choose the optimal parameter configuration.
We employ the C shell program that comes with the CAL5 package to
choose the best parameters by varying α between 0.10 and 0.90 and S over a grid of
values up to 0.95, both in steps of 0.05. The combination of values that minimizes
the error rate on the validation set is chosen. The tree is then constructed on all
the records in the training set using the chosen parameter values. It is denoted
by CAL.
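The parameter search described above amounts to a grid search over (S, α) on a held-out third of the training data. A minimal sketch follows; train_cal5 and error_rate are hypothetical wrappers around the CAL5 executable and an evaluation routine, sklearn's train_test_split is used only as a convenient stratified splitter, and the lower end of the S grid is an assumption since it is not legible in the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split  # convenience only, not part of CAL5

def tune_cal5(X, y, train_cal5, error_rate):
    """Grid search over CAL5's threshold S and significance level alpha.
    `train_cal5(X, y, S, alpha)` and `error_rate(model, X, y)` are
    hypothetical placeholders."""
    # Two-thirds for tree construction, one-third for validation, stratified by class.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=1 / 3, stratify=y, random_state=0)

    alphas = np.arange(0.10, 0.90 + 1e-9, 0.05)
    S_grid = np.arange(0.05, 0.95 + 1e-9, 0.05)   # lower bound assumed
    best = None
    for a in alphas:
        for S in S_grid:
            err = error_rate(train_cal5(X_tr, y_tr, S, a), X_val, y_val)
            if best is None or err < best[0]:
                best = (err, S, a)
    _, S_best, a_best = best
    # The final tree is built on the full training set with the chosen parameters.
    return train_cal5(X, y, S_best, a_best)
```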
T1: This is a one-level decision tree that classifies examples on the basis of only
one split on a single attribute (Holte, 1993). A split on a categorical attribute
with b categories can produce up to b + 1 leaves (one leaf being reserved
for missing attribute values). On the other hand, a split on a continuous
attribute can yield up to J + 2 leaves, where J is the number of classes
(one leaf is again reserved for missing values). The software is obtained from
http://www.csi.uottawa.ca/~holte/Learning/other-sites.html.
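To illustrate the kind of classifier T1 produces, here is a deliberately simplified one-split sketch for a single continuous attribute: it searches every threshold for the binary split that minimizes training error and labels each side with its majority class. It is not Holte's algorithm, since T1 also handles categorical attributes, multiway splits, and a separate leaf for missing values.

```python
import numpy as np
from collections import Counter

def one_split_stump(x, y):
    """Find the single threshold on one continuous attribute that minimizes
    training error; each side is labeled with its majority class."""
    x, y = np.asarray(x), np.asarray(y)
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = None
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # no cut point between equal values
        t = (xs[i] + xs[i - 1]) / 2
        left, right = ys[xs <= t], ys[xs > t]
        left_label = Counter(left).most_common(1)[0][0]
        right_label = Counter(right).most_common(1)[0][0]
        err = np.sum(left != left_label) + np.sum(right != right_label)
        if best is None or err < best[0]:
            best = (err, t, left_label, right_label)
    return best   # (training errors, threshold, label if x <= t, label if x > t)
```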
2.2. Statistical algorithms
LDA: This is linear discriminant analysis, a classical statistical method. It models
the instances within each class as normally distributed with a common covariance
matrix. This yields linear discriminant functions.
QDA: This is quadratic discriminant analysis. It also models class distributions
as normal, but estimates each covariance matrix by the corresponding sample
covariance matrix. As a result, the discriminant functions are quadratic. Details
on LDA and QDA can be found in many statistics textbooks, e.g., Johnson and
Wichern (1992). We use the SAS PROC DISCRIM (SAS Institute, Inc., 1990)
implementation of LDA and QDA with the default settings.
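As a side note, both classifiers are available in modern open-source libraries. The short sketch below uses scikit-learn on a toy sample, purely to make the linear-versus-quadratic distinction concrete; it is not the SAS PROC DISCRIM implementation used in the study, so numerical results would differ.

import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                        # toy numerical attributes
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)   # toy two-class label

lda = LinearDiscriminantAnalysis().fit(X, y)         # one pooled covariance matrix
qda = QuadraticDiscriminantAnalysis().fit(X, y)      # one covariance matrix per class

print("LDA training accuracy:", lda.score(X, y))
print("QDA training accuracy:", qda.score(X, y))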
NN: This is the SAS PROC DISCRIM implementation of the nearest neighbor
method. The pooled covariance matrix is used to compute Mahalanobis distances.
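A numpy sketch of this nearest-neighbor rule (pooled within-class covariance, Mahalanobis distance, numerical attributes only) is given below; it is a stand-in illustration, not the SAS code used in the study.

import numpy as np

def pooled_covariance(X, y):
    """Within-class covariance matrix pooled over all classes."""
    classes = np.unique(y)
    n, d = X.shape
    S = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        S += (len(Xc) - 1) * np.cov(Xc, rowvar=False)
    return S / (n - len(classes))

def nn_mahalanobis(X_train, y_train, X_test):
    """Predict each test point with the class of its Mahalanobis-nearest neighbor."""
    S_inv = np.linalg.inv(pooled_covariance(X_train, y_train))
    preds = []
    for x in X_test:
        diff = X_train - x
        d2 = np.einsum('ij,jk,ik->i', diff, S_inv, diff)   # squared distances
        preds.append(y_train[np.argmin(d2)])
    return np.array(preds)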
LOG: This is logistic discriminant analysis. The results are obtained with a poly-
tomous logistic regression (see, e.g., Agresti (1990)) Fortran 90 routine written
by the first author (http://www.stat.wisc.edu/~limt/logdiscr/).
FDA: This is flexible discriminant analysis (Hastie, Tibshirani and Buja, 1994), a
generalization of linear discriminant analysis that casts the classification problem
as one involving regression. Only the MARS (Friedman, 1991) nonparametric
regression procedure is studied here. We use the S-Plus function fda
from the mda library of the StatLib S Archive. Two models are used: an additive
model (degree=1, denoted by FM1) and a model containing first-order
interactions (degree=2 with penalty=3, denoted by FM2).
PDA: This is a form of penalized LDA (Hastie, Buja and Tibshirani, 1995). It is
designed for situations in which there are many highly correlated attributes.
The classification problem is cast into a penalized regression framework via
optimal scoring. PDA is implemented in S-Plus using the function fda with
method=gen.ridge.
MDA: This stands for mixture discriminant analysis (Hastie and Tibshirani, 1996).
It fits Gaussian mixture density functions to each class to produce a classifier.
MDA is implemented in S-Plus using the library mda.
POL: This is the POLYCLASS algorithm (Kooperberg, Bose and Stone, 1997). It
fits a polytomous logistic regression model using linear splines and their tensor
products. It provides estimates for conditional class probabilities which can
then be used to predict class labels. POL is implemented in S-Plus using the
function poly.fit from the polyclass library of the StatLib S Archive. Model
selection is done with ten-fold cross-validation.
2.3. Neural networks
LVQ: We use the learning vector quantization algorithm in the S-Plus class library
(Venables and Ripley, 1997) at the StatLib S Archive. Details of the
algorithm may be found in Kohonen (1995). Ten percent of the training set are
used to initialize the algorithm, using the function lvqinit. Training is carried
out with the optimized learning rate function olvq1, a fast and robust LVQ
algorithm. Additional fine-tuning in learning is performed with the function
lvq1. The number of iterations is ten times the size of the training set in both
olvq1 and lvq1. We use the default values of 0.3 and 0.03 for α, the learning
rate parameter, in olvq1 and lvq1, respectively.
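The fine-tuning step (lvq1) boils down to the basic LVQ1 update sketched below in plain numpy, not the S-Plus class library routine; the adaptive per-codebook rates of olvq1 and the lvqinit initialization are omitted, and the learning rate 0.03 matches the value quoted above.

import numpy as np

def lvq1(X, y, codebook, cb_labels, alpha=0.03, n_iter=None, seed=0):
    """Basic LVQ1: pull the nearest codebook vector towards a sample when their
    classes agree, push it away otherwise."""
    rng = np.random.default_rng(seed)
    codebook = codebook.copy()
    n_iter = n_iter if n_iter is not None else 10 * len(X)   # ten passes' worth
    for _ in range(n_iter):
        i = rng.integers(len(X))
        x, target = X[i], y[i]
        j = np.argmin(((codebook - x) ** 2).sum(axis=1))      # nearest codebook vector
        sign = 1.0 if cb_labels[j] == target else -1.0
        codebook[j] += sign * alpha * (x - codebook[j])
    return codebook

def lvq_predict(codebook, cb_labels, X):
    cb_labels = np.asarray(cb_labels)
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return cb_labels[np.argmin(d2, axis=1)]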
RBF: This is the radial basis function network implemented in the SAS tnn3.sas
macro (Sarle, 1994) for feedforward neural networks (http://www.sas.com).
The network architecture is specified with the ARCH=RBF argument. In this
study, we construct a network with only one hidden layer. The number of
hidden units is chosen to be 20% of the total number of input and output units
[2.5% (5 hidden units) only for the dna and dna+ datasets and 10% (5 hidden
units) for the tae and tae+ datasets because of memory and storage limitations].
Although the macro can perform model selection to choose the optimal number
of hidden units, we did not utilize this capability because it would have taken too
long for some of the datasets (see Table 6 below). Therefore the results reported
here for this algorithm should be regarded as lower bounds on its performance.
The hidden layer is fully connected to the input and output layers but there is
no direct connection between the input and output layers. At the output layer,
each class is represented by one unit taking the value of 1 for that particular
category and 0 otherwise, except for the last one which is the reference category.
To avoid local optima, ten preliminary trainings were conducted and the best
estimates used for subsequent training. More details on the radial basis function
network can be found in Bishop (1995) and Ripley (1996).
3. The datasets
We briefly describe the sixteen datasets used in the study as well as any modifications
that are made for our experiment. Fourteen of them are from real domains
while two are artificially created. Thirteen are from UCI.
Wisconsin breast cancer (bcw). This is one of the breast cancer databases at
UCI, collected at the University of Wisconsin by W. H. Wolberg. The problem is
to predict whether a tissue sample taken from a patient's breast is malignant or
benign. There are two classes, nine numerical attributes, and 699 observations.
Sixteen instances contain a single missing attribute value and are removed from
the analysis. Our results are therefore based on 683 records. Error rates are
estimated using ten-fold cross-validation. A decision tree analysis of a subset
of the data using the FACT algorithm is reported in Wolberg, Tanner, Loh and
(1987), Wolberg, Tanner and Loh (1988), and Wolberg, Tanner
and Loh (1989). The dataset has also been analyzed with linear programming
methods (Mangasarian and Wolberg, 1990).
Contraceptive method choice (cmc). The data are taken from the 1987 National
Indonesia Contraceptive Prevalence Survey. The samples are married
women who were either not pregnant or did not know if they were pregnant
at the time of the interview. The problem is to predict the current contraceptive
method choice (no use, long-term methods, or short-term methods) of a
woman based on her demographic and socio-economic characteristics (Lerman,
Molyneaux, Pangemanan and Iswarati, 1991). There are three classes, two numerical
attributes, seven categorical attributes, and 1473 records. The error
rates are estimated using ten-fold cross-validation. The data are obtained from
http://www.stat.wisc.edu/p/stat/ftp/pub/loh/treeprogs/datasets/.
StatLog DNA (dna). This UCI dataset in molecular biology was used in the StatLog
Project. Splice junctions are points in a DNA sequence at which "su-
perfluous" DNA is removed during the process of protein creation in higher
organisms. The problem is to recognize, given a sequence of DNA, the boundaries
between exons (the parts of the DNA sequence retained after splicing)
and introns (the parts of the DNA sequence that are spliced out). There are
three classes and sixty categorical attributes each having four categories. The
sixty categorical attributes represent a window of sixty nucleotides, each having
one of four categories. The middle point in the window is classified as one of
exon/intron boundaries, intron/exon boundaries, or neither of these. The 3186
examples in the database were divided randomly into a training set of size 2000
and a test set of size 1186. The error rates are estimated from the test set.
StatLog heart disease (hea). This UCI dataset is from the Cleveland Clinic
Foundation, courtesy of R. Detrano. The problem concerns the prediction of
the presence or absence of heart disease given the results of various medical tests
carried out on a patient. There are two classes, seven numerical attributes, six
categorical attributes, and 270 records. The StatLog Project employed unequal
misclassification costs. We use equal costs here because some algorithms do
not allow unequal costs. The error rates are estimated using ten-fold cross-validation.
Boston housing (bos). This UCI dataset gives housing values in Boston suburbs
(Harrison and Rubinfeld, 1978). There are three classes, twelve numerical
attributes, one binary attribute, and 506 records. Following Loh and Vanichse-
takul (1988), the classes are created from the attribute median value of owner-occupied
homes by discretizing it into three intervals. The error rates are
estimated using ten-fold cross-validation.
LED display (led). This artificial domain is described in Breiman et al. (1984).
It contains seven Boolean attributes, representing seven light-emitting diodes,
and ten classes, the set of decimal digits. An attribute value is either zero or
one, according to whether the corresponding light is off or on for the digit. Each
attribute value has a ten percent probability of having its value inverted. The
class attribute is an integer between zero and nine, inclusive. A C program
from UCI is used to generate 2000 records for the training set and 4000 records
for the test set. The error rates are estimated from the test set.
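The generator is easy to restate in code. The Python sketch below mimics the process described above; the ordering of the seven segment attributes and the random stream are illustrative assumptions, so it will not reproduce the UCI C program's records exactly.

import random

# Seven-segment patterns (segments a-g, in that assumed order) for the digits 0-9.
DIGITS = ["1111110", "0110000", "1101101", "1111001", "0110011",
          "1011011", "1011111", "1110000", "1111111", "1111011"]

def led_record(rng, noise=0.10):
    """Seven 0-1 attributes, each inverted with probability `noise`, plus the class."""
    digit = rng.randrange(10)
    attrs = [int(b) ^ (1 if rng.random() < noise else 0) for b in DIGITS[digit]]
    return attrs + [digit]

rng = random.Random(0)
train = [led_record(rng) for _ in range(2000)]
test = [led_record(rng) for _ in range(4000)]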
BUPA liver disorders (bld). This UCI dataset was contributed by R. S. Forsyth.
The problem is to predict whether or not a male patient has a liver disorder
based on blood tests and alcohol consumption. There are two classes, six numerical
attributes, and 345 records. The error rates are estimated using ten-fold
cross-validation.
PIMA Indian diabetes (pid). This UCI dataset was contributed by V. Sigillito.
The patients in the dataset are females at least twenty-one years old of Pima
Indian heritage living near Phoenix, Arizona, USA. The problem is to predict
whether a patient would test positive for diabetes given a number of physiological
measurements and medical test results. There are two classes, seven numerical
attributes, and 532 records. The original dataset consists of 768 records with
eight numerical attributes. However, many of the attributes, notably serum in-
sulin, contain zero values which are physically impossible. We remove serum
insulin and records that have impossible values in other attributes. The error
rates are estimated using ten-fold cross validation.
StatLog satellite image (sat). This UCI dataset gives the multi-spectral values
of pixels within 3 \Theta 3 neighborhoods in a satellite image, and the classification
associated with the central pixel in each neighborhood. The aim is to predict
the classification given the multi-spectral values. There are six classes and
thirty-six numerical attributes. The training set consists of 4435 records while
the test set consists of 2000 records. The error rates are estimated from the test
set.
Image segmentation (seg). This UCI dataset was used in the StatLog Project.
The samples are from a database of seven outdoor images. The images are
hand-segmented to create a classification for every pixel as one of brickface,
sky, foliage, cement, window, path, or grass. There are seven classes, nineteen
numerical attributes and 2310 records in the dataset. The error rates are
estimated using ten-fold cross-validation.
The algorithm T1 could not handle this dataset without modification, because
the program requires a large amount of memory. Therefore for T1 (but not for
the other algorithms) we discretize each attribute except attributes 3, 4, and 5
into one hundred categories.
Attitude towards smoking restrictions (smo). This survey dataset (Bull, 1994)
is obtained from http://lib.stat.cmu.edu/datasets/csb/. The problem is
to predict attitude toward restrictions on smoking in the workplace (prohibited,
restricted, or unrestricted) based on bylaw-related, smoking-related, and sociodemographic
covariates. There are three classes, three numerical attributes,
and five categorical attributes. We divide the original dataset into a training
set of size 1855 and a test set of size 1000. The error rates are estimated from
the test set.
Thyroid disease (thy). This is the UCI ann-train.data contributed by R. Werner.
The problem is to determine whether or not a patient is hyperthyroid. There
are three classes (normal, hyperfunction, and subnormal functioning), six numerical
attributes, and fifteen binary attributes. The training set consists of
3772 records and the test set has 3428 records. The error rates are estimated
from the test set.
StatLog vehicle silhouette (veh). This UCI dataset originated from the Turing
Institute, Glasgow, Scotland. The problem is to classify a given silhouette as
one of four types of vehicle, using a set of features extracted from the silhouette.
Each vehicle is viewed from many angles. The four model vehicles are double
decker bus, Chevrolet van, Saab 9000, and Opel Manta 400. There are four
classes, eighteen numerical attributes, and 846 records. The error rates are
estimated using ten-fold cross-validation.
Congressional voting records (vot). This UCI dataset gives the votes of each
member of the U. S. House of Representatives of the 98th Congress on sixteen
issues. The problem is to classify a Congressman as a Democrat or a
Republican based on the sixteen votes. There are two classes, sixteen categorical
attributes with three categories each ("yea", "nay", or neither), and 435 records.
Error rates are estimated by ten-fold cross-validation.
Waveform (wav). This is an artificial three-class problem based on three wave-
forms. Each class consists of a random convex combination of two waveforms
sampled at the integers with noise added. A description for generating the data
is given in Breiman et al. (1984) and a C program is available from UCI. There
are twenty-one numerical attributes, and 600 records in the training set. Error
rates are estimated from an independent test set of 3000 records.
Teaching assistant evaluation (tae). The data consist of evaluations of teaching performance
over three regular semesters and two summer semesters of 151 teaching assistant
(TA) assignments at the Statistics Department of the University of
Wisconsin-Madison. The scores are grouped into three roughly equal-sized
categories ("low", "medium", and "high") to form the class attribute. The predictor
attributes are (i) whether or not the TA is a native English speaker (bi-
nary), (ii) course instructor (25 categories), (iii) course (26 categories), (iv) summer
or regular semester (binary), and (v) class size (numerical). This dataset
is first reported in Loh and Shih (1997). It differs from the other datasets
in that there are two categorical attributes with large numbers of categories.
As a result, decision tree algorithms such as CART that employ exhaustive
search usually take much longer to train than other algorithms. (CART has
to evaluate 2^(c-1) - 1 splits for each categorical attribute with c values.) Error
rates are estimated using ten-fold cross-validation. The data are obtained from
http://www.stat.wisc.edu/p/stat/ftp/pub/loh/treeprogs/datasets/.
A summary of the attribute features of the datasets is given in Table 1.
Table 1. Characteristics of the datasets. The last three columns give the number and type of added noise attributes for
each dataset. The notation "N(0,1)" denotes the standard normal distribution, "UI(m,n)" denotes a uniform distribution
over the integers m through n inclusive, and "U(0,1)" denotes a uniform distribution over the unit interval.
(Columns: dataset, training sample size, number of classes, numbers of numerical and categorical original attributes
and their total, and the numbers and types of numerical and categorical noise attributes and their total.)
4. Experimental setup
Some algorithms are not designed for categorical attributes. In these cases, each
categorical attribute is converted into a vector of 0-1 attributes. That is, if a
categorical attribute X takes k values {c_1, ..., c_k}, it is replaced by a
(k-1)-dimensional vector (d_1, ..., d_{k-1}) such that d_i = 1 if X = c_i and d_i = 0
otherwise, for i = 1, ..., k-1. If X = c_k, the vector consists of all zeros. The
affected algorithms are all the statistical and neural network algorithms as well as
the tree algorithms FTL, OCU, OCL, OCM, and LMT.
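A minimal Python sketch of this 0-1 coding, taking the last category as the all-zeros reference level (one common convention), is:

def dummy_code(values, categories=None):
    """Replace a k-valued categorical attribute by k-1 indicator attributes;
    the last category in `categories` is the reference level (all zeros)."""
    if categories is None:
        categories = sorted(set(values))
    columns = categories[:-1]
    return [[1 if v == c else 0 for c in columns] for v in values]

# A three-valued attribute becomes two 0-1 attributes:
print(dummy_code(["red", "blue", "red", "green"],
                 categories=["blue", "green", "red"]))
# -> [[0, 0], [1, 0], [0, 0], [0, 1]]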
In order to increase the number of datasets and to study the effect of noise attributes
on each algorithm, we created sixteen new datasets by adding independent
noise attributes. The numbers and types of noise attributes added are given in the
right panel of Table 1. The name of each new dataset is the same as the original
dataset except for the addition of a '+' symbol. For example, the bcw dataset with
noise added is denoted by bcw+.
For each dataset, we use one of two different ways to estimate the error rate of
an algorithm. For large datasets (size much larger than 1000 and test set of size at
least 1000), we use a test set to estimate the error rate. The classifier is constructed
using the records in the training set and then it is tested on the test set. Twelve of
the thirty-two datasets are analyzed this way.
For the remaining twenty datasets, we use the following ten-fold cross-validation
procedure to estimate the error rate (a code sketch is given after the list):
1. The dataset is randomly divided into ten disjoint subsets, with each containing
approximately the same number of records. Sampling is stratified by the class
labels to ensure that the subset class proportions are roughly the same as those
in the whole dataset.
2. For each subset, a classifier is constructed using the records not in it. The classifier
is then tested on the withheld subset to obtain a cross-validation estimate
of its error rate.
3. The ten cross-validation estimates are averaged to provide an estimate for the
classifier constructed from all the data.
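A compact Python sketch of this procedure follows; `train` and `error_rate` are placeholders for whatever learning algorithm is being evaluated, and the folds are balanced by distributing each class's shuffled indices in round-robin fashion.

import random
from collections import defaultdict

def stratified_folds(labels, k=10, seed=0):
    """Partition indices into k folds with roughly equal class proportions."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    for idx in by_class.values():
        rng.shuffle(idx)
        for j, i in enumerate(idx):
            folds[j % k].append(i)
    return folds

def cv_error(data, labels, train, error_rate, k=10):
    """Average of the k held-out error-rate estimates."""
    errors = []
    for fold in stratified_folds(labels, k):
        held = set(fold)
        tr = [i for i in range(len(labels)) if i not in held]
        model = train([data[i] for i in tr], [labels[i] for i in tr])
        errors.append(error_rate(model,
                                 [data[i] for i in fold],
                                 [labels[i] for i in fold]))
    return sum(errors) / k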
Because the algorithms are implemented in different programming languages and
some languages are not available on all platforms, three types of UNIX workstations
are used in our study. The workstation type and implementation language for
each algorithm are given in Table 2. The relative performance of the workstations
according to SPEC marks is given in Table 3. The floating point SPEC marks
show that a task that takes one second on a DEC 3000 would take about 1.4 and
0.8 seconds on a SPARCstation 5 (SS5) and SPARCstation 20 (SS20), respectively.
Therefore, to enable comparisons, all training times are reported here in terms of
DEC 3000-equivalent seconds: the training times recorded on a SS5 and a SS20
are divided by 1.4 and 0.8, respectively.
5. Results
The error rates and training times for the algorithms are given in a separate table
for each dataset in the Appendix. The tables also report the error rates of the
'naive' plurality rule, which ignores the information in the covariates and classifies
every record to the majority class in the training sample.
5.1. Exploratory analysis of error rates
Before we present a formal statistical analysis of the results, it is helpful to study
the summary in Table 4. The mean error rate for each algorithm over the datasets
is given in the second row. The minimum and maximum error rates and that of
Table 2. Hardware and software platform for each algorithm. The workstations are DEC 3000 Alpha
Model 300 (DEC), Sun SPARCstation 20 Model 61 (SS20), and Sun SPARCstation 5 (SS5).
Tree & Rules:
  QU0  QUEST, univariate 0-SE         DEC/F90
  QU1  QUEST, univariate 1-SE         DEC/F90
  QL0  QUEST, linear 0-SE             DEC/F90
  QL1  QUEST, linear 1-SE             DEC/F90
  FTU  FACT, univariate               DEC/F77
  FTL  FACT, linear                   DEC/F77
  C4T  C4.5 trees                     DEC/C
  C4R  C4.5 rules                     DEC/C
  IB   IND bayes style                SS5/C
  IBO  IND bayes opt style            SS5/C
  IM   IND mml style                  SS5/C
  IMO  IND mml opt style              SS5/C
  IC0  IND cart, 0-SE                 SS5/C
  IC1  IND cart, 1-SE                 SS5/C
  OCU  OC1, univariate                SS5/C
  OCL  OC1, linear                    SS5/C
  OCM  OC1, mixed                     SS5/C
  ST0  Splus tree, 0-SE               DEC/S
  ST1  Splus tree, 1-SE               DEC/S
  LMT  LMDT, linear                   DEC/C
  CAL  CAL5                           SS5/C++
  T1   single split                   DEC/C
Statistical:
  LDA  Linear discriminant anal.      DEC/SAS
  QDA  Quadratic discriminant anal.   DEC/SAS
  NN   Nearest-neighbor               DEC/SAS
  LOG  Linear logistic regression     DEC/F90
  FM1  FDA, degree 1                  SS20/S
  FM2  FDA, degree 2                  SS20/S
  PDA  Penalized LDA                  SS20/S
  MDA  Mixture discriminant anal.     SS20/S
  POL  POLYCLASS                      SS20/S
Neural Network:
  LVQ  Learning vector quantization   SS20/S
  RBF  Radial basis function network  DEC/SAS
Table 3. SPEC benchmark summary
Workstation                                 SPECfp92  SPECint92  Source
DEC   DEC 3000 Model 300 (150MHz)           91.5      66.2       SPEC Newsletter Vol. 5, Issue 2, June 1993
SS20  Sun SPARCstation 20 Model 61 (60MHz)  102.8     88.9       SPEC Newsletter Vol. 6, Issue 2, June 1994
SS5   Sun SPARCstation 5 (70MHz)            47.3      57.0       SPEC Newsletter Vol. 6, Issue 2, June 1994
the plurality rule are given for each dataset in the last three columns. Let p denote
the smallest observed error rate in each row (i.e., dataset). If an algorithm has an
error rate within one standard error of p, we consider it to be close to the best and
indicate it by a √-mark in the table. The standard error is estimated as follows. If p is
from an independent test set, let n denote the size of the test set. Otherwise, if p is
a cross-validation estimate, let n denote the size of the training set. The standard
error of p is estimated by the formula √(p(1-p)/n). The algorithm with the largest
error rate within a row is indicated by an X. The total numbers of √-marks and X-marks
for each algorithm are given in the third and fourth rows of the table.
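In code, the marking rule for a single dataset (one row of Table 4) can be sketched as follows; `n` is the test-set size or, for cross-validated rows, the training-set size, and ties and the interaction between the two marks are handled naively here.

from math import sqrt

def mark_row(error_rates, n):
    """Map each algorithm to 'check' (within one SE of the best), 'X' (worst), or ''."""
    p = min(error_rates.values())        # smallest observed error rate in the row
    se = sqrt(p * (1.0 - p) / n)         # estimated standard error of p
    worst = max(error_rates, key=error_rates.get)
    marks = {}
    for algo, err in error_rates.items():
        if err <= p + se:
            marks[algo] = "check"
        elif algo == worst:
            marks[algo] = "X"
        else:
            marks[algo] = ""
    return marks

# Illustrative numbers only:
# mark_row({"POL": 0.195, "LOG": 0.204, "T1": 0.320}, n=1000)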
The following conclusions may be drawn from the table:
Table 4. Minimum, maximum, and 'naive' plurality rule error rates for each dataset. A '√'-mark indicates
that the algorithm has an error rate within one standard error of the minimum for the dataset. An 'X'-mark
indicates that the algorithm has the worst error rate for the dataset. The mean error rate for each algorithm
is given in the second row.
(Column groups: decision trees and rules; statistical algorithms; neural nets; and the minimum, maximum,
and naive error rates for each dataset.)
1. Algorithm POL has the lowest mean error rate. An ordering of the other algorithms
in terms of mean error rate is given in the upper half of Table 5.
2. The algorithms can also be ranked in terms of total number of √- and X-marks.
By this criterion, the most accurate algorithm is again POL, which has fifteen √-marks and no X-marks.
Table 5. Ordering of algorithms by mean error rate and mean rank of error rate
Mean error rate: POL .195, LOG .204, MDA .207, QL0 .207, LDA .208, QL1 .211, PDA .213, IC0 .215,
FM2 .218, IBO .219, IMO .219; the next eleven in order are C4R, IM, LMT, C4T, QU0, QU1, OCU, IC1,
IB, OCM, ST0.
Mean rank of error rate: POL 8.3, FM1 12.2, LOG 12.2, FM2 12.2, QL0 12.4, LDA 13.7, QU0 13.9,
C4R 14.0, IMO 14.0, MDA 14.3, PDA 14.5, C4T 14.5, QL1 14.6, IBO 14.7, IM 14.9, IC0 15.0, FTL 15.4,
QU1 16.6, OCU 16.6, IC1 16.9, ST0 17.0, ST1 17.7; the remaining algorithms in order are LMT, OCM,
IB, RBF, FTU, QDA, LVQ, OCL, CAL, NN, T1.
Eleven algorithms have one or more X-marks. Ranked
in increasing order of number of X-marks (in parentheses), they are:
FTL(1), OCM(1), ST1(1), FM2(1), MDA(1), FM1(2),
OCL(3), QDA(3), NN(4), LVQ(4), T1(11).
Excluding these, the remaining algorithms rank in order of decreasing number
of √-marks (in parentheses) as:
POL(15), LOG(13), QL0(10), LDA(10), PDA(10), QL1(9), OCU(9), (1)
QU0(8), QU1(8), C4R(8), IBO(8), RBF(8), C4T(7), IMO(6),
IM(5), IC1(5), ST0(5), FTU(4), IC0(4), CAL(4), IB(3), LMT(1).
The top four algorithms in (1) also rank among the top five in the upper half
of Table 5.
3. The last three columns of the table show that a few algorithms are sometimes
less accurate than the plurality rule. They are NN (at cmc, cmc+, smo+),
bld+), QDA (smo, thy, thy+), FTL (tae), and ST1 (tae+).
4. The easiest datasets to classify are bcw, bcw+, vot, and vot+; the error rates all
lie between 0.03 and 0.09.
5. The most difficult to classify are cmc, cmc+, and tae+, with minimum error
rates greater than 0.4.
6. Two other difficult datasets are smo and smo+. In the case of smo, only T1 has a
(marginally) lower error rate than that of the plurality rule. No algorithm has
a lower error rate than the plurality rule for smo+.
7. The datasets with the largest range of error rates are thy and thy+, where the
rates range from 0.005 to 0.890. However, the maximum of 0.890 is due to QDA.
If QDA is ignored, the maximum error rate drops to 0.096.
8. There are six datasets with only one √-mark each. They are bld+ (POL), sat
(LVQ), sat+ (FM2), seg+ (IBO), veh and veh+ (QDA both times).
9. Overall, the addition of noise attributes does not appear to increase significantly
the error rates of the algorithms.
5.2. Statistical significance of error rates
5.2.1. Analysis of variance A statistical procedure called mixed effects analysis
of variance can be used to test the simultaneous statistical significance of differences
between mean error rates of the algorithms, while controlling for differences between
datasets (Neter, Wasserman and Kutner, 1990, p. 800). Although it makes the
assumption that the effects of the datasets act like a random sample from a normal
distribution, it is quite robust against violation of the assumption. For our data,
the procedure gives a significance probability less than 10^-4. Hence the hypothesis
that the mean error rates are equal is strongly rejected.
Simultaneous confidence intervals for differences between mean error rates can be
obtained using the Tukey method (Miller, 1981, p. 71). According to this procedure,
a difference between the mean error rates of two algorithms is statistically significant
at the 10% level if they differ by more than 0.058.
To visualize this result, Figure 1(a) plots the mean error rate of each algorithm
versus its median training time in seconds. The solid vertical line in the plot is 0.058
units to the right of the mean error rate for POL. Therefore any algorithm
lying to the left of the line has a mean error rate that is not statistically significantly
different from that of POL.
The algorithms are seen to form four clusters with respect to training time. These
clusters are roughly delineated by the three horizontal dotted lines which correspond
to training times of one minute, ten minutes, and one hour. Figure 1(b) shows a
magnified plot of the eighteen algorithms with median training times less than ten
minutes and mean error rate not statistically significantly different from POL.
5.2.2. Analysis of ranks To avoid the normality assumption, we can instead
analyze the ranks of the algorithms within datasets. That is, for each dataset, the
algorithm with the lowest error rate is assigned rank one, the second lowest rank
two, etc., with average ranks assigned in the case of ties. The lower half of Table 5
gives an ordering of the algorithms in terms of mean rank of error rates. Again POL
is first and T1 is last. Note, however, that the mean rank of POL is 8.3. This shows
that it is far from being uniformly most accurate across datasets.
Comparing the two methods of ordering in Table 5, it is seen that POL, LOG,
QL0, and LDA are the only algorithms with consistently good performance. Three
algorithms that perform well by one criterion but not the other are MDA, FM1, and
FM2. In the case of MDA, its low mean error rate is due to its excellent performance
in four datasets (veh, veh+, wav, and wav+) where many other algorithms do poorly.
These domains concern shape identification and the datasets contain only numerical
Figure 1. Plots of median training time versus mean error rate. The vertical axis is in log-scale.
The solid vertical line in plot (a) divides the algorithms into two groups: the mean error rates of
the algorithms in the left group do not differ significantly (at the 10% simultaneous significance
level) from that of POL, which has the minimum mean error rate. Plot (b) shows the algorithms
that are not statistically significantly different from POL in terms of mean error rate and that have
median training time less than ten minutes.
attributes. MDA is generally unspectacular in the rest of the datasets and this is the
reason for its tenth place ranking in terms of mean rank.
The situation for FM1 and FM2 is quite different. As its low mean rank indicates,
FM1 is usually a good performer. However, it fails miserably in the seg and seg+
datasets, reporting error rates of more than fifty percent when most of the other
algorithms have error rates less than ten percent. Thus FM1 seems to be less robust
than the other algorithms. FM2 also appears to lack robustness, although to a
lesser extent. Its worst performance is in the bos+ dataset, where it has an error
rate of forty-two percent, compared to less than thirty-five percent for the other
algorithms. The number of X-marks against an algorithm in Table 4 is a good
predictor of erratic if not poor performance. MDA, FM1, and FM2 all have at least
one X-mark.
The Friedman (1937) test is a standard procedure for testing statistical significance
in differences of mean ranks. For our experiment, it gives a significance
probability less than 10^-4. Therefore the null hypothesis that the algorithms are
equally accurate on average is again rejected. Further, a difference in mean ranks
greater than 8.7 is statistically significant at the 10% level (Hollander and Wolfe,
1999, p. 296). Thus POL is not statistically significantly different from the twenty
other algorithms that have mean rank less than or equal to 17.0. Figure 2(a) shows
a plot of the median training time versus the mean ranks of the algorithms. Those
algorithms that lie to the left of the vertical line are not statistically significantly
different from POL. A magnified plot of the subset of algorithms that are not significantly
different from POL and that have median training time less than ten minutes
is given in Figure 2(b).
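For reference, the Friedman test on such an algorithms-by-datasets table of error rates takes only a few lines with SciPy (a modern stand-in, not the software used for the study); the within-dataset ranks themselves can be recovered with rankdata.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# errors[i, j] = error rate of algorithm i on dataset j (toy random numbers here).
rng = np.random.default_rng(0)
errors = rng.uniform(0.1, 0.5, size=(33, 32))

# One argument per algorithm: its error rates across the datasets (the blocks).
stat, pvalue = friedmanchisquare(*errors)
print("Friedman chi-square = %.2f, p = %.4f" % (stat, pvalue))

# Within-dataset ranks (average ranks for ties) and each algorithm's mean rank.
ranks = np.apply_along_axis(rankdata, 0, errors)
mean_ranks = ranks.mean(axis=1)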
The algorithms that differ statistically significantly from POL in terms of mean
error rate form a subset of those that differ from POL in terms of mean ranks. Thus
the rank test appears to be more powerful than the analysis of variance test for this
experiment. The fifteen algorithms in Figure 2(b) may be recommended for use in
applications where good accuracy and short training time are desired.
5.3. Training time
Table
6 gives the median DEC 3000-equivalent training time for each algorithm
and the relative training time within datasets. Owing to the large range of training
times, only the order relative to the fastest algorithm for each dataset is reported.
The fastest algorithm is indicated by a '0'. An algorithm that is between 10^(x-1) and
10^x times as slow is indicated by the value of x. For example, in the case of the
dna+ dataset, the fastest algorithms are C4T and T1, each requiring two seconds.
The slowest algorithm is FM2, which takes more than three million seconds (almost
forty days) and hence is between 10^6 and 10^7 times as slow. The last two columns
of the table give the fastest and slowest times for each dataset.
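The relative-time entry is just the order of magnitude of the slowdown ratio, e.g.:

from math import ceil, log10

def relative_time_code(t, t_fastest):
    """0 for the fastest algorithm; otherwise the x with
    10^(x-1) < t / t_fastest <= 10^x."""
    if t <= t_fastest:
        return 0
    return ceil(log10(t / t_fastest))

# dna+: FM2 needs over three million seconds against two seconds for the fastest,
# so its entry is 7 (between 10^6 and 10^7 times as slow).
print(relative_time_code(3.0e6, 2.0))   # -> 7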
Table
7 gives an ordering of the algorithms from fastest to slowest according to
median training time. Overall, the fastest algorithm is C4T, followed closely by FTU,
FTL, and LDA. There are two reasons for the superior speed of C4T compared to the
other decision tree algorithms. First, it splits each categorical attribute into as
Figure 2. Plots of median training time versus mean rank of error rates. The vertical axis is in
log-scale. The solid vertical line in plot (a) divides the algorithms into two groups: the mean ranks
of the algorithms in the left group do not differ significantly (at the 10% simultaneous significance
level) from that of POL. Plot (b) shows the algorithms that are not statistically significantly
different from POL in terms of mean rank and that have median training time less than ten
minutes.
Table 6. DEC 3000-equivalent training times and relative times of the algorithms. The second and third rows
give the median training time and rank for each algorithm. An entry of 'x' in each of the subsequent rows
indicates that an algorithm is between 10^(x-1) and 10^x times slower than the fastest algorithm for the dataset. The
fastest algorithm is denoted by an entry of '0'. The minimum and maximum training times are given in the
last two columns. 's', 'm', 'h', 'd' denote seconds, minutes, hours, and days, respectively.
Median CPU time: QU0 3.2m, QU1 3.2m, QL0 5.9m, QL1 5.9m, FTU 7s, FTL 8s, C4T 5s, C4R 20s, IB 34s,
IBO 27.5m, IM 34s, IMO 33.9m, IC0 52s, IC1 47s, OCU 46s, OCL 14.9m, OCM 13.7m, ST0 15.1m, ST1 14.4m,
LMT 5.7m, CAL 1.3h, T1 36s; LDA 10s, QDA 15s, NN 20s, LOG 4m, FM1 15.6m, FM2 3.8h, PDA 56s, MDA 3m,
POL 3.2h; LVQ 1.1m, RBF 11.3h.
Table 7. Ordering of algorithms by median training time
C4T 5s, FTU 7s, FTL 8s, LDA 10s, QDA 15s, NN 20s, C4R 20s, IB 34s, IM 34s, T1 36s, OCU 46s,
IC1 47s, IC0 52s, PDA 56s, LVQ 1.1m, MDA 3m, QU0 3.2m, QU1 3.2m, LOG 4m, LMT 5.7m, QL0 5.9m, QL1 5.9m,
OCM 13.7m, ST1 14.4m, OCL 14.9m, ST0 15.1m, FM1 15.6m, IBO 27.5m, IMO 33.9m, CAL 1.3h, POL 3.2h,
FM2 3.8h, RBF 11.3h.
many subnodes as the number of categories. Therefore it wastes no time in forming
subsets of categories. Second, its pruning method does not require cross-validation,
which can increase training time several fold.
The classical statistical algorithms QDA and NN are also quite fast. As expected,
decision tree algorithms that employ univariate splits are faster than those that use
linear combination splits. The slowest algorithms are POL, FM2, and RBF; two are
spline-based and one is a neural network.
Although IC0, IC1, ST0 and ST1 all claim to implement the CART algorithm, the
IND versions are faster than the S-Plus versions. One reason is that IC0 and IC1
are written in C whereas ST0 and ST1 are written in the S language. Another reason
is that the IND versions use heuristics (Buntine, personal communication) instead
of greedy search when the number of categories in a categorical attribute is large.
This is most apparent in the tae+ dataset where there are categorical attributes
with up to twenty-six categories. In this case IC0 and IC1 take around forty seconds
versus two and a half hours for ST0 and ST1. The results in Table 4 indicate that
IND's classification accuracy is not adversely affected by such heuristics; see Aronis
and Provost (1997) for another possible heuristic.
Since T1 is a one-level tree, it may appear surprising that it is not faster than
algorithms such as C4T that produce multi-level trees. The reason is that T1 splits
each continuous attribute into J + 1 intervals, where J is the number of classes.
On the other hand, C4T always splits a continuous attribute into two intervals only.
Therefore when J > 2, T1 has to spend a lot more time to search for the intervals.
5.4. Size of trees
Table
8 gives the number of leaves for each tree algorithm and dataset before noise
attributes are added. In the case that an error rate is obtained by ten-fold cross-
validation, the entry is the mean number of leaves over the ten cross-validation
trees.
Table
9 shows how much the number of leaves changes after addition of noise
attributes. The mean and median of the number of leaves for each classifier are
given in the last columns of the two tables. IBO and IMO clearly yield the largest
trees by far. Apart from T1, which is necessarily short by design, the algorithm with
the shortest trees on average is QL1, followed closely by FTL and OCL. A ranking
of the algorithms with univariate splits (in increasing median number of leaves) is:
T1, IC1, ST1, QU1, FTU, IC0, ST0, OCU, QU0, and C4T. Algorithm C4T tends
to produce trees with many more leaves than the other algorithms. One reason
may be due to under-pruning (although its error rates are quite good). Another is
that, unlike the binary-tree algorithms, C4T splits each categorical attribute into as
many nodes as the number of categories.
Addition of noise attributes typically decreases the size of the trees, except for C4T
and CAL which tend to grow larger trees, and IMO which seems to fluctuate rather
wildly. These results complement those of Oates and Jensen (1997) who looked
at the effect of sample size on the number of leaves of decision tree algorithms
and found a significant relationship between tree size and training sample size for
C4T. They observed that tree algorithms which employ cost-complexity pruning are
better able to control tree growth.
6. Scalability of algorithms
Although differences in mean error rates between POL and many other algorithms
are not statistically significant, it is clear that if error rate is the sole criterion, POL
would be the method of choice. Unfortunately, POL is one of the most compute-intensive
algorithms. To see how training times increase with sample size, a small
scalability study was carried out with the algorithms QU0, QL0, FTL, C4T, C4R, IC0,
LOG, FM1, and POL.
Training times are measured for these algorithms on training sets of increasing size N.
Four datasets are used to generate the samples: sat, smo+,
tae+, and a new, very large UCI dataset called adult which has two classes and six
continuous and seven categorical attributes. Since the first three datasets are not
large enough for the experiment, bootstrap re-sampling is employed to generate the
training sets. That is, N samples are randomly drawn with replacement from each
dataset. To avoid getting many replicate records, the value of the class attribute
for each sampled case is randomly changed to another value with probability 0.1.
(The new value is selected from the pool of alternatives with equal probability.)
Bootstrap sampling is not carried out for the adult dataset because it has more
than 32,000 records. Instead, the nested training sets are obtained by random
sampling without replacement.
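The resampling scheme for the three smaller datasets can be sketched in a few lines of Python; the 0.1 flip probability is the one quoted above, and the random stream is of course arbitrary.

import random

def bootstrap_with_label_noise(records, labels, N, classes, flip=0.1, seed=0):
    """Draw N cases with replacement; with probability `flip`, change a sampled
    case's class to a different, uniformly chosen value to limit exact duplicates."""
    rng = random.Random(seed)
    new_records, new_labels = [], []
    for _ in range(N):
        i = rng.randrange(len(records))
        y = labels[i]
        if rng.random() < flip:
            y = rng.choice([c for c in classes if c != y])
        new_records.append(records[i])
        new_labels.append(y)
    return new_records, new_labels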
The times required to train the algorithms are plotted (in log-log scale) in Figure
3. With the exception of POL, FM1 and LOG, the logarithms of the training
times seem to increase linearly with log(N ). The non-monotonic behavior of POL
and FM1 is puzzling and might be due to randomness in their use of cross-validation
for model selection. The erratic behavior of LOG in the adult dataset is caused by
convergence problems during model fitting.
Many of the lines in Figure 3 are roughly parallel. This suggests that the relative
computational speed of the algorithms is fairly constant over the range of sample
sizes considered. QL0 and C4R are two exceptions. Cohen (1995) had observed that
C4R does not scale well.
7. Conclusions
Our results show that the mean error rates of many algorithms are sufficiently
similar that their differences are statistically insignificant. The differences are also
probably insignificant in practical terms. For example, the mean error rates of
the top ranked algorithms POL, LOG, and QL0 differ by less than 0.012. If such a
small difference is not important in real applications, the user may wish to select
an algorithm based on other criteria such as training time or interpretability of the
classifier.
Unlike error rates, there are huge differences between the training times of the
algorithms. POL, the algorithm with the lowest mean error rate, takes about fifty
times as long to train as the next most accurate algorithm. The ratio of times is
roughly equivalent to hours versus minutes, and Figure 3 shows that it is maintained
over a wide range of sample sizes. For large applications where time is a factor, it
may be advantageous to use one of the quicker algorithms.
It is interesting that the old statistical algorithm LDA has a mean error rate
close to the best. This is surprising because (i) it is not designed for binary-valued
attributes (all categorical attributes are transformed to 0-1 vectors prior to
application of LDA), and (ii) it is not expected to be effective when class densities
Figure 3. Plots of training time versus sample size in log-log scale for selected algorithms.
are multi-modal. Because it is fast, easy to implement, and readily available in
statistical packages, it provides a convenient benchmark for comparison against
future algorithms.
The low error rates of LOG and LDA probably account for much of the performance
of the better algorithms. For example, POL is basically a modern version of LOG. It
enhances the flexibility of LOG by employing spline-based functions and automatic
model selection. Although this strategy is computationally costly, it does produce
a slight reduction in the mean error rate, enough to bring it to the top of the pack.
The good performance of QL0 may be similarly attributable to LDA. The QUEST
linear-split algorithm is designed to overcome the difficulties encountered by LDA
in multi-modal situations. It does this by applying a modified form of LDA to
partitions of the data, where each partition is represented by a leaf of the decision
tree. This strategy alone, however, is not enough, as the higher mean error rate
of FTL shows. The latter is based on the FACT algorithm which is a precursor
to QUEST. One major difference between the QUEST and FACT algorithms is
that the former employs the cost-complexity pruning method of CART whereas
the latter does not. Our results suggest that some form of bottom-up pruning may
be essential for low error rates.
If the purpose of constructing an algorithm is for data interpretation, then perhaps
only decision rules or trees with univariate splits will suffice. With the exception
of CAL and T1, the differences in mean error rates of the decision rule and tree
algorithms are not statistically significant from that of POL. IC0 has the lowest
mean error rate and QU0 is best in terms of mean ranks. C4R and C4T are not far
behind. Any of these four algorithms should provide good classification accuracy.
C4T is the fastest by far, although it tends to yield trees with twice as many leaves as
IC0 and QU0. C4R is the next fastest, but Figure 3 shows that it does not scale well.
IC0 is slightly faster and its trees have slightly fewer leaves than QU0. However,
Loh and Shih (1997) show that CART-based algorithms such as IC0 are prone to
produce spurious splits in some situations.
Acknowledgments
We are indebted to P. Auer, C. E. Brodley, W. Buntine, T. Hastie, R. C. Holte,
C. Kooperberg, S. K. Murthy, J. R. Quinlan, W. Sarle, B. Schulmeister, and W.
Taylor for help and advice on the installation of the computer programs. We
are also grateful to J. W. Molyneaux for providing the 1987 National Indonesia
Contraceptive Prevalence Survey data. Finally, we thank W. Cohen, F. Provost,
and the reviewers for many helpful comments and suggestions.
--R
Categorical Data Analysis
Increasing the efficiency of data mining algorithms with breadth-first marker propagation
The New S Language
Neural Networks for Pattern Recognition
Classification and Regression Trees
Simplifying decision trees: A survey
Multivariate versus univariate decision trees
Multivariate decision trees
A comparison of decision tree classifiers with backpropagation neural networks for multimodal classification problems
Analysis of attitudes toward workplace smoking restrictions
Learning classification trees
Introduction to IND Version 2.1 and Recursive Partitioning
Fast effective rule induction
Neural networks
Multivariate adaptive regression splines (with discussion)
The use of ranks to avoid the assumption of normality implicit in the analysis of variance
Construction and Assessment of Classification Rules
Hedonic prices and the demand for clean air
Discriminant analysis by Gaussian mixtures
Penalized discriminant analysis
Flexible discriminant analysis by optimal scoring
Nonparametric Statistical Methods
Very simple classification rules perform well on most commonly used datasets
Applied Multivariate Statistical Analysis
Polychotomous regression
Cancer diagnosis via linear programming
UCI Repository of Machine Learning Databases
Machine Learning
Automatic construction of decision trees for classification
The decision-tree algorithm CAL5 based on a statistical approach to its splitting algorithm
A system for induction of oblique decision trees
Applied Linear Statistical Models
The effects of training set size on decision tree complexity
Improved use of continuous attributes in C4.5
Pattern Recognition and Neural Networks
Neural networks and statistical models
SAS Institute
Symbolic and neural learning algorithms: an empirical comparison
Modern Applied Statistics with S-Plus
Diagnostic schemes for fine needle aspirates of breast masses
Fine needle aspiration for breast mass diagnosis
Statistical approach to fine needle aspiration diagnosis of breast masses
--TR
Applied multivariate statistical analysis
Symbolic and Neural Learning Algorithms
C4.5: programs for machine learning
Very Simple Classification Rules Perform Well on Most Commonly Used Datasets
Multivariate Decision Trees
Self-organizing maps
SAS/ETS User''s Guide, Version 6
Neural Networks for Pattern Recognition
The Effects of Training Set Size on Decision Tree Complexity
Multivariate Versus Univariate Decision Trees
Simplifying decision trees: A survey
--CTR
Ganesan Velayathan , Seiji Yamada, Behavior-based web page evaluation, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Ganesan Velayathan , Seiji Yamada, Behavior-Based Web Page Evaluation, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.409-412, December 18-22, 2006
Samuel E. Buttrey , Ciril Karo, Using k-nearest-neighbor classification in the leaves of a tree, Computational Statistics & Data Analysis, v.40 n.1, p.27-37, 28 July 2002
Kweku-Muata Osei-Bryson, Evaluation of decision trees: a multi-criteria approach, Computers and Operations Research, v.31 n.11, p.1933-1945, September 2004
Richi Nayak , Laurie Buys , Jan Lovie-Kitchin, Data mining in conceptualising active ageing, Proceedings of the fifth Australasian conference on Data mining and analystics, p.39-45, November 29-30, 2006, Sydney, Australia
Xiangyang Li , Nong Ye, A supervised clustering algorithm for computer intrusion detection, Knowledge and Information Systems, v.8 n.4, p.498-509, November 2005
Laura Elena Raileanu , Kilian Stoffel, Theoretical Comparison between the Gini Index and Information Gain Criteria, Annals of Mathematics and Artificial Intelligence, v.41 n.1, p.77-93, May 2004
Nong Ye , Xiangyang Li, A scalable, incremental learning algorithm for classification problems, Computers and Industrial Engineering, v.43 n.4, p.677-692, September 2002
Jonathan Eckstein , Peter L. Hammer , Ying Liu , Mikhail Nediak , Bruno Simeone, The Maximum Box Problem and its Application to Data Analysis, Computational Optimization and Applications, v.23 n.3, p.285-298, December 2002
Sattar Hashemi , Mohammad R. Kangavari, Parallel learning using decision trees: a novel approach, Proceedings of the 4th WSEAS International Conference on Applied Mathematics and Computer Science, p.1-8, April 25-27, 2005, Rio de Janeiro, Brazil
Khaled M. S. Badran , Peter I. Rockett, The roles of diversity preservation and mutation in preventing population collapse in multiobjective genetic programming, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Nigel Williams , Sebastian Zander , Grenville Armitage, A preliminary performance comparison of five machine learning algorithms for practical IP traffic flow classification, ACM SIGCOMM Computer Communication Review, v.36 n.5, October 2006
Sorin Alexe , Peter L. Hammer, Accelerated algorithm for pattern detection in logical analysis of data, Discrete Applied Mathematics, v.154 n.7, p.1050-1063, 1 May 2006
S. Ruggieri, Efficient C4.5, IEEE Transactions on Knowledge and Data Engineering, v.14 n.2, p.438-444, March 2002
Md. Zahidul Islam , Ljiljana Brankovic, A framework for privacy preserving classification in data mining, Proceedings of the second workshop on Australasian information security, Data Mining and Web Intelligence, and Software Internationalisation, p.163-168, January 01, 2004, Dunedin, New Zealand
Efstathios Stamatatos , Gerhard Widmer, Automatic identification of music performers with learning ensembles, Artificial Intelligence, v.165 n.1, p.37-56, June 2005
Niels Landwehr , Mark Hall , Eibe Frank, Logistic Model Trees, Machine Learning, v.59 n.1-2, p.161-205, May 2005
Kar-Ann Toh , Quoc-Long Tran , Dipti Srinivasan, Benchmarking a Reduced Multivariate Polynomial Pattern Classifier, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.6, p.740-755, June 2004
Abraham Bernstein , Foster Provost , Shawndra Hill, Toward Intelligent Assistance for a Data Mining Process: An Ontology-Based Approach for Cost-Sensitive Classification, IEEE Transactions on Knowledge and Data Engineering, v.17 n.4, p.503-518, April 2005
Rich Caruana , Alexandru Niculescu-Mizil, An empirical comparison of supervised learning algorithms, Proceedings of the 23rd international conference on Machine learning, p.161-168, June 25-29, 2006, Pittsburgh, Pennsylvania
Nicolas Baskiotis , Michle Sebag, C4.5 competence map: a phase transition-inspired approach, Proceedings of the twenty-first international conference on Machine learning, p.10, July 04-08, 2004, Banff, Alberta, Canada
Gabriela Alexe , Peter L. Hammer, Spanned patterns for the logical analysis of data, Discrete Applied Mathematics, v.154 n.7, p.1039-1049, 1 May 2006
Ingolf Geist, A framework for data mining and KDD, Proceedings of the 2002 ACM symposium on Applied computing, March 11-14, 2002, Madrid, Spain
Anthony J. T. Lee , Yao-Te Wang, Efficient data mining for calling path patterns in GSM networks, Information Systems, v.28 n.8, p.929-948, December
Kweku-Muata Osei-Bryson, Post-pruning in decision tree induction using multiple performance measures, Computers and Operations Research, v.34 n.11, p.3331-3345, November, 2007
Friedhelm Schwenker , Hans A. Kestler , Gther Palm, Unsupervised and supervised learning in radial-basis-function networks, Self-Organizing neural networks: recent advances and applications, Springer-Verlag New York, Inc., New York, NY, 2001
Chen-Fu Chien , Wen-Chih Wang , Jen-Chieh Cheng, Data mining for yield enhancement in semiconductor manufacturing and an empirical study, Expert Systems with Applications: An International Journal, v.33 n.1, p.192-198, July, 2007
Irma Becerra-Fernandez , Stelios H. Zanakis , Steven Walczak, Knowledge discovery techniques for predicting country investment risk, Computers and Industrial Engineering, v.43 n.4, p.787-800, September 2002
Zhiwei Fu , Bruce L. Golden , Shreevardhan Lele , S. Raghavan , Edward Wasil, Diversification for better classification trees, Computers and Operations Research, v.33 n.11, p.3185-3202, November 2006
Ruey-Shiang Guh, A hybrid learning-based model for on-line detection and analysis of control chart patterns, Computers and Industrial Engineering, v.49 n.1, p.35-62, August 2005
Krzysztof Krawiec, Genetic Programming-based Construction of Features for Machine Learning and Knowledge Discovery Tasks, Genetic Programming and Evolvable Machines, v.3 n.4, p.329-343, December 2002
Huimin Zhao , Sudha Ram, Combining schema and instance information for integrating heterogeneous data sources, Data & Knowledge Engineering, v.61 n.2, p.281-303, May, 2007
Kar-Ann Toh, Training a reciprocal-sigmoid classifier by feature scaling-space, Machine Learning, v.65 n.1, p.273-308, October 2006
O. Asparoukhov , W. J. Krzanowski, Non-parametric smoothing of the location model in mixed variable discrimination, Statistics and Computing, v.10 n.4, p.289-297, October 2000
R. Chandrasekaran , Young U. Ryu , Varghese S. Jacob , Sungchul Hong, Isotonic Separation, INFORMS Journal on Computing, v.17 n.4, p.462-474, October 2005
Ching-Pao Chang , Chih-Ping Chu, Defect prevention in software processes: An action-based approach, Journal of Systems and Software, v.80 n.4, p.559-570, April, 2007
Tony Van Gestel , Johan A. K. Suykens , Bart Baesens , Stijn Viaene , Jan Vanthienen , Guido Dedene , Bart De Moor , Joos Vandewalle, Benchmarking Least Squares Support Vector Machine Classifiers, Machine Learning, v.54 n.1, p.5-32, January 2004
Elena Baralis , Silvia Chiusano, Essential classification rule sets, ACM Transactions on Database Systems (TODS), v.29 n.4, p.635-674, December 2004
Siddharth Pal , David J. Miller, An Extension of Iterative Scaling for Decision and Data Aggregation in Ensemble Classification, Journal of VLSI Signal Processing Systems, v.48 n.1-2, p.21-37, August 2007
Man Cheang , Kwong Sak Leung , Kin Hong Lee, Genetic parallel programming: design and implementation, Evolutionary Computation, v.14 n.2, p.129-156, June 2006
Foster Provost , Pedro Domingos, Tree Induction for Probability-Based Ranking, Machine Learning, v.52 n.3, p.199-215, September
Johannes Gehrke , Wie-Yin Loh , Raghu Ramakrishnan, Classification and regression: money *can* grow on trees, Tutorial notes of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, p.1-73, August 15-18, 1999, San Diego, California, United States
Vasant Dhar , Dashin Chou , Foster Provost, Discovering Interesting Patterns for Investment Decision Making with GLOWER ◯-A Genetic Learner Overlaid with Entropy Reduction, Data Mining and Knowledge Discovery, v.4 n.4, p.251-280, October 2000
Perlich , Foster Provost , Jeffrey S. Simonoff, Tree induction vs. logistic regression: a learning-curve analysis, The Journal of Machine Learning Research, 4, p.211-255, 12/1/2003
Foster Provost , Venkateswarlu Kolluri, Data mining tasks and methods: scalability, Handbook of data mining and knowledge discovery, Oxford University Press, Inc., New York, NY, 2002
Gabriela Alexe , Sorin Alexe , Tibrius O. Bonates , Alexander Kogan, Logical analysis of data --- the vision of Peter L. Hammer, Annals of Mathematics and Artificial Intelligence, v.49 n.1-4, p.265-312, April 2007
Krzysztof J. Cios , Lukasz A. Kurgan, CLIP4: hybrid inductive machine learning algorithm that generates inequality rules, Information Sciences: an International Journal, v.163 n.1-3, p.37-83, 14 June 2004
Foster Provost , Venkateswarlu Kolluri, A Survey of Methods for Scaling Up Inductive Algorithms, Data Mining and Knowledge Discovery, v.3 n.2, p.131-169, June 1999
S. B. Kotsiantis , I. D. Zaharakis , P. E. Pintelas, Machine learning: a review of classification and combining techniques, Artificial Intelligence Review, v.26 n.3, p.159-190, November 2006 | decision tree;classification tree;statistical classifier;neural net |
352664 | A Study of Reinforcement Learning in the Continuous Case by the Means of Viscosity Solutions. | This paper proposes a study of Reinforcement Learning (RL) for continuous state-space and time control problems, based on the theoretical framework of viscosity solutions (VSs). We use the method of dynamic programming (DP) which introduces the value function (VF), expectation of the best future cumulative reinforcement. In the continuous case, the value function satisfies a non-linear first (or second) order (depending on the deterministic or stochastic aspect of the process) differential equation called the Hamilton-Jacobi-Bellman (HJB) equation. It is well known that there exists an infinity of generalized solutions (differentiable almost everywhere) to this equation, other than the VF. We show that gradient-descent methods may converge to one of these generalized solutions, thus failing to find the optimal control.In order to solve the HJB equation, we use the powerful framework of viscosity solutions and state that there exists a unique viscosity solution to the HJB equation, which is the value function. Then, we use another main result of VSs (their stability when passing to the limit) to prove the convergence of numerical approximations schemes based on finite difference (FD) and finite element (FE) methods. These methods discretize, at some resolution, the HJB equation into a DP equation of a Markov Decision Process (MDP), which can be solved by DP methods (thanks to a strong contraction property) if all the initial data (the state dynamics and the reinforcement function) were perfectly known. However, in the RL approach, as we consider a system in interaction with some a priori (at least partially) unknown environment, which learns from experience, the initial data are not perfectly known but have to be approximated during learning. The main contribution of this work is to derive a general convergence theorem for RL algorithms when one uses only approximations (in a sense of satisfying some weak contraction property) of the initial data. This result can be used for model-based or model-free RL algorithms, with off-line or on-line updating methods, for deterministic or stochastic state dynamics (though this latter case is not described here), and based on FE or FD discretization methods. It is illustrated with several RL algorithms and one numerical simulation for the Car on the Hill problem. | Introduction
This paper is about Reinforcement Learning (RL) in the continuous state-space and
time case. RL techniques (see (Kaelbling, Littman, & Moore, 1996) for a survey)
are adaptive methods for solving optimal control problems for which only a partial
amount of initial data are available to the system that learns. In this paper, we
focus on the Dynamic Programming (DP) method which introduces a function,
called the value function (VF) (or cost function), that estimates the best future
cumulative reinforcement (or cost) as a function of initial states.
RL in the continuous case is a difficult problem for at least two reasons. Since we
consider a continuous state-space, the first reason is that the value function has
to be approximated, either by using discretization (with grids or triangulations) or
general approximation (such as neural networks, polynomial functions, fuzzy sets,
etc.) methods. RL algorithms for continuous state-space have been implemented
with neural networks (see for example (Barto, Sutton, & Anderson, 1983), (Barto,
1990), (Gullapalli, 1992), (Williams, 1992), (Lin, 1993), (Sutton & Whitehead,
1993), (Harmon, Baird, & Klopf, 1996), and (Bertsekas & Tsitsiklis, 1996)), fuzzy
sets (see (Now'e, 1995), (Glorennec & Jouffe, 1997)), approximators based on state
aggregation (see (Singh, Jaakkola, & Jordan, 1994)), clustering (see (Mahadevan
& Connell, 1992)), sparse-coarse-coded functions (see (Sutton, 1996)) and variable
resolution grids (see (Moore, 1991), (Moore & Atkeson, 1995)). However, as it has
been pointed out by several authors, the combination of DP methods with function
approximators may produce unstable or divergent results even when applied to
very simple problems (see (Boyan & Moore, 1995), (Baird, 1995), (Gordon, 1995)).
Some results using clever algorithms (like Residual algorithms of (Baird, 1995)) or
particular classes of approximation functions (like the Averagers of (Gordon, 1995))
can lead to the convergence to a local or global solution within the class of functions
considered.
Anyway, it is difficult to define the class of functions (for a neural network, the
suitable architecture) within which the optimal value function could be approxi-
mated, knowing that we have little prior knowledge of its smoothness properties.
The second reason is that we consider a continuous-time variable. Indeed,
the value function derived from the DP equation (see (Bellman, 1957)), relates
the value at some state as a function of the values at successor states. In the
continuous-time limit, as the successor states get infinitely closer, the value at
some point becomes a function of its differential, defining a non linear differential
equation, called the Hamilton-Jacobi-Bellman (HJB) equation.
In the discrete-time case, the resolution of the Markov Decision Process (MDP)
is equivalent to the resolution, on the whole state-space, of the DP equation ; this
property provides us with DP or RL algorithms that locally solve the DP equation
and lead to the optimal solution. With continuous time, it is no longer the case
since the HJB equation holds only if the value function is differentiable. And in
general, the value function is not differentiable everywhere (even for smooth initial
data), thus this equation cannot be solved in the usual sense, because this leads to
either no solution (if we look for classical solutions, i.e. differentiable everywhere)
or an infinity of solutions (if we look for generalized solutions, i.e. differentiable
almost everywhere).
This fact, which will be illustrated with a very simple 1-dimensional example,
explains why there could be many "bad" solutions to gradient-descent methods for
RL. Indeed, such methods intend to minimize the integral of some Hamiltonian.
But the generalized solutions of the HJB equation are global optima of this problem,
so the gradient-descent methods may lead to approximate any (among an infinity
of) generalized solutions giving little chance to reach the desired value function.
In order to deal with the problem of integrating the HJB equation, we use the
formalism of Viscosity Solutions (VSs), introduced by Crandall and Lions (in (Cran-
dall & Lions, 1983) ; see the user's guide (Crandall, Ishii, & Lions, 1992)) in order
to define an adequate class (which appears as a weak formulation) of solutions to
non-linear first (and second) order differential equation such as HJB equations.
The main properties of VSs are their existence, their uniqueness and the fact that
the value function is a VS. Thus, for a large class of optimal control problems, there
exists a unique VS to the HJB equation, which is the value function. Furthermore,
VSs have remarkable stability properties when passing to the limit, from which we
can derive proofs of convergence for discretization methods.
Our approach here consists in defining a class of convergent numerical schemes,
among which are the finite element (FE) and finite difference (FD) approximation
schemes introduced in (Kushner, 1990) and (Kushner & Dupuis, 1992) to discretize,
at some resolution ffi , the HJB equation into a DP equation for some discrete Markov
Decision Process. We apply a result of convergence (from (Barles & Souganidis,
1991)) to prove the convergence of the value function V ffi of the discretized MDP to
the value function V of the continuous problem as the discretization step ffi tends
to zero.
The DP equation of the discretized MDP could be solved by any DP method
(because the DP equation satisfies a "strong" contraction property leading successive
iterations to converge to the value function, the unique fixed point of this
equation), but only if the data (the transition probabilities and the reinforcements)
were perfectly known by the learner, which is not the case in the RL approach.
Thus, we propose a result of convergence for RL algorithms when we only use
"approximations" of these data (in the sense that the approximated DP equation
need to satisfy some "weak" contraction property). The convergence occurs as the
number of iterations tends to infinity and the discretization step tends to zero.
This result applies to model-based or model-free RL algorithms, for off-line or on-line
methods, for deterministic or stochastic state dynamics, and for FE or FD
based discretization methods. It is illustrated with several RL algorithms and one
numerical simulation for the "Car on the Hill" problem.
In what follows, we consider the discounted, infinite-time horizon case (for a
description of the finite-time horizon case, see (Munos, 1997a)) with deterministic
state dynamics (for the stochastic case, see (Munos & Bourgine, 1997) or (Munos,
1997a)).
Section 2 introduces the formalism for RL in the continuous case, defines the
value function, states the HJB equation and presents a result showing continuity
of the VF.
Section 3 illustrates the problems of classical solutions to the HJB equation with
a simple 1-dimensional example and introduces the notion of viscosity solutions.
Section 4 is concerned with numerical approximation of the value function using
discretization schemes. The finite element and finite difference methods are
presented and a general convergence theorem (whose proof is in appendix A) is
stated.
Section 5 states a convergence theorem (whose proof is in appendix B) for a
general class of RL algorithms and illustrates it with several algorithms.
Section 6 presents a simple numerical simulation for the "Car on the Hill" problem.
2. A formalism for reinforcement learning in the continuous case
The objective of reinforcement learning is to learn from experience how to influence
the behavior of a dynamic system in order to maximize some payoff function
called the reinforcement or reward function (or equivalently to minimize some cost
function). This is a problem of optimal control in which the state dynamics and
the reinforcement function are, a priori, at least partially unknown.
In this paper we are concerned with deterministic problems in which the dynamics
of the system is governed by a controlled differential equation. For similar results
in the stochastic case, see (Munos & Bourgine, 1997), (Munos, 1997a), for a related
work using multi-grid methods, see (Pareigis, 1996).
The two possible approaches for optimal control are Pontryagin's maximum principle
(for theoretical work, see (Pontryagin, Boltyanskii, Gamkrelidze, & Mischenko,
1962) and more recently (Fleming & Rishel, 1975); for a study of Temporal Differ-
ence, see (Doya, 1996), and for an application to the control with neural networks,
see (Bersini & Gorrini, 1997)) and the Bellman's Dynamic Programming (DP) (in-
troduced in (Bellman, 1957)) approach considered in this paper.
2.1. Deterministic optimal control for discounted infinite-time horizon problems
In what follows, we consider infinite-time horizon problems under the discounted
framework. In that case, the state dynamics do not depend on the time. For a
study of the finite time horizon case (for which there is a dependency in time), see
(Munos, 1997a).
Let x(t) be the state of the system, which belongs to the state-space Ō, closure of
an open subset O ⊂ ℝ^d. The evolution of the system depends on the current state
x(t) and control (or action) u(t) ∈ U, where U, a closed subset, is the control space;
it is defined by the controlled differential equation:

dx(t)/dt = f(x(t), u(t)),    (1)

where the control u(t) is a bounded, Lebesgue measurable function with values in
U. The function f is called the state dynamics. We assume that f is Lipschitzian
with respect to the first variable: there exists some constant L_f > 0 such that:

||f(x, u) − f(y, u)|| ≤ L_f ||x − y||  for all x, y ∈ Ō and u ∈ U.    (2)

For initial state x_0 at time t = 0, the choice of a control u(t) leads to a unique
(because the state dynamics (1) is deterministic) trajectory x(t) (see figure 1).
REINFORCEMENT LEARNING BY THE MEANS OF VISCOSITY SOLUTIONS 5
Definition 1. We define the discounted reinforcement functional J, which depends
on the initial data x_0 and the control u(t) for 0 ≤ t ≤ τ, with τ the exit time of x(t)
from Ō (and τ = ∞ if the trajectory always stays inside Ō):

J(x_0; u(·)) = ∫_0^τ γ^t r(x(t), u(t)) dt + γ^τ R(x(τ)),

with r(x, u) the current reinforcement (defined on Ō) and R(x) the boundary reinforcement
(defined on ∂O, the boundary of the state-space). γ ∈ [0, 1) is the
discount factor, which weights short-term rewards more than long-term ones (and
ensures the convergence of the integral).
The objective of the control problem is to find, for any initial state x_0, the
control u*(t) that optimizes the reinforcement functional J(x_0; u(·)).
Figure 1. The state-space Ō. From initial state x_0 at t = 0, the choice of control u(t) leads to the trajectory x(t) for 0 ≤ t ≤ τ, where τ is the exit time from the state-space.
Remark. Unlike the discrete case, in the continuous case, we need to consider
two different reinforcement functions : r is obtained and accumulated during the
running of the trajectory, whereas R occurs whenever the trajectory exits from the
state-space (if it does). This formalism enables us to consider many optimal control
problem, such as reaching a target while avoiding obstacles, viability problems, and
many other optimization problems.
Definition 2. We define the value function, the maximum value of the reinforcement
functional, as a function of the initial state at time t = 0:

V(x) = sup_{u(·)} J(x; u(·)).
Before giving some properties of the value function (HJB equation, continuity and
differentiability properties), let us first describe the reinforcement learning frame-work
considered here and the constraints it implies.
2.2. The reinforcement learning approach
RL techniques are adaptive methods for solving optimal control problems whose
data are a priori (at least partially) unknown. Learning occurs iteratively, based
on the experience of the interactions between the system and the environment,
through the (current and boundary) reinforcement signals.
The objective of RL is to find the optimal control, and the techniques used are
those of DP. However, in the RL approach, the state dynamics f(x; u), and the
reinforcement functions r(x; u); R(x) are partially unknown to the system. Thus
RL is a constructive and iterative process that estimates the value function by
successive approximations.
The learning process includes both a mechanism for the choice of the control,
which has to deal with the exploration versus exploitation dilemma (exploration
provides the system with new information about the unknown data, whereas exploitation
consists in optimizing the estimated values based on the current knowl-
edge) (see (Meuleau, 1996)), and a mechanism for integrating new information for
refining the approximation of the value function. The latter topic is the object of
this paper.
The study and the numerical approximations of the value function is of great
importance in RL and DP because from this function we can deduce an optimal
feed-back controller. The next section shows that the value function satisfies a local
property, called the Hamilton-Jacobi-Bellman equation, and points out its relation
to the optimal control.
2.3. The Hamilton-Jacobi-Bellman equation
Using the dynamic programming principle (introduced in (Bellman, 1957)), we can
prove that the value function satisfies a local condition, called the Hamilton-Jacobi-
Bellman (HJB) equation (see (Fleming & Soner, 1993) for a complete survey). In
the deterministic case studied here, it is a first-order non-linear partial differential
equation (in the stochastic case, we can prove that a similar equation of order two
holds). Here we assume that U is a compact set.
Theorem 1 (Hamilton-Jacobi-Bellman) If the value function V is differentiable
at x, let DV(x) be the gradient of V at x; then the Hamilton-Jacobi-Bellman equation

V(x) ln γ + sup_{u∈U} [ DV(x)·f(x, u) + r(x, u) ] = 0    (5)

holds at x ∈ O. Additionally, V satisfies the following boundary condition:

V(x) ≥ R(x)  for x ∈ ∂O.    (6)

Remark. The boundary condition is an inequality because at some boundary points
(for example at x_1 ∈ ∂O on figure 1) there may exist a control u(t) such that
the corresponding trajectory stays inside Ō and whose reinforcement functional is
strictly superior to the immediate boundary reinforcement R(x_1). In such cases,
(6) holds with a strict inequality.
Remark. Using an equivalent definition, the HJB equation (5) means that V is
the solution of the equation:

H(x, V, DV) = 0  for x ∈ O,    (7)

with the Hamiltonian H defined, for any differentiable function W, by:

H(x, W, DW) = W(x) ln γ + sup_{u∈U} [ DW(x)·f(x, u) + r(x, u) ].

Dynamic programming computes the value function in order to find the optimal
control with a feed-back control policy, that is a function π(x) : Ō → U such that
the optimal control u*(t) at time t depends on the current state x(t): u*(t) = π(x(t)).
Indeed, from the value function, we deduce the following optimal feed-back control
policy:

π(x) ∈ arg sup_{u∈U} [ DV(x)·f(x, u) + r(x, u) ].    (8)
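For intuition, the feed-back rule (8) can be applied to any differentiable approximation of V by estimating its gradient numerically. The sketch below is not from the paper; the central-difference gradient, the finite set of candidate controls and the function names are illustrative assumptions.

import numpy as np

def greedy_control(x, V, f, r, controls, eps=1e-4):
    """Return the control u maximizing DV(x).f(x,u) + r(x,u), with DV estimated by central differences."""
    x = np.asarray(x, dtype=float)
    d = len(x)
    DV = np.zeros(d)
    for i in range(d):                       # numerical gradient of V at x
        e = np.zeros(d); e[i] = eps
        DV[i] = (V(x + e) - V(x - e)) / (2 * eps)
    scores = [DV @ np.asarray(f(x, u)) + r(x, u) for u in controls]
    return controls[int(np.argmax(scores))]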
Now that we have pointed out the importance of computing the value function
for defining the optimal control, we show some properties of V (continuity, dif-
ferentiability) and how to integrate (and in what sense) the HJB equation for approximating
.
2.4. Continuity of the value function
The property of continuity of the value function may be obtained under the following
assumption (9) concerning the state dynamics f around the boundary ∂O (which
is assumed smooth). For all x ∈ ∂O, let n(x) be the outward normal of O at x (for
example, see figure 1); assumption (9) requires that, at any point of the boundary,
the trajectories are not all tangential to the boundary of the state-space.
Theorem 2 (Continuity) Suppose that (2) and (9) are satisfied; then the value
function is continuous in Ō.
The proof of this theorem can be found in (Barles & Perthame, 1990).
3. Introduction to viscosity solutions
From theorem 1, we know that if the value function is differentiable then it solves
the HJB equation. However, in general, the value function is not differentiable
everywhere even when the data of the problem are smooth. Thus, we cannot expect
to find classical solutions (i.e. differentiable everywhere) to the HJB equation. Now,
if we look for generalized solutions (i.e. differentiable almost everywhere), we find
that there are many solutions other than the value function that solve the HJB
equation.
Therefore, we need to define a weak class of solutions to this equation. Crandall
and Lions introduced such a weak formulation by defining the notion of Viscosity
Solutions (VSs) in (Crandall & Lions, 1983). For a complete survey, see (Crandall
et al., 1992), (Barles, 1994) or (Fleming & Soner, 1993). This notion has been
developed for a very broad class of non-linear first and second order differential
equations (including HJB equations for the stochastic case of controlled diffusion
processes). Among the important properties of viscosity solutions are some uniqueness
results, the stability of solutions to approximated equations when passing to
the limit (this very important result will be used to prove the convergence of the
approximation schemes in section 4.4) and mainly the fact that the value function is
the unique viscosity solution of the HJB equation (5) with the boundary condition
(6).
First, let us illustrate with a simple example the problems raised here when one
looks for classical or generalized solutions to the HJB equation.
3.1. Three problems illustrated with a simple example
Let us study a very simple control problem in 1 dimension. Let the state x(t) ∈ [0, 1],
the control u(t) ∈ {−1, +1} and the state dynamics be dx/dt = u.
Consider a current reinforcement r(x, u) = 0 everywhere and a boundary reinforcement
defined by R(0) and R(1). In this example, we deduce that the value function is:

V(x) = max{ γ^x R(0), γ^(1−x) R(1) },    (10)

and the HJB equation is:

V(x) ln γ + |DV(x)| = 0  for x ∈ (0, 1),    (11)

with the boundary conditions V(0) ≥ R(0) and V(1) ≥ R(1).
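To make the example concrete, the closed-form value function (10) can be evaluated on a grid. The snippet below is an illustrative sketch; the particular values of γ, R(0) and R(1) are arbitrary choices, not taken from the paper.

import numpy as np

def value_1d(x, R0, R1, gamma=0.6):
    """Value function of the 1-D problem: best of exiting left (time x) or right (time 1-x)."""
    return np.maximum(gamma ** x * R0, gamma ** (1.0 - x) * R1)

xs = np.linspace(0.0, 1.0, 101)
V = value_1d(xs, R0=1.0, R1=0.3)   # illustrative boundary reinforcements
# The kink where the two branches cross is the point where V is not differentiable.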
1. First problem: there is no classical solution to the HJB equation.
The value function corresponding to a particular choice of R(0) and R(1) is
plotted in figure 2. We observe that V is not differentiable everywhere, thus
does not satisfy the HJB equation everywhere: there is no classical solution to
the HJB equation.
Figure 2. The value function is not differentiable everywhere.
2. Second problem : there are several generalized solutions.
If one looks for generalized solutions that satisfy the HJB equation almost
everywhere, we find many functions other than the value function. An example
of a function satisfying (11) almost everywhere with the boundary conditions
is illustrated in figure 3.
Figure 3. There are many generalized solutions other than the value function.
Remark. This problem is of great importance when one wants to use gradient-descent
methods with some general function approximator, like neural networks,
to approximate the value function. The use of gradient-descent methods may
lead to approximate any of the generalized solutions of the HJB equation and
thus fail to find the value function. Indeed, suppose that we use a gradient-descent
method for finding a function W minimizing the error:

E(W) = ∫_{x∈O} H(x, W(x), DW(x))² dx,

with H the Hamiltonian defined in section 2.3. Then, the method will converge,
in the best case, to any generalized solution V_g of (7) (because these functions
are global optima of this minimization problem, since their error E(V_g) = 0),
which is probably different from the value function V. Moreover, the control
induced by such functions (by the closed loop policy (8)) might be very different
from the optimal control (defined by V ). For example, the function plotted in
figure 3 generates a control (given by the direction of the gradient) very different
from the optimal control, defined by the value function plotted in figure 2.
In fact, in such continuous time and space problems, there exists an infinity of
global minima for gradient descent methods, and these functions may be very
different from the expected value function.
In the case of neural networks, we usually use smooth functions (differentiable
everywhere), thus neither the value function V (figure 2), nor a generalized
solution V g (figure 3) can be exactly represented, but both can be approximated.
Let us denote by Ṽ and Ṽ_g the best approximations of V and V_g in the network.
Then Ṽ and Ṽ_g are local minima of the gradient-descent method minimizing E,
but nothing proves that Ṽ is a global minimum. In this example, it could seem
that V is "smoother" than the generalized solutions (because it has only one
discontinuity of its derivative instead of several ones) in the sense that E(Ṽ) ≤
E(Ṽ_g), but this is not true in general. In any case, in the continuous-time case, when we use
a smooth function approximator, there exists an infinity of local solutions for
the problem of minimizing the error E and nothing proves that the expected
Ṽ is a global solution. See (Munos, Baird, & Moore, 1999) for some numerical
experiments on simple (one and two dimensional) problems.
When time is discretized, this problem disappears, but we still have to be careful
when passing to the limit. In this paper, we describe discretization methods
that converge to the value function when passing to the limit of the continuous
case.
3. Third problem : the boundary condition is an inequality.
Here we illustrate the problem of the inequality of the boundary condition. For
a choice of R(0) and R(1) with R(0) < γ R(1), the corresponding value function is plotted in figure 4.
We observe that V (0) is strictly superior to the boundary reinforcement R(0).
This strict inequality occurs at any boundary point x 2 @O for which there
exists a control u(t) such that the trajectory goes immediately inside O and
generates a better reinforcement functional than the boundary reinforcement
R(x) obtained by exiting from O at x:
We will give (in definition 4 that follows) a weak (viscosity) formulation of the
boundary condition (6).
Figure 4. The boundary condition may hold with a strict inequality.
3.2. Definition of viscosity solutions
In this section, we define the notion of viscosity solutions for continuous functions
(a definition for discontinuous functions is given in appendix A).
Definition 3 (Viscosity solution). Let W be a continuous real-valued function
defined in O.
• W is a viscosity sub-solution of (7) in O if for all functions φ ∈ C¹(O) and all x ∈ O
local maximum of W − φ such that W(x) = φ(x), we have H(x, φ(x), Dφ(x)) ≥ 0.
• W is a viscosity super-solution of (7) in O if for all functions φ ∈ C¹(O) and all x ∈ O
local minimum of W − φ such that W(x) = φ(x), we have H(x, φ(x), Dφ(x)) ≤ 0.
• W is a viscosity solution of (7) in O if it is a viscosity sub-solution and a viscosity
super-solution of (7) in O.
3.3. Some properties of viscosity solutions
The following theorem (whose proof can be found in (Crandall et al., 1992)) states
that the value function is a viscosity solution.
Theorem 3 Suppose that the hypotheses of Theorem 2 hold. Then the value function
V is a viscosity solution of (7) in O.
In order to deal with the inequality of the boundary condition (6), we define
a viscosity formulation as a differential-type condition instead of a pure Dirichlet
condition.
Definition 4 (Viscosity boundary condition). Let W be a continuous real-valued
function defined on Ō.
• W is a viscosity sub-solution of (7) in Ō with the boundary condition (6) if it
is a viscosity sub-solution of (7) in O and if, for all functions φ ∈ C¹(Ō) and all x ∈ ∂O
local maximum of W − φ such that W(x) = φ(x), the relaxed boundary condition (12),
which combines the Hamiltonian H(x, φ(x), Dφ(x)) with the difference W(x) − R(x) through a min, holds.
• W is a viscosity super-solution of (7) in Ō with the boundary condition (6) if it
is a viscosity super-solution of (7) in O and if, for all functions φ ∈ C¹(Ō) and all x ∈ ∂O
local minimum of W − φ such that W(x) = φ(x), the corresponding relaxed boundary condition (13),
which combines H(x, φ(x), Dφ(x)) with W(x) − R(x) through a min, holds.
• W is a viscosity solution of (7) in Ō with the boundary condition (6) if it is a
viscosity sub- and super-solution of (7) in Ō with the boundary condition (6).
Remark. When the Hamiltonian H is related to an optimal control problem
(which is the case here), the condition (13) is simply equivalent to the boundary
inequality (6).
With this definition, theorem 3 extends to viscosity solutions with boundary
conditions. Moreover, from a result of uniqueness, we obtain the following theorem
(whose proof is in (Crandall et al., 1992) or (Fleming & Soner, 1993)) :
Theorem 4 Suppose that the hypotheses of theorem 2 hold. Then the value function
V is the unique viscosity solution of (7) in O with the boundary condition
(6).
Remark. This very important theorem shows the relevance of the viscosity solutions
formalism for HJB equations. Moreover this provides us with a very useful
framework (as will be illustrated in next few sections) for proving the convergence
of numerical approximations to the value function.
Now we study numerical approximations of the value function. We define approximation
schemes by discretizing the HJB equation with finite element or finite
difference methods, and prove the convergence to the viscosity solution of the HJB
equation, thus to the value function of the control problem.
4. Approximation with convergent numerical schemes
4.1. Introduction
The main idea is to discretize the HJB equation into a Dynamic Programming
(DP) equation for some stochastic Markovian Decision Process (MDP). For any
resolution ffi , we can solve the MDP and find the discretized value function V ffi by
using DP techniques, which are guaranteed to converge since the DP equation is
a fixed-point equation satisfying some strong contraction property (see (Puterman,
1994), (Bertsekas, 1987)). We are also interested in the convergence properties of
the discretized V ffi to the value function V as ffi decreases to 0.
From (Kushner, 1990) and (Kushner & Dupuis, 1992), we define two classes
of approximation schemes based on finite difference (FD) (section 4.2) and finite
element methods (section 4.3). Section 4.4 provides a very general theorem of
convergence (deduced from the abstract formulation of (Barles & Souganidis, 1991)
and using the stability properties of viscosity solutions), that covers both FE and
FD methods (the only important required properties for the convergence are the
monotonicity and the consistency of the scheme).
In the following, we assume that the control space U is approximated by finite
control spaces U^δ ⊂ U such that U^δ tends to U as δ tends to 0.
4.2. Approximation with finite difference methods
Let (e_1, ..., e_d) be a basis for ℝ^d. The dynamics are f = (f_1, ..., f_d). Let the
positive and negative parts of f_i be f_i^+ = max(f_i, 0) and f_i^− = max(−f_i, 0). For
any discretization step δ, let us consider the lattice δℤ^d = {δ·(j_1, ..., j_d)}, where
j_1, ..., j_d are any integers, the discretized state-space Σ^δ = δℤ^d ∩ O, and the frontier ∂Σ^δ of Σ^δ, which denotes
the set of points ξ ∈ δℤ^d \ O such that at least one adjacent point ξ ± δe_i belongs to Σ^δ
(see figure 5). Let us denote by ||y||_1 = Σ_i |y_i|
the 1-norm of any vector y.
Figure 5. The discretized state-space Σ^δ (the dots) and its frontier ∂Σ^δ (the crosses).
The FD method consists of replacing the gradient DV(ξ) by the forward and
backward difference quotients of V at ξ ∈ Σ^δ in direction e_i:

Δ_i^+ V(ξ) = [V(ξ + δe_i) − V(ξ)] / δ,   Δ_i^− V(ξ) = [V(ξ) − V(ξ − δe_i)] / δ.

Thus the HJB equation can be approximated by the following equation:

V^δ(ξ) ln γ + sup_{u∈U^δ} { Σ_{i=1..d} [ f_i^+(ξ, u) Δ_i^+ V^δ(ξ) − f_i^−(ξ, u) Δ_i^− V^δ(ξ) ] + r(ξ, u) } = 0.

Knowing that (Δt ln γ) is an approximation of (γ^Δt − 1) as Δt tends to 0, we deduce
the following equivalent approximation equation: for ξ ∈ Σ^δ,

V^δ(ξ) = sup_{u∈U^δ} { γ^τ(ξ,u) Σ_{ξ'} p(ξ' | ξ, u) V^δ(ξ') + τ(ξ, u) r(ξ, u) }    (14)

with τ(ξ, u) = δ / ||f(ξ, u)||_1 and p(ξ ± δe_i | ξ, u) = f_i^±(ξ, u) / ||f(ξ, u)||_1 for i = 1, ..., d
(all other transition probabilities being zero), which is a DP equation for a finite Markov Decision Process whose state-space
is Σ^δ, whose control space is U^δ and whose probabilities of transition are p(ξ' | ξ, u) (see figure 6
for a geometrical interpretation).

Figure 6. A geometrical interpretation of the FD discretization. The continuous process (on the
left) is discretized at some resolution δ into an MDP (right). The transition probabilities p(ξ ± δe_i | ξ, u)
of the MDP are the coordinates of the vector (1/δ)(η − ξ), with η the projection of ξ onto the segment
joining the neighboring grid points, in a direction parallel to f(ξ, u).
From the boundary condition, we define the absorbing terminal states: for ξ ∈ ∂Σ^δ, V^δ(ξ) = R(ξ).
By defining F^δ_FD, the finite difference scheme, as the right-hand side of (14),

F^δ_FD[W](ξ) = sup_{u∈U^δ} { γ^τ(ξ,u) Σ_{ξ'} p(ξ' | ξ, u) W(ξ') + τ(ξ, u) r(ξ, u) },    (16)

equation (14) can be rewritten as the DP equation

V^δ = F^δ_FD[V^δ].    (17)

This equation states that V^δ is a fixed point of F^δ_FD. Moreover, as f is bounded
from above (with some value M_f), F^δ_FD satisfies the following strong contraction property:

||F^δ_FD[W_1] − F^δ_FD[W_2]||_∞ ≤ λ ||W_1 − W_2||_∞  with λ = γ^(δ/M_f) < 1,    (18)

and since λ < 1, there exists a fixed point, which is the value function V^δ of the MDP; it
is unique and can be computed by DP iterative methods (see (Puterman, 1994),
(Bertsekas, 1987)).
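The construction of the discretized MDP (holding times and transition probabilities) can be sketched as follows. This Python code is illustrative only; the function name and the list-based representation of transitions are assumptions, not from the paper.

import numpy as np

def fd_mdp_step(xi, u, f, delta):
    """Holding time and transition probabilities of the FD-discretized MDP at state xi, control u.

    Returns (tau, transitions) where transitions is a list of (next_state, probability):
    p(xi + delta*e_i | xi, u) = f_i^+ / ||f||_1  and  p(xi - delta*e_i | xi, u) = f_i^- / ||f||_1.
    Assumes f(xi, u) != 0.
    """
    fx = np.asarray(f(xi, u), dtype=float)
    norm1 = np.abs(fx).sum()
    tau = delta / norm1                      # holding time tau(xi, u) = delta / ||f(xi, u)||_1
    transitions = []
    for i, fi in enumerate(fx):
        e = np.zeros(len(fx)); e[i] = delta
        if fi > 0:
            transitions.append((tuple(np.asarray(xi) + e), fi / norm1))
        elif fi < 0:
            transitions.append((tuple(np.asarray(xi) - e), -fi / norm1))
    return tau, transitions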
Computation of V^δ and convergence. There are two standard methods
for computing the value function V^δ of some MDP: value iteration (V^δ is the
limit of the sequence of successive iterations V^δ_{n+1} = F^δ_FD[V^δ_n]) and policy iteration
(approximation in policy space by alternating policy evaluation steps and policy
improvement steps). See (Puterman, 1994), (Bertsekas, 1987) or (Bertsekas &
Tsitsiklis, 1996) for more information about DP theory. In section 5, we describe
RL methods for computing iteratively the approximated value functions V^δ_n.
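A minimal value-iteration loop for the discretized MDP might look as follows; this is an illustrative sketch (the stopping criterion, the dictionary data structures and the treatment of out-of-grid successors are assumptions).

def value_iteration(states, controls, mdp_step, r, R, boundary, gamma, tol=1e-8):
    """Iterate V_{n+1} = F[V_n] until convergence (guaranteed by the strong contraction property)."""
    V = {xi: 0.0 for xi in states}
    V.update({xi: R(xi) for xi in boundary})          # absorbing terminal states
    while True:
        diff = 0.0
        for xi in states:
            best = -float("inf")
            for u in controls:
                tau, transitions = mdp_step(xi, u)    # e.g. fd_mdp_step(xi, u, f, delta)
                q = gamma ** tau * sum(p * V.get(nxt, R(nxt)) for nxt, p in transitions)
                q += tau * r(xi, u)
                best = max(best, q)
            diff = max(diff, abs(best - V[xi]))
            V[xi] = best
        if diff < tol:
            return V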
In the following section, we study a similar method for discretizing the continuous
process into an MDP by using finite element methods. The convergence of these
two methods (i.e. the convergence of the discretized V ffi to the value function V as
tends to 0) will be derived from a general theorem in section 4.4.
4.3. Approximations with finite element methods
We use a finite element (FE) method (with linear simplexes) based on a triangulation
covering the state-space (see figure 7).
The value function V is approximated by piecewise linear functions V^δ defined
by their values at the vertices {ξ} of the triangulation Σ^δ. The value of V^δ at any
point x inside some simplex (ξ_0, ..., ξ_d) is the linear combination of V^δ at the vertices:

V^δ(x) = Σ_{i=0..d} λ_{ξ_i}(x) V^δ(ξ_i),

with λ_{ξ_i}(x) being the barycentric coordinates of x inside the simplex (ξ_0, ..., ξ_d).
(We recall that the barycentric coordinates are defined by λ_{ξ_i}(x) ≥ 0, Σ_i λ_{ξ_i}(x) = 1 and Σ_i λ_{ξ_i}(x) ξ_i = x.)
Figure 7. Triangulation Σ^δ of the state-space. V^δ(x) is a linear combination of the V^δ(ξ_i), for
i = 0, ..., d, weighted by the barycentric coordinates λ_{ξ_i}(x).
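Barycentric coordinates inside a simplex can be computed by solving a small linear system. The sketch below is illustrative (the simplex is assumed to be given by its d+1 vertices; numpy's linear solver is an implementation choice, not from the paper).

import numpy as np

def barycentric_coords(x, vertices):
    """Coordinates lambda_0..lambda_d of x in the simplex spanned by (d+1) vertices.

    They satisfy sum_i lambda_i = 1 and sum_i lambda_i * vertices[i] = x;
    all entries are >= 0 exactly when x lies inside the simplex.
    """
    V = np.asarray(vertices, dtype=float)          # shape (d+1, d)
    x = np.asarray(x, dtype=float)                 # shape (d,)
    A = np.vstack([V.T, np.ones(len(V))])          # stack the two linear constraints
    b = np.concatenate([x, [1.0]])
    return np.linalg.solve(A, b)

def interpolate_value(x, vertices, values):
    """Piecewise-linear interpolation of V at x from its values at the simplex vertices."""
    lam = barycentric_coords(x, vertices)
    return float(lam @ np.asarray(values, dtype=float))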
By using a finite element approximation scheme derived from (Kushner, 1990),
the continuous HJB equation is approximated by the following equation:

V^δ(ξ) = sup_{u∈U^δ} [ γ^τ(ξ,u) V^δ(η(ξ, u)) + τ(ξ, u) r(ξ, u) ],

where η(ξ, u) is a point inside Σ^δ such that η(ξ, u) = ξ + τ(ξ, u) f(ξ, u). We require that τ(ξ, u)
satisfies the following condition, for some positive constants k_1 and k_2:

k_1 δ ≤ τ(ξ, u) ≤ k_2 δ.    (19)

Remark. It is interesting to notice that this time discretization function τ(ξ, u)
does not need to be constant and can depend on the state ξ and the control u.
This provides us with some freedom in the choice of these parameters, assuming
that equation (19) still holds. For a discussion on the choice of a constant time
discretization function τ according to the space discretization size δ in order to
optimize the precision of the approximations, see (Pareigis, 1997).
Let us denote by (ξ_0, ..., ξ_d) the simplex containing η(ξ, u). As V^δ is linear inside the
simplex, this equation can be written:

V^δ(ξ) = sup_{u∈U^δ} [ γ^τ(ξ,u) Σ_{i=0..d} λ_{ξ_i}(η(ξ, u)) V^δ(ξ_i) + τ(ξ, u) r(ξ, u) ],    (20)

which is a DP equation for a Markov Decision Process whose state-space is the
set of vertices {ξ} and whose probabilities of transition from (state ξ, control u) to
next states ξ_i are the barycentric coordinates λ_{ξ_i}(η(ξ, u)) of η(ξ, u) inside the simplex (ξ_0, ..., ξ_d)
(see figure 8). The boundary states satisfy
the terminal condition :
Figure 8. A finite element approximation. Consider a vertex ξ and the point η(ξ, u); V^δ(η(ξ, u)) is the
linear combination of the V^δ(ξ_i) weighted by the barycentric coordinates
λ_{ξ_i}(η(ξ, u)). Thus, the probabilities of transition of the MDP are these barycentric coordinates.
V^δ(ξ) = R(ξ)  for ξ ∈ ∂Σ^δ.    (21)
By defining F^δ_FE, the finite element scheme,

F^δ_FE[W](ξ) = sup_{u∈U^δ} [ γ^τ(ξ,u) Σ_{i=0..d} λ_{ξ_i}(η(ξ, u)) W(ξ_i) + τ(ξ, u) r(ξ, u) ],    (22)

the approximated value function V^δ satisfies the DP equation

V^δ = F^δ_FE[V^δ].    (23)

Similarly to the FD scheme, F^δ_FE satisfies the following "strong" contraction property:

||F^δ_FE[W_1] − F^δ_FE[W_2]||_∞ ≤ γ^(k_1 δ) ||W_1 − W_2||_∞,    (24)

and since γ^(k_1 δ) < 1, there is a unique solution V^δ to (23) with (21), which can be
computed by DP techniques.
4.4. Convergence of the approximation schemes
In this section, we present a convergence theorem for a general class of approximation
schemes. We use the stability properties of viscosity solutions (described
in (Barles & Souganidis, 1991)) to obtain the convergence. Another kind of convergence
result, using probabilistic considerations, can be found in (Kushner &
Dupuis, 1992), but such results do not treat the problem with the general boundary
condition (9). In fact, the only important required properties for convergence
are monotonicity (property (27)) and consistency (properties (30) and (31) below).
As a corollary, we deduce that the FE and the FD schemes studied in the previous
sections are convergent.
4.4.1. A general convergence theorem. Let Σ^δ and ∂Σ^δ be two discrete and finite
subsets of ℝ^d. We assume that for all x ∈ Ō, lim_{δ↓0} dist(x, Σ^δ) = 0. Let F^δ
be an operator on the space of bounded
functions on Σ^δ. We are concerned with the convergence of the solution V^δ to the
dynamic programming equation:

V^δ(ξ) = F^δ[V^δ](ξ)  for ξ ∈ Σ^δ,    (25)

with the boundary condition:

V^δ(ξ) = R(ξ)  for ξ ∈ ∂Σ^δ.    (26)

We make the following assumptions on F^δ:
• Monotonicity (27): F^δ is a monotonous operator, i.e. W_1 ≤ W_2 implies F^δ[W_1] ≤ F^δ[W_2].
• Constant shift (28): for any δ and any constant c, adding the constant c to the argument of F^δ changes its value by at most a discounted multiple of c.
• Existence (29): for any δ, there exists a solution V^δ to (25) and (26) which is bounded with a constant M_V independent of δ.
• Consistency (30), (31): there exists a constant k > 0 such that the scheme is consistent with the Hamiltonian H in the viscosity sense, i.e. the lim inf (30) and the lim sup (31), as δ ↓ 0 and ξ → x, of the rescaled one-step difference [φ(ξ) − F^δ[φ](ξ)]/(kδ) are controlled by H(x, φ(x), Dφ(x)) for every smooth test function φ.
Remark. Conditions (30) and (31) are satisfied in the particular case where the corresponding two-sided limit exists.
Theorem 5 (Convergence of the scheme) Assume that the hypotheses of theorem 2
are satisfied. Assume that (27), (28), (30) and (31) hold; then F^δ is a
convergent approximation scheme, i.e. the solutions V^δ of (25) and (26) satisfy:

lim_{δ↓0, ξ→x} V^δ(ξ) = V(x),    (32)

uniformly on any compact Ω ⊂ O.
4.4.2. Outline of the proof. We use the procedure described in (Barles & Perthame, 1988).
The idea is to define the largest limit function V_sup = lim sup_{δ↓0, ξ→x} V^δ(ξ)
and the smallest limit function V_inf = lim inf_{δ↓0, ξ→x} V^δ(ξ), and to prove that they are respectively
discontinuous viscosity sub- and super-solutions. This proof, based on the
general convergence theorem of (Barles & Souganidis, 1991), is given in appendix
A. Then we use a comparison result which states that if (9) holds then viscosity
sub-solutions are less than viscosity super-solutions, thus V_sup ≤ V_inf. By definition
V_sup ≥ V_inf, thus V_sup = V_inf and the limit function V is the viscosity solution
of the HJB equation, thus (from theorem 4) the value function of the problem.
4.4.3. FD and FE approximation schemes converge
Corollary 1 The approximation schemes F^δ_FD and F^δ_FE are convergent.
Indeed, for the finite difference scheme, it is obvious that, since the p(ξ' | ξ, u) are
considered as transition probabilities, the approximation scheme F^δ_FD satisfies (27)
and (28). As (17) is a DP equation for some MDP, DP theory ensures that (29) is
true. We can check that the scheme is also consistent: conditions (30) and (31)
hold. Thus F^δ_FD satisfies the hypotheses of theorem 5.
Similarly, for the finite element scheme, from the basic properties of the barycentric
coordinates λ_{ξ_i}(x), the approximation scheme F^δ_FE satisfies (27). From (19),
condition (28) holds. DP theory ensures that (29) is true. The scheme is consistent
and conditions (30) and (31) hold. Thus F^δ_FE satisfies the hypotheses
of theorem 5.
4.5. Summary of the previous results of convergence
For any given discretization step δ, from the "strong" contraction property (18) or
(24), DP theory ensures that the values V^δ_n iterated by some DP algorithm converge
to the value V^δ of the approximation scheme F^δ as n tends to infinity. From the
convergence of the scheme (theorem 5), the V^δ tend to the value function V of the
continuous problem as δ tends to 0 (see figure 9).
Figure 9. The HJB equation is discretized, for some resolution δ, into a DP equation whose
solution is V^δ. The convergence of the scheme ensures that V^δ → V as δ → 0. Thanks to the
"strong" contraction property, the iterated values V^δ_n tend to V^δ as n → ∞.
Remark. Theorem 5 gives a result of convergence on any compact Ω ⊂ O, provided
that the hypothesis (9), required for the continuity of V, is satisfied. However, if this
hypothesis is not satisfied but the value function is continuous, the theorem still
applies. Now, if (9) is not satisfied and the value function is discontinuous in some
area, then we still have the convergence on any compact Ω ⊂ O where the value
function is continuous.
5. Designing convergent reinforcement learning algorithms
In order to solve the DP equation (14) or (20), one can use DP off-line methods
-such as value iteration, policy iteration, modified policy iteration (see (Puterman,
1994)), with synchronous or asynchronous back-ups, or on-line methods -like Real
Time DP (see (Barto, Bradtke, & Singh, 1991), (Bertsekas & Tsitsiklis, 1996)).
For example, by introducing the Q-values Q^δ_n(ξ, u), equation (20) can be solved by
successive back-ups (indexed by n) of states ξ and controls u (in any order, provided
that every state and control is updated regularly) by:

Q^δ_{n+1}(ξ, u) = γ^τ(ξ,u) Σ_{i=0..d} λ_{ξ_i}(η(ξ, u)) V^δ_n(ξ_i) + τ(ξ, u) r(ξ, u),  with V^δ_n(ξ) = sup_{u∈U^δ} Q^δ_n(ξ, u).    (33)

The values V^δ_n of this algorithm converge to the value function V^δ of the discretized
MDP as n → ∞.
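An asynchronous back-up of a single vertex, in the spirit of rule (33), might be organized as follows. This is an illustrative sketch; the model callback (returning holding time, transition list and reinforcement) is an assumption, and it can wrap either the exact data or the learned model of section 5.2.

def rtdp_backup(Q, V, xi, controls, model, gamma):
    """One asynchronous back-up of vertex xi: refresh Q(xi, u) for every u, then V(xi)."""
    for u in controls:
        tau, transitions, reward = model(xi, u)      # transitions: list of (vertex, probability)
        v_next = sum(p * V[nxt] for nxt, p in transitions)
        Q[(xi, u)] = gamma ** tau * v_next + tau * reward
    V[xi] = max(Q[(xi, u)] for u in controls)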
However, in the RL approach, the state dynamics f and the reinforcement functions
are unknown to the learner. Thus, the right side of the iterative rule (33)
is unknown and has to be approximated thanks to the available knowledge. In the
RL terminology, there are two possible approaches for updating the values :
• The model-based approach consists of first learning a model of the state dynamics
and of the reinforcement functions, and then using DP algorithms based on
a rule such as (33) with the approximated model instead of the true values. The
learning (the updating of the estimated Q-values Q^δ_n) is done iteratively during
the simulation of trajectories (on-line learning) or at the end (at the exit
time) of one or several trajectories (off-line or batch learning).
• The model-free approach consists of updating incrementally the estimated values
V^δ_n or Q-values Q^δ_n of the visited states without learning any model.
In what follows, we propose a convergence theorem that applies to a large class
of RL algorithms (model-based or model-free, on-line or off-line, for deterministic
or stochastic dynamics) provided that the updating rule satisfies some "weak"
contraction property with respect to some convergent approximation scheme
such as the FD and FE schemes studied previously.
5.1. Convergence of RL algorithms
The following theorem gives a general condition for which an RL algorithm converges
to the optimal solution of the continuous problem. The idea is that the
updated values (by any model-free or model-based method) must be close enough
to those of a convergent approximation scheme so that their difference satisfies the
"weak" contraction property (34) below.
Theorem 6 (Convergence of RL algorithms) For any δ, let us build finite
subsets Σ^δ and ∂Σ^δ satisfying the properties of section 4.4. We consider an algorithm
that leads to update every state ξ ∈ Σ^δ regularly and every state ξ ∈ ∂Σ^δ at
least once. Let F^δ be a convergent approximation scheme (for example (22)) and
V^δ be the solution to (25) and (26). We assume that the values updated at
iteration n satisfy the following properties:
• for the interior states, the updates are close to those of the scheme F^δ in the sense of the following "weak"
contraction property: for every state ξ updated at stage n,

|V^δ_{n+1}(ξ) − V^δ(ξ)| ≤ (1 − k δ) sup_{ξ'∈Σ^δ} |V^δ_n(ξ') − V^δ(ξ')| + δ e(δ),    (34)

for some positive constant k and some function e(δ) that tends to 0 as δ ↓ 0;
• for the boundary states, the updates approximate the boundary values in the sense:

|V^δ_n(ξ) − R(ξ)| ≤ k_R e(δ)  for every ξ ∈ ∂Σ^δ updated at stage n,    (35)

for some positive constant k_R.
Then for any compact Ω ⊂ O, for all ε > 0, there exists Δ such that for any δ ≤ Δ,
there exists N such that for all n ≥ N,

sup_{ξ∈Σ^δ∩Ω} |V^δ_n(ξ) − V(ξ)| ≤ ε.
This result states that, when the hypotheses of the theorem apply (mainly when
we find some updating rule satisfying the weak contraction property (34)), the
values V^δ_n computed by the algorithm converge to the value function V of the
continuous problem as the discretization step δ tends to zero and the number of
iterations n tends to infinity.
5.1.1. Outline of the proof. The proof of this theorem is given in appendix B. If
condition (34) were a strong contraction property such as

|V^δ_{n+1}(ξ) − V^δ(ξ)| ≤ λ sup_{ξ'∈Σ^δ} |V^δ_n(ξ') − V^δ(ξ')|    (36)

for some constant λ < 1, then the convergence would be obvious: from (25)
and from the fact that all the states are updated regularly, for a fixed δ the values V^δ_n would
converge to V^δ as n → ∞. From the fact (theorem 5) that V^δ converges to V as δ ↓ 0, we
could deduce that V^δ_n tends to V.
If it is not the case, we can no longer expect that V^δ_n converges to V^δ. However,
if (34) holds, we can prove (this is the object of section B.2 in appendix B) that
for any ε > 0 there exist small enough values of δ such that, at some stage N,
sup_ξ |V^δ_n(ξ) − V^δ(ξ)| ≤ ε for all n ≥ N. This result, together with the convergence of the scheme,
leads to the convergence of the algorithm as δ ↓ 0 and n → ∞ (see figure 10).
Figure 10. The values V^δ_n iterated by an RL algorithm do not converge to V^δ as n → ∞. However,
if the "weak" contraction property is satisfied, the V^δ_n tend to V as δ ↓ 0 and n → ∞.
5.1.2. The challenge of designing convergent algorithms. In general the "strong"
contraction property (36) is impossible to obtain unless we have perfect knowledge
of the dynamics f and the reinforcement functions r and R. In the RL approach,
these components are estimated and approximated during some learning phase.
Thus the iterated values V^δ_n are imperfect, but may be "good enough" to satisfy
the weak contraction property (34). Defining such "good" approximations is
the challenge for designing convergent RL algorithms.
In order to illustrate the method, we present in section 5.2 a procedure for designing
model-based algorithms, and in section 5.3, we give a model-free algorithm
based on a FE approximation scheme.
5.2. Model-based algorithms
The basic idea is to build a model of the state dynamics f and of the reinforcement
functions r and R at the states ξ from the local knowledge obtained through the simulation
of trajectories. So, if some trajectory x_n(t) goes inside the neighborhood
of ξ (by defining the neighborhood as an area whose diameter is bounded by k_N·δ
for some positive constant k_N) at some time t_n, and keeps a constant control u for
a period τ_n (from x_n = x(t_n) to y_n = x(t_n + τ_n)), we can build the model of
f(ξ, u) and r(ξ, u):

f̃_n(ξ, u) = (y_n − x_n) / τ_n,   r̃_n(ξ, u) = r(x_n, u)

(see figure 11). Then we can approximate the scheme (22) by the following values
using the previous model: the Q-values Q^δ_n(ξ, u) are updated according to:

Q^δ_{n+1}(ξ, u) = γ^τ(ξ,u) V^δ_n(ξ + τ(ξ, u) f̃_n(ξ, u)) + τ(ξ, u) r̃_n(ξ, u)

(for any function τ(ξ, u) satisfying (19)), which corresponds to the iterative rule
(33) with the model f̃_n and r̃_n instead of f and r.
Figure 11. A trajectory goes through the neighborhood (the grey area) of ξ. The state dynamics
is approximated by f̃_n(ξ, u) = (y_n − x_n)/τ_n.
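The local model can be built from trajectory segments as in the following illustrative sketch; the function name and the interface are assumptions, not prescribed by the paper.

import numpy as np

def segment_model(x_in, y_out, tau, u, r_fn):
    """Local model of the dynamics and reinforcement from one trajectory segment.

    x_in  : state when the trajectory enters the neighborhood of a vertex
    y_out : state tau seconds later (constant control u in between)
    """
    f_est = (np.asarray(y_out) - np.asarray(x_in)) / tau   # f~(xi, u) ~ (y_n - x_n) / tau_n
    r_est = r_fn(x_in, u)                                  # r~(xi, u) ~ r at the input point
    return f_est, r_est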
It is easy to prove (see (Munos & Moore, 1998) or (Munos, 1997a)) that assuming
some smoothness assumptions (r and R Lipschitzian), the approximated values V^δ_n satisfy the
condition (34) and theorem 6 applies.
Remark. Using the same model, we can build a similar convergent RL algorithm
based on the finite difference scheme (22) (see (Munos, 1998)). Thus, it appears
quite easy to design model-based algorithms satisfying the condition (34).
Remark. This method can also be used in the stochastic case, for which a model
of the state dynamics is the average, for several trajectories, of such (y_n − x_n)/τ_n, and a
model of the noise is their variance (see (Munos & Bourgine, 1997)).
Furthermore, it is possible to design model-free algorithms satisfying the condition
(34), which is the topic of the following section.
5.3. A Model-free algorithm
The Finite Element RL algorithm. Consider a triangulation \Sigma ffi satisfying the
properties of section 4.3. The direct RL approach consists of updating on-line the
Q-values of the vertices without learning any model of the dynamics.
We consider the FE scheme (22) with τ(ξ, u) being such that η(ξ, u) = ξ +
τ(ξ, u)·f(ξ, u) is the projection of ξ onto the opposite side of the simplex, in a
direction parallel to f(ξ, u) (see figure 12). If we suppose that the simplexes are
non-degenerate (i.e. there exists k_ρ > 0 such that the radius of the sphere inscribed in each
simplex is superior to k_ρ δ), then (19) holds.
Let us consider a trajectory x(t) that goes through a simplex. Let x be
the input point, y be the output point and τ be the time spent inside the simplex. The control u is assumed to be
constant inside the simplex.
Figure 12. A trajectory going through a simplex. η(ξ, u) is the projection of ξ onto the opposite
side of the simplex, and (y − x)/λ_ξ(x) is a good approximation of η(ξ, u) − ξ.
As the values τ(ξ, u) and η(ξ, u) are unknown to the system, we make the following
estimations (from Thales' theorem):
• τ(ξ, u) is approximated by τ / λ_ξ(x) (where λ_ξ(x) is the ξ-barycentric coordinate
of x inside the simplex);
• η(ξ, u) is approximated by ξ + (y − x) / λ_ξ(x);
which only use the knowledge of the state at the input and output points (x and
y), the running time τ of the trajectory inside the simplex and the barycentric
coordinate λ_ξ(x) (which can be computed as soon as the system knows the vertices
of the input side of the simplex). Besides, r(ξ, u) is approximated by the current
reinforcement r(x, u) at the input point.
Thanks to the linearity of V^δ inside the simplex, V^δ(η(ξ, u)) is approximated by
V^δ_n(ξ) + [V^δ_n(y) − V^δ_n(x)] / λ_ξ(x). Thus the algorithm consists in updating the quality Q^δ_n(ξ, u)
with the estimation:

Q^δ_{n+1}(ξ, u) = γ^(τ/λ_ξ(x)) { V^δ_n(ξ) + [V^δ_n(y) − V^δ_n(x)] / λ_ξ(x) } + (τ / λ_ξ(x)) r(x, u),    (37)

and if the system exits from the state-space inside the simplex (i.e. y ∈ ∂O), then
update the closest vertex ξ_0 of the simplex with V^δ_{n+1}(ξ_0) = R(y).
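The resulting back-up can be sketched as follows; this is illustrative Python, and the parameter names (in particular passing the interpolated values of V at the input and output points) are assumptions consistent with the estimates above rather than the paper's own code.

def ferl_update(Q, V, xi, u, v_x, v_y, tau, lam_xi, r_x, gamma):
    """One model-free FE-RL back-up for vertex xi and control u.

    v_x, v_y : interpolated values of V_n at the input/output points of the trajectory segment
    tau      : time spent inside the simplex; lam_xi: barycentric coordinate of the input point w.r.t. xi
    r_x      : current reinforcement observed at the input point
    """
    tau_est = tau / lam_xi                                   # estimate of tau(xi, u)
    v_eta = V[xi] + (v_y - v_x) / lam_xi                     # estimate of V(eta(xi, u)) by linearity
    Q[(xi, u)] = gamma ** tau_est * v_eta + tau_est * r_x    # update in the spirit of rule (37)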
By assuming some additional regularity assumptions (r and R Lipschitzian, f
bounded from below), the values V^δ_n satisfy the conditions (34) and (35), which proves the
convergence of the model-free RL algorithm based on the FE scheme (see (Munos,
1996) for the proof).
In a similar way, we can design a direct RL algorithm based on the finite difference
scheme F^δ_FD (16) and prove its convergence (see (Munos, 1997b)).
6. A numerical simulation for the "Car on the Hill" problem
For a description of the dynamics of this problem, see (Moore & Atkeson, 1995).
This problem has a state-space of dimension 2 : the position and the velocity of
the car. In our experiments, we chose the reinforcement functions as follows : the
current reinforcement r(x; u) is zero everywhere. The terminal reinforcement R(x)
is −1 if the car exits from the left side of the state-space, and varies linearly between
+1 and −1, depending on the velocity of the car, when it exits from the right side of
the state-space. The best reinforcement +1 occurs when the car reaches the right
boundary with a null velocity (see figure 13). The control u has only 2 possible values:
positive or negative thrust.
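The terminal reinforcement just described can be written down directly. The sketch below is illustrative; the boundary positions and the maximal velocity bound v_max are parameters of the problem that are not specified in this text.

def terminal_reinforcement(position, velocity, x_left, x_right, v_max):
    """R(x) for the Car on the Hill: -1 on the left boundary, linear in |velocity| on the right one."""
    if position <= x_left:
        return -1.0
    if position >= x_right:
        # +1 for null velocity, decreasing linearly to -1 at maximal velocity
        return 1.0 - 2.0 * min(abs(velocity), v_max) / v_max
    return 0.0      # not a terminal state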
Figure 13. The "Car on the Hill" problem.
In order to approximate the value function, we used 3 different triangulations
T_1, T_2 and T_3, composed respectively of 9 by 9, 17 by 17 and 33 by 33 states (see
figure 14), and, for each of these, we ran the two algorithms that follow:
• An asynchronous Real Time DP algorithm (based on the updating rule (33)), assuming
that we have a perfect model of the initial data (the state dynamics and the
reinforcement functions).
• An asynchronous Finite Element RL algorithm, described in section 5.3 (based
on the updating rule (37)), for which the initial data are approximated by parts
of trajectories selected at random.
Figure 14. The three triangulations used for the simulations.
In order to evaluate the quality of approximation of these methods, we also computed
a very good approximation of the value function V (plotted in figure 15) by
using DP (with rule (33)) on a very dense triangulation (of 257 by 257 states) with
a perfect model of the initial data.
Figure 15. The value function of the "Car on the Hill", computed with a triangulation composed of 257 by 257 states.
We have computed the approximation error E_n(T_k) = sup_{ξ∈Σ^δ_k∩Ω} |V^δ_k_n(ξ) − V(ξ)|, with δ_k
being the discretization step of triangulation T_k. For this problem, we notice
that hypothesis (9) does not hold (because all the trajectories are tangential to
the boundary of the state-space at the boundary states of zero velocity), and the
value function is discontinuous. A frontier of discontinuity happens because a point
beginning just above this frontier can eventually get a positive reward whereas any
point below is doomed to exit on the left side of the state-space. Thus, following
the remark in section 4.5, in order to compute E_n(T_k), we chose Ω to be the whole
state-space except some area around the discontinuity.
Figures 16 and 17 represent, respectively for the 2 algorithms, the approximation
error E_n(T_k) (for the 3 triangulations T_1, T_2, T_3) as a function of the number
of iterations n. We observe the following points:

Figure 16. The approximation error E_n(T_k) of the values computed by the asynchronous Real
Time DP algorithm as a function of the number of iterations n for several triangulations.
Figure 17. The approximation error E_n(T_k) of the values computed by the asynchronous Finite
Element RL algorithm.
• Whatever the resolution δ of the discretization is, the values V^δ_n computed by
RTDP converge as n increases. Their limit is V^δ, the solution of the DP equation
(20). Moreover, we observe the convergence of the V^δ to the value function
V as the resolution δ tends to zero. These results illustrate the convergence
properties shown in figure 9.
• For a given triangulation, the values V^δ_n computed by FERL do not converge.
For T_1 (coarse discretization), the error of approximation decreases rapidly, and
then oscillates within a large range. For T_2, the error decreases more slowly
(because there are more states to be updated) but then oscillates within a
smaller range. And for T_3 (dense discretization), the error decreases still more
slowly but eventually gets close to zero (while still oscillating). Thus, we observe
that, as illustrated in figure 10, for any given discretization step δ, the values do
not converge. However, they oscillate within a range depending on δ. Theorem
6 simply states that for any desired precision (∀ε), there exists a discretization
step δ such that eventually (∃N, ∀n > N), the values will approximate the value
function at that precision (sup |V^δ_n − V| ≤ ε).
7. Conclusion and future work
This paper proposes a formalism for the study of RL in the continuous state-space
and time case. The Hamilton-Jacobi-Bellman equation is stated and several properties
of its solutions are described. The notion of viscosity solution is introduced
and used to integrate the HJB equation for finding the value function. We describe
discretization methods (by using finite element and finite difference schemes) for
approximating the value function, and use the stability properties of the viscosity
solutions to prove their convergence.
Then, we propose a general method for designing convergent (model-based or
model-free) RL algorithms and illustrate it with several examples. The convergence
result is obtained by substituting the "strong" contraction property used to prove
the convergence of DP method (which cannot hold any more when the initial data
are not perfectly known) by some "weak" contraction property, that enables some
approximations of these data. The main theorem states a convergence result for
RL algorithms as the discretization step ffi tends to 0 and the number of iterations
n tends to infinity.
For practical applications of this method, we must combine with the learning dynamics
(n → ∞) some structural dynamics which operates on the discretization
process. For example, in (Munos, 1997c), an initial rough Delaunay triangulation
is progressively refined (by adding new vertices) according to a local criterion
estimating the irregularities of the value function. In (Munos & Moore, 1999),
a Kuhn triangulation embedded in a kd-tree is adaptively refined by a non-local
splitting criterion that allows the cells to take into account their impact on other
cells when deciding whether to split.
Future theoretical work should consider the study of approximation schemes (and
the design of algorithms based on these scheme) for adaptive and variable resolution
discretizations (like the adaptive discretizations of (Munos & Moore, 1999;
Munos, 1997c), the parti-game algorithm of (Moore & Atkeson, 1995), the multi-
grid methods of (Akian, 1990) and (Pareigis, 1996), or the sparse grids of (Griebel,
1998)), the study of the rates of convergence of these algorithms (which already
exists in some cases, see (Dupuis & James, 1998)), and the study of generalized
control problems (with "jumps", generalized boundary conditions, etc.).
To adequately address practical issues, extensive numerical simulations (and comparison
to other methods) have to be conducted, and in order to deal with high dimensional
state-spaces, future work should concentrate on designing relevant structural
dynamics and condensed function representations.
Acknowledgments
This work has been funded by DASSAULT-AVIATION, France, and has been
carried out at the Laboratory of Engineering for Complex Systems (LISC) of the
CEMAGREF, France. I would like to thank Paul Bourgine, Martine Naillon, Guillaume
Guy Barles and Andrew Moore.
I am also very grateful to my parents, my mentor Daisaku Ikeda and my best
friend Guylène.
Appendix A
Proof of theorem 5
A.1. Outline of the proof
We use the Barles and Perthame procedure in (Barles & Perthame, 1988). First
we give a definition of discontinuous viscosity solutions. Then we define the largest
limit function V sup and the smallest limit function V inf and prove (following (Barles
& Souganidis, 1991)), in lemma (1), that V sup (respectively V inf ) is a discontinuous
viscosity sub-solution (resp. super-solution). Then we use a strong comparison
result (lemma 2) which states that if (9) holds then viscosity sub-solutions are less
than viscosity super-solutions, thus V_sup ≤ V_inf. By definition V_sup ≥ V_inf, thus
V_sup = V_inf and the limit function V is the viscosity solution of the HJB
equation, and thus the value function of the problem.
A.2. Definition of discontinuous viscosity solutions
Let us recall the notions of the upper semi-continuous envelope W* and the lower
semi-continuous envelope W_* of a real-valued function W:

W*(x) = lim sup_{y→x} W(y),   W_*(x) = lim inf_{y→x} W(y).

Definition 5. Let W be a locally bounded real-valued function defined on Ō.
• W is a viscosity sub-solution of H(x, W, DW) = 0 in O if for all functions
φ ∈ C¹(O) and all x ∈ O local maximum of W* − φ such that W*(x) = φ(x),
we have H(x, φ(x), Dφ(x)) ≥ 0.
• W is a viscosity super-solution of H(x, W, DW) = 0 in O if for all functions
φ ∈ C¹(O) and all x ∈ O local minimum of W_* − φ such that W_*(x) = φ(x),
we have H(x, φ(x), Dφ(x)) ≤ 0.
• W is a viscosity solution of H(x, W, DW) = 0 in O if it is a viscosity sub-solution
and a viscosity super-solution of H(x, W, DW) = 0 in O.
A.3. V sup and V inf are viscosity sub- and super-solutions
Lemma 1 The two limit functions V_sup and V_inf
are respectively viscosity sub- and super-solutions.
Proof: Let us prove that V_sup is a sub-solution; the proof that V_inf is a super-solution
is similar. Let φ be a smooth test function such that V_sup − φ has a
maximum (which can be assumed to be strict) at x such that V_sup(x) = φ(x). Let
δ_n be a sequence converging to zero. Then V^δ_n − φ has a maximum at ξ_n which
tends to x as δ_n tends to 0. Thus, for all ξ ∈ Σ^δ_n, V^δ_n(ξ) − φ(ξ) ≤ V^δ_n(ξ_n) − φ(ξ_n).
By the monotonicity (27), and then by the constant-shift property (28), applied to the DP equation (25) at ξ_n,
we obtain an inequality whose left side tends to 0 as δ_n tends to 0.
Thus, by the consistency condition (31), we obtain the sub-solution inequality for φ at x.
Thus V_sup is a viscosity sub-solution.
A.4. Comparison principle between viscosity sub- and super-solutions
Lemma 2 Assume (9); then (7) and (6) have a weak comparison principle, i.e. for
any viscosity sub-solution W_1 and super-solution W_2 of (7) and (6), for all x ∈ Ō
we have W_1(x) ≤ W_2(x).
For a proof of this comparison result between viscosity sub- and super-solutions
see (Barles, 1994), (Barles & Perthame, 1988), (Barles & Perthame, 1990) or for
slightly different hypothesis (Fleming & Soner, 1993).
A.5. Proof of theorem 5
Proof: From lemma 1, the largest limit function V sup and the smallest limit
function V inf are respectively viscosity sub-solution and super-solution of the HJB
equation. From the comparison result of lemma 2, V_sup ≤ V_inf. But by their
definition V_sup ≥ V_inf; thus V_sup = V_inf and the approximation scheme V^δ
converges to the limit function V, which is the viscosity solution of the HJB equation,
thus the value function of the problem, and (32) holds true.
Appendix B
Proof of theorem 6
B.1. Outline of the proof
We know from the convergence of the scheme V^δ (theorem 5) that, for any compact
Ω ⊂ O and any ε_1 > 0, there exists a discretization step δ such that:

sup_{ξ∈Σ^δ∩Ω} |V^δ(ξ) − V(ξ)| ≤ ε_1.

Let us define:

E^δ_n = sup_{ξ∈Σ^δ} |V^δ_n(ξ) − V^δ(ξ)|.

As we have seen in section 5.1.1, if we had the strong contraction property (36),
then for any δ, E^δ_n would converge to 0 as n → ∞. As we only have the weak
contraction property (34), the idea of the following proof is that for any ε_2 > 0, there exist δ and a stage N
such that for n ≥ N, E^δ_n ≤ ε_2.
Then we deduce that for any ε > 0, we can find δ and N such that for n ≥ N,

sup_{ξ∈Σ^δ∩Ω} |V^δ_n(ξ) − V(ξ)| ≤ ε.
B.2. A sufficient condition for the convergence of E^δ_n
Lemma 3 Let us suppose that there exists some constant α > 0 such that, for any
state ξ updated at stage n, the conditions (B.2) and (B.3) hold;
then there exists N such that for n ≥ N, E^δ_n ≤ ε_2.
Proof: As the algorithm updates every state ξ ∈ Σ^δ regularly, there exists an
integer m such that at stage n + m all the states ξ ∈ Σ^δ have been updated at least
once since stage n. Thus, from (B.2) and (B.3), we deduce (B.4):
there exists N_1 such that for all n ≥ N_1 the error over the interior states is bounded as required.
Moreover, all states ξ ∈ ∂Σ^δ are updated at least once, thus there exists N_2 such
that (B.5) holds for any δ small enough.
Thus, from (B.4) and (B.5), for n ≥ max(N_1, N_2), E^δ_n ≤ ε_2.
Lemma 4 For any ε_1 > 0, there exists Δ_2 such that for δ ≤ Δ_2 the conditions
(B.2) and (B.3) are satisfied.
Proof: From the convergence of e(δ) to 0 when δ ↓ 0, there exists Δ_1 such that e(δ) is small enough for δ ≤ Δ_1.
Let us prove that (B.2) and (B.3) hold. Consider E^δ_n;
then from (34) and from (B.6), condition (B.2) holds for δ small enough, and,
again from (34), condition (B.3) holds.
B.3. Convergence of the algorithm
Proof: Let us prove theorem 6. For any compact subset of O and for all ε > 0, choose ε1 and ε2 with ε1 + ε2 = ε. From lemma 4, for δ small enough, conditions (B.2) and (B.3) are satisfied, and from lemma 3, there exists N such that for all n ≥ N, E^δ_n ≤ ε2. Moreover, from the convergence of the approximation scheme, theorem 5 implies that for any compact subset of O there exists Δ2 such that for all δ ≤ Δ2,
sup |V^δ − V| ≤ ε1.
Thus for finite discretized state-spaces Σ^δ and ∂Σ^δ satisfying the properties of section 4.4, there exists N such that for all n ≥ N,
sup |V^δ_n − V| ≤ ε.
--R
Méthodes multigrilles en contrôle stochastique.
Solutions de viscosité des équations de Hamilton-Jacobi, Vol. 17 of Mathématiques et Applications.
time problems in optimal control and vanishing viscosity solutions of Hamilton-Jacobi equations.
Comparison principle for Dirichlet-type Hamilton-Jacobi equations and singular perturbations of degenerated elliptic equations.
Convergence of approximation schemes for fully nonlinear second order equations.
Neural networks for control.
Neuronlike adaptive elements that can solve difficult learning control problems.
Dynamic Programming.
A simplification of the back-propagation-through-time algorithm for optimal neurocontrol
Dynamic Programming
Generalization in reinforcement learning
User's guide to viscosity solutions of second order partial differential equations.
Viscosity solutions of Hamilton-Jacobi equations.
Rates of convergence for approximation schemes in optimal control.
Controlled Markov Processes and Viscosity Solutions.
Fuzzy Q-learning.
Stable function approximation in dynamic programming.
Adaptive sparse grid multilevel methods for elliptic pdes based on finite differences.
Reinforcement Learning and its application to control.
Reinforcement learning applied to a differential game.
Reinforcement learning: a survey.
Numerical methods for stochastic control problems in continuous time.
Reinforcement Learning for Robots using Neural Networks.
Automatic programming of behavior-based robots using reinforcement learning
Le dilemme Exploration/Exploitation dans les systèmes d'apprentissage par renforcement.
Variable resolution dynamic programming: Efficiently learning action maps in multivariate real-valued state-spaces
The parti-game algorithm for variable resolution reinforcement learning in multidimensional state space
A general convergence theorem for reinforcement learning in the continuous case.
Gradient descent approaches to neural- net-based solutions of the hamilton-jacobi-bellman equation
Reinforcement learning for continuous stochastic control problems.
Barycentric interpolators for continuous space and time reinforcement learning.
Variable resolution discretization for high-accuracy solutions of optimal control problems
Fuzzy reinforcement learning.
Adaptive choice of grid and time in reinforcement learning.
Neural Information Processing Systems.
The Mathematical Theory of Optimal Processes.
Markov Decision Processes
Reinforcement learning with soft state aggregation.
Online learning with random representations.
International Conference on Machine Learning.
Simple statistical gradient-following algorithms for connectionist reinforcement learning
--TR
Dynamic programming: deterministic and stochastic models
Numerical methods for stochastic control problems in continuous time
Connectionist learning for control
Automatic programming of behavior-based robots using reinforcement learning
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning
Reinforcement learning and its application to control
Numerical methods for stochastic control problems in continuous time
Reinforcement learning for robots using neural networks
The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-spaces
Rates of Convergence for Approximation Schemes in Optimal Control
Reinforcement learning for continuous stochastic control problems
Adaptive choice of grid and time in reinforcement learning
Barycentric interpolators for continuous space & time reinforcement learning
Markov Decision Processes
Neuro-Dynamic Programming
Finite-Element Methods with Local Triangulation Refinement for Continuous Reimforcement Learning Problems
A General Convergence Method for Reinforcement Learning in the Continuous Case
Variable Resolution Discretization for High-Accuracy Solutions of Optimal Control Problems
Dynamic Programming
--CTR
Rémi Munos , Andrew Moore, Variable Resolution Discretization in Optimal Control, Machine Learning, v.49 n.2-3, p.291-323, November-December 2002 | optimal control;reinforcement learning;viscosity solutions;finite difference and finite element methods;dynamic programming;Hamilton-Jacobi-Bellman equation
352748 | Robust Plane Sweep for Intersecting Segments. | In this paper, we reexamine in the framework of robust computation the Bentley--Ottmann algorithm for reporting intersecting pairs of segments in the plane. This algorithm has been reported as being very sensitive to numerical errors. Indeed, a simple analysis reveals that it involves predicates of degree 5, presumably never evaluated exactly in most implementations. Within the exact-computation paradigm we introduce two models of computation aimed at replacing the conventional model of real-number arithmetic. The first model (predicate arithmetic) assumes the exact evaluation of the signs of algebraic expressions of some degree, and the second model (exact arithmetic) assumes the exact computation of the value of such (bounded-degree) expressions. We identify the characteristic geometric property enabling the correct report of all intersections by plane sweeps. Verification of this property involves only predicates of (optimal) degree 2, but its straightforward implementation appears highly inefficient. We then present algorithmic variants that have low degree under these models and achieve the same performance as the original Bentley--Ottmann algorithm. The technique is applicable to a more general case of curved segments. | Introduction
As is well known, Computational Geometry has traditionally adopted the arithmetic model of exact computation over the real numbers. This model has been extremely productive in terms of algorithmic research, since it has permitted a vast community to focus on the elucidation of the combinatorial (topological) properties of geometric problems, thereby leading to sophisticated and efficient algorithms. Such an approach, however, has a substantial shortcoming, since all computer calculations have finite precision, a feature which affects not only the quality of the results but even the validity of specific algorithms. In other words, in this model algorithm correctness does not automatically translate into program correctness. In fact, there are several reports of failures of implementations of theoretically correct algorithms (see e.g. [For87, Hof89]). This state of affairs has engendered a vigorous debate within the research community, as is amply documented in the literature. Several proposals have been made to remedy this unsatisfactory situation. They can be split into two broad categories according to whether they perform exact computations or approximate computations (see, e.g., [Mil88, HHK89, Mil89]).
This paper fine-tunes the exact-computation paradigm. The numerical computations of a geometric algorithm are basically of two types: tests (predicates) and constructions, with clearly distinct roles. Tests are associated with branching decisions in the algorithm that determine the flow of control, whereas constructions are needed to produce the output data. While approximations in the execution of constructions are often acceptable, approximations in the execution of tests may produce incorrect branching, leading to the inconsistencies which are the object of the criticisms leveled against geometric algorithms. The exact-computation paradigm therefore requires that tests be executed with total accuracy. This will guarantee that the result of a geometric algorithm will be topologically correct albeit geometrically approximate. This also means that robustness is in principle achievable if one is willing to employ the required precision. The reported failures of structurally correct algorithms are entirely attributable to non-compliance with this criterion. Therefore, geometric algorithms can also be characterized on the basis of the complexity of their predicates. The complexity of a predicate is expressed by the degree of a homogeneous polynomial embodying its evaluation. The degree of an algorithm is the maximum degree of its predicates, and an algorithm is robust if the adopted precision matches the degree requirements.
The "degree criterion" is a design principle aimed at developing low-degree algorithms. This approach involves re-examining under the degree criterion the rich body of geometric algorithms known today, possibly without negatively affecting traditional algorithmic efficiency. A previous paper [LPT96] considered as an illustration of this approach the issue of proximity queries in two and three dimensions. As an additional case of degree-driven algorithm design, in this paper we confront another class of important geometric problems, which have caused considerable difficulties in actual implementations: plane-sweep problems for sets of segments. As we shall see, plane-sweep applications involve a number of predicates of different degree and algorithmic power. Their analysis will lead not only to new and robust implementations (an outcome of substantial practical interest) but elucidate on a theoretical level some deeper issues pertaining to the structure of several related problems and the mechanism of plane-sweeps.
2 Plane sweep of intersecting segments
Given is a finite set S of line segments in the plane. Each segment is defined by the coordinates of its two endpoints. We discuss the three following problems (see Figure 1):
Pb1: report the pairs of segments of S that intersect.
Pb2: construct the arrangement A of S, i.e., the incidence structure of the graph obtained by interpreting the union of the segments as a planar graph.
Pb3: construct the trapezoidal map T of S. T is obtained by drawing two vertical line segments (walls), one above and one below each endpoint of the segments and each intersection point. The walls are extended either until they meet another segment of S or to infinity.
Let S 1 , . . . , S n be the segments of S and let k be the number of intersecting pairs. We say that the segments are in general position if any two intersecting segments intersect in a single point, and all endpoints and intersection points are distinct.
Figure 1: S, A and T.
The number of intersection points is no more than the number of intersecting pairs of segments, and both are equal if the segments are in general position. Therefore, the number of vertices of A is at most k, the number of edges of A is at most n + 2k, and the number of vertical walls in T is at most 2(n + k), the bounds being tight when the segments are in general position. Thus the sizes of both A and T are O(n + k). We do not consider here the 2-dimensional faces of either A or T; including them would not change the problems we address.
3 Algebraic degree and arithmetic models
It is well known that the efficient algorithms that solve Pb1-Pb3 are very unstable when implemented as programs, and several frustrating experiences have been reported [For85]. This motivates us to carefully analyze the predicates involved in those algorithms. We first introduce here some terminology borrowed from [LPT96]. We consider each input data (i.e., coordinates of an endpoint of some segment of S) as a variable.
An elementary predicate is the sign −, 0, or + of a homogeneous multivariate polynomial whose arguments are a subset of the input variables. The degree of an elementary predicate is defined as the maximum degree of the irreducible factors (over the rationals) of the polynomials that occur in the predicate and that do not have a constant sign. A predicate is more generally a boolean function of elementary predicates. Its degree is the maximum degree of its elementary predicates.
The degree of an algorithm A is defined as the maximum degree of its predicates. The degree of a problem P is defined as the minimum degree of any algorithm that solves P.
In most problems in Computational Geometry, d is a small constant. However, as d affects the speed and/or robustness of an algorithm, it is important to measure d precisely. In the rest of this paper we consider the degree as an additional measure of algorithmic complexity. Note that qualitatively degree and memory requirement are similar, since the arithmetic capabilities demanded by a given degree must be available, albeit they may be never resorted to in an actual run of the algorithm (since the input may be such that predicates may be evaluated reliably with lower precision).
We will consider two arithmetic models. In the first one, called the predicate arithmetic of degree d, the only numerical operations that are allowed are the evaluations of predicates of degree at most d. Algorithms of degree d can therefore be implemented exactly in the predicate arithmetic model of degree d. This model is motivated by recent results that show that evaluating the sign of a polynomial expression may be faster than computing its value (see [ABD...]). This model is however very conservative, since the non-availability of the arithmetic required by a predicate is assimilated to an entirely random choice of the value of the predicate.
The second model, called the exact arithmetic of degree d, is more demanding. It assumes that values (and not just signs) of polynomials of degree at most d be represented and computed exactly (i.e., roughly as d-fold precision integers). However, higher-degree operations (e.g., a multiplication one of whose factors is a d-fold precision integer) are appropriately rounded. Typical rounding is rounding to the nearest representable number, but less accurate rounding can also be adequate, as will be demonstrated later. Let A be an algorithm of degree d. If each input data is a b-bit integer, the size of each monomial occurring in a predicate of A is upper bounded by 2^((b+1)d). Moreover, let v be the number of variables that occur in a predicate; for most geometric problems and, in particular, for those considered in this paper, v is a small constant. It follows that an algorithm of degree d requires precision (b + 1)d + O(log v) in the exact arithmetic model of degree d.
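As a rough illustration of this precision bound (the function and the numbers below are our own example, not taken from the paper), the following Python sketch evaluates it and checks whether a predicate fits within the 53 bits of exactly representable integers offered by IEEE double precision.

import math

def required_bits(b, d, terms):
    # Each monomial of a degree-d predicate on b-bit integer inputs is bounded
    # by 2**((b + 1) * d); summing `terms` monomials adds about log2(terms) bits.
    return (b + 1) * d + math.ceil(math.log2(terms))

# Example: a degree-2 predicate with two monomials on 24-bit coordinates.
bits = required_bits(24, 2, 2)
print(bits, "bits needed;", "fits in a double" if bits <= 53 else "needs extended precision")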
4 The predicates for Pb1-Pb3
We use the following notations. The coordinates of point A i are denoted x i and y i . A i < x A j means that the x-coordinate of point A i is smaller than the x-coordinate of point A j . Similarly for < y . [A i A j ] denotes the line segment whose left and right endpoints are respectively A i and A j , while (A i A j ) denotes the line containing [A i A j ]. A i < (A j A k ) means that point A i lies below line (A j A k ).
4.1 Predicates
Pb1 only requires that we check if two line segments intersect (Predicate 2 0 below).
Pb2 requires in addition the ability to sort intersection points along a line segment
(Predicate 4 below).
Pb3 requires the ability to execute all the predicates listed below :
Two other predicates appear in some algorithms that report segment intersections :
4.2 Algebraic degree of the predicates
We now analyze the algebraic degree of the predicates introduced above.
Proposition 1 The degree of Predicates i and i 0 is i.
Proof. We first provide explicit formulae for the predicates. Evaluating Predicate 2 is equivalent to evaluating the sign of an orientation determinant orient(A i , A j , A k ) of three input points. Predicate 2 0 can be implemented as follows for the case A 0 < x A 2 (otherwise we exchange the roles of [A 0 A 1 ] and [A 2 A 3 ]):
if
else return false
else
if
else return false
Therefore, in all cases, Predicate 2 0 reduces to Predicate 2.
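To make the reduction concrete, the following Python sketch (our own illustration, not the authors' code; the function names and the treatment of collinear cases are our assumptions) implements Predicate 2 as an orientation sign and Predicate 2 0 as a segment-intersection test built only from such signs.

def orient(a, b, c):
    # Sign of the 2x2 determinant |b-a, c-a|: +1 if c lies to the left of the
    # oriented line (a, b), -1 if to the right, 0 if the three points are collinear.
    d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d > 0) - (d < 0)

def on_segment(a, b, c):
    # Assumes a, b, c collinear; tests whether c lies on the closed segment [a, b].
    return min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and \
           min(a[1], b[1]) <= c[1] <= max(a[1], b[1])

def segments_intersect(a0, a1, a2, a3):
    # Predicate 2': do [a0, a1] and [a2, a3] share at least one point?
    o1, o2 = orient(a0, a1, a2), orient(a0, a1, a3)
    o3, o4 = orient(a2, a3, a0), orient(a2, a3, a1)
    if o1 != o2 and o3 != o4:              # the segments straddle each other
        return True
    # Degenerate (collinear) configurations.
    if o1 == 0 and on_segment(a0, a1, a2): return True
    if o2 == 0 and on_segment(a0, a1, a3): return True
    if o3 == 0 and on_segment(a2, a3, a0): return True
    if o4 == 0 and on_segment(a2, a3, a1): return True
    return False

With integer inputs, each product in orient fits in roughly 2(b + 1) bits, which is why this test only requires degree-2 arithmetic.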
The intersection point I
l ] is given by :
I
I
(1)
with
Predicate 4 reduces to evaluating orient(I; A 0 ; A 1 ) where I is the intersection of
It follows from (1) that this is equivalent to evaluating the sign
of
Explicit formulas for Predicates 3, 4 and 5 can be immediately deduced from the
coordinates of the intersection points I
which are given by (1). If A 4 A is clear from Equation (1) that
is a common factor of x I
If the two segments do not intersect, Predicate 3 0 reduces to Predicate 2. Otherwise, it reduces to Predicate 3.
The above discussion shows that the degree of Predicates i and i 0 is at most i. To establish that it is exactly i, we have shown in the Appendix that the polynomials of Predicates 2, 3, 4 0 and 5, as well as the relevant factor involved in Predicate 4, are irreducible over the rationals. It follows that the proposition is proved for all predicates.
Recalling the requirements of the various problems in terms of predicates, we have:
Proposition 2 The algebraic degrees of Pb1, Pb2 and Pb3 are respectively 2, 4 and
5.
4.3 Implementation of Predicate 3 with exact arithmetic of
degree 2
As it will be useful in the sequel, we show how to implement Predicate 3 (of degree 3) under the exact arithmetic of degree 2. From Equation (1) we know that Predicate 3 can be written as a comparison of two products of degree-2 quantities (Inequality (2) below); for convenience, let x 01 , x 21 , A and B denote these quantities.
We stipulate to employ floating point arithmetic conforming to the IEEE 754 standard [Gol91]. In this standard, simple precision allows us to represent b-bit integers with b ≤ 24, while double precision allows us to represent b-bit integers with b ≤ 53. The coordinates of the endpoints of the segments are represented in simple precision and the computations are carried out in double precision. We denote by ⊕, ⊗ and ⊘ the rounded arithmetic operations +, × and /. In the IEEE 754 standard, all four arithmetic operations are exactly rounded, i.e., the computed result is the floating point number that best approximates the exact result.
Since x 01 , x 21 , A and B are polynomials of degree 2, the four terms x 01 , x 21 , A and B in Inequality (2) can be computed exactly, and the following monotonicity property is a direct consequence of the exact rounding of arithmetic operations.
Monotonicity property: x 01 ⊗ B < x 21 ⊗ A =⇒ x 01 × B < x 21 × A.
This implies that the comparison between the two computed expressions x 01 ⊗ B and x 21 ⊗ A evaluates Predicate 3 except when these numbers are equal.
In most algorithms, an intersection point is compared with many endpoints. It is therefore more efficient to compute and store the coordinates of each intersection point and to perform comparisons with the computed abscissae rather than evaluating (2) repeatedly. We now illustrate an effective rounding procedure for the x-coordinates of intersection points.
Lemma 3 If the coordinates of the endpoints of the segments are simple precision integers, then the abscissa x I of an intersection point can be rounded to one of its two nearest simple precision integers using only double precision floating point arithmetic operations.
Proof. We assume that the coordinates of the endpoints of the segments are represented as b-bit integers stored as simple precision floating point numbers. The computations are carried out in double precision.
The rounded value ~x I of x I is given by:
~
21\Omega
where ⌊X⌉ denotes the integer nearest to X (with any tie-breaking rule). If
is a strict bound to the modulus of the relative error of all arithmetic operations,
~
21\Omega the following relations :
As x21 A
21\Omega
We round ~X to the nearest integer ⌊~X⌉. Since ⌊~X⌉ and x 1 are (b + 1)-bit integers, there is no error in the addition. Therefore, ~x I is a (b + 2)-bit integer and the absolute error on ~x I is smaller than 1. 2
It follows that, under the hypothesis of the lemma, if E is an endpoint, I is an intersection point and ~I the corresponding rounded point, the following monotonicity property holds:
Monotonicity property: if E < x ~I then E < x I, and if ~I < x E then I < x E.
Notice that the monotonicity property does not necessarily hold for two intersection points.
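The following Python sketch (our own illustration; the formula for x I and the variable names are assumptions based on Equation (1), and coordinates are assumed to be small integers) conveys the spirit of Lemma 3: the abscissa of an intersection point is computed with ordinary double-precision arithmetic, rounded to a nearby integer, and strict comparisons against integer endpoint abscissae are then safe.

def rounded_intersection_x(a0, a1, a2, a3):
    # Double-precision estimate of the abscissa of the intersection of the
    # supporting lines of [a0 a1] and [a2 a3]; assumes integer coordinates of
    # at most 24 bits and non-parallel lines (denom != 0).
    denom = (a1[0] - a0[0]) * (a3[1] - a2[1]) - (a1[1] - a0[1]) * (a3[0] - a2[0])
    t_num = (a2[0] - a0[0]) * (a3[1] - a2[1]) - (a2[1] - a0[1]) * (a3[0] - a2[0])
    x = a0[0] + (t_num / denom) * (a1[0] - a0[0])   # rounded float arithmetic
    return round(x)                                  # nearby integer, error < 1

def compare_endpoint_with_intersection(xe, x_rounded):
    # Strict comparisons with the rounded abscissa agree with the exact order;
    # equality is the only case left undecided.
    if xe < x_rounded: return -1
    if xe > x_rounded: return +1
    return 0   # undecided: fall back to an exact test if needed

print(rounded_intersection_x((0, 0), (4, 4), (0, 3), (4, 1)))   # lines y=x and y=3-x/2 meet at x=2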
Remark 1. A result similar to Lemma 3 has been obtained by Priest [Pri92] for points with floating point coordinates. More precisely, if the endpoints of the segments are represented as simple precision floating-point numbers, Priest [Pri92] has proposed a rather complicated algorithm that uses double precision floating point arithmetic and rounds x I to the nearest simple precision floating point number. This stronger result also implies the monotonicity property.
4.4 Algebraic degree of the algorithms
The naive algorithm for detecting segment intersections (Pb1) evaluates Θ(n 2 ) Predicates 2 0 and thus is of degree 2, which is degree-optimal by the proposition above. Although the time-complexity of the naive algorithm is worst-case optimal, since k can be Θ(n 2 ), it is worth looking for an output sensitive algorithm whose complexity depends on both n and k. Chazelle and Edelsbrunner [CE92] have shown that Ω(n log n + k) is a lower bound for Pb1, and therefore also for Pb2 and Pb3.
A very recent algorithm of Balaban [Bal95] solves Pb1 optimally in O(n log n + k) time using O(n) space. This algorithm does not solve Pb2 nor Pb3 and, since it uses Predicate 3 0 , its degree is 3.
Pb2 can be solved by first solving Pb1 and subsequently sorting the reported intersection points along each segment. This can easily be done in O((n + k) log n) time by a simple algorithm of degree 4 using O(n) space. A direct (and asymptotically more efficient) solution to Pb2 has been proposed by Chazelle and Edelsbrunner [CE92]. Its time complexity is O(n log n + k). This algorithm, which constructs the arrangement of the segments, is of degree 4.
A solution to Pb3 can be deduced from a solution to Pb2 in O(n + k) time using a very complicated algorithm of Chazelle [Cha91]. A deterministic and simple algorithm due to Bentley and Ottmann [BO79] solves Pb3 in O((n + k) log n) time, which is slightly suboptimal, using O(n) space. This classical algorithm uses the sweep-line paradigm and evaluates O((n + k) log n) predicates of all types discussed above, and therefore has degree 5. Incremental randomized algorithms [CS89, BDS...] construct the trapezoidal map of the segments and thus solve Pb3 and have degree 5. Their time complexity and space requirements are optimal (though only as expected performances).
In this paper, we revisit the Bentley-Ottmann algorithm and show that a variant of degree 3 (instead of 5) can solve Pb1 with no sacrifice of performance (Section 6.1). Although this algorithm is slightly suboptimal with respect to time complexity, it is much simpler than Balaban's algorithm. We also present two variants of the sweep line algorithms. The first one (Section 6.2) uses only predicates of degree at most 2 and applies to the restricted but important special case where the segments belong to two subsets of non-intersecting segments. The second one (Section 7) uses the exact arithmetic of degree 2. All these results are based on a (non-efficient) lazy sweep-line algorithm (to be presented in Section 5) that solves Pb1 by evaluating predicates of degree at most 2.
Remark 2. When the segments are not in general position, the number s of intersection points can be less than the number k of intersecting pairs. In the extreme, s can be as small as 1 while k is quadratic. Some algorithms can be adapted so that their time complexities depend on s rather than k [BMS94]. However, a lower bound on the degree of such algorithms is 4 since they must be able to detect if two intersection points are identical, therefore to evaluate Predicate 4 0 .
5 A lazy sweep-line algorithm
Let S be a set of n segments whose endpoints are E 1 , . . . , E 2n . For a succinct review, the standard algorithm first sorts the endpoints by increasing x-coordinates and stores the sorted points in a priority queue X. Next, the algorithm begins sweeping the plane with a vertical line L and maintains a data structure Y that represents a subset of the segments of S (those currently intersected by L, ordered according to the ordinates of their intersections with L). Intersections are detected in correspondence of adjacencies created in Y, either by insertion/deletion of segment endpoints, or by order exchanges at intersections. An intersection, upon detection, is inserted into X according to its abscissa. Of course, a given intersection may be detected several times. Multiple detections can be resolved by performing a preliminary membership test for an intersection in X and omitting insertion if the intersection has been previously recorded. We stipulate to use another policy to resolve multiple detections, namely to remove from X an intersection point I whose associated segments are no longer adjacent in Y. Event I will be reinserted in X when the segments become again adjacent in Y. This policy has also the advantage of reducing the storage requirement of Bentley-Ottmann's algorithm to O(n) [Bro81].
We now describe a modification of the sweep-line algorithm that does not need to process the intersection points by increasing x-coordinates. First, the algorithm sorts the endpoints of the segments by increasing x-coordinates into an array X. Let E 1 , . . . , E 2n be the sorted list of endpoints. The algorithm also uses a dictionary Y that stores an ordered subset of the line segments.
The algorithm rests on the definitions of active and prime pairs to be given below. We need the following notations. We denote by L(E i ) the vertical line passing through E i ; (E i , E j ) denotes the open vertical slab bounded by L(E i ) and L(E j ), and (E i , E j ] denotes the semiclosed slab obtained by adjoining line L(E j ) to the open slab (E i , E j ). For two segments S k and S l , we denote by A kl their rightmost left endpoint, by B kl their leftmost right endpoint and by I kl their common point when they intersect (see Figure 2). In addition, W kl denotes the set of segment endpoints that belong to the (closed) region bounded by the vertical lines L(A kl ) and L(B kl ) and by the two segments (a double wedge). We denote by E kl the most recently processed element of W kl and by F kl the element of W kl to be processed next. (Note that E kl and F kl are always defined, since they may
Figure 2: For the definitions of W − kl and W + kl .
respectively coincide with A kl and B kl .) Lastly, we define sets W + kl and W − kl as follows. If S k and S l do not intersect, W + kl is empty and W − kl consists of all points of W kl . Otherwise, an endpoint E ∈ W kl belongs to W + kl (resp., to W − kl ) if the corresponding slab does (resp., does not) contain I kl .
Definition 4 Let (S k , S l ) be a pair of segments and assume without loss of generality that S k lies below S l just to the left of L(E kl ). The pair is said to be active if the following conditions are satisfied:
1. S k and S l are adjacent in Y ,
2. E kl ∈ W − kl ,
3. F kl ∈ W + kl .
Observe that the emptiness condition implies that the segments intersect.
Definition 5 A pair of active segments (S k , S l ) is said to be prime if the next element to be processed belongs to W kl (and therefore is F kl ∈ W + kl ).
We say that an intersection (or an active pair) is processed when the algorithm
reports it, exchanges its members in Y , and updates the set of active pairs of segments
accordingly. After sorting the segment endpoints, the lazy sweep-line algorithm
works as follows. While there are active pairs, the algorithm selects any of
them and processes it. When there are no more active pairs the algorithm proceeds
to the next endpoint, i.e., it inserts or removes the corresponding segment in Y (as
appropriate) and updates the set of active pairs. Actually, the next endpoint may be
accessed once there are no more prime pairs (a subset of the active pairs), without
placing any deadline on the processing of the current active pairs as long as they
are not prime. When there are no more active pairs and no more endpoints to be
processed, the algorithm stops.
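The following self-contained Python sketch (our own illustration — not the authors' algorithm or data structures, and under strong general-position assumptions) demonstrates the key point above: only the endpoints need to be sorted, and the crossings inside each slab can then be processed in an arbitrary order. It uses exact rational arithmetic in place of the degree-3 predicate of Section 6.1, and a quadratic-time status list in place of a dictionary.

from fractions import Fraction

def y_at(seg, x):
    # Exact ordinate of the (non-vertical) segment seg at abscissa x.
    (x0, y0), (x1, y1) = seg
    return Fraction(y0) + Fraction(y1 - y0, x1 - x0) * (Fraction(x) - x0)

def sweep_report(segments):
    # Report all intersecting pairs (as index pairs). Simplifying assumptions:
    # integer coordinates, no vertical segments, all endpoint abscissae distinct,
    # and no segment passes through an endpoint of another segment.
    segs = [tuple(sorted(s)) for s in segments]           # (left, right) endpoints
    events = sorted((pt, kind, i) for i, (l, r) in enumerate(segs)
                    for pt, kind in ((l, 'L'), (r, 'R')))
    Y, found = [], set()                                  # sweep status, reported pairs
    for (x, _), kind, i in events:
        # Bring Y to the vertical order at x; each adjacent transposition is a
        # crossing inside the slab just swept, processed in arbitrary order.
        swapped = True
        while swapped:
            swapped = False
            for j in range(len(Y) - 1):
                a, b = Y[j], Y[j + 1]
                if y_at(segs[a], x) > y_at(segs[b], x):
                    found.add((min(a, b), max(a, b)))
                    Y[j], Y[j + 1] = b, a
                    swapped = True
        if kind == 'L':
            pos = sum(1 for s in Y if y_at(segs[s], x) < y_at(segs[i], x))
            Y.insert(pos, i)
        else:
            Y.remove(i)
    return sorted(found)

print(sweep_report([((0, 0), (6, 6)), ((1, 5), (7, -1)), ((2, 1), (8, 1))]))   # [(0, 1), (1, 2)]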
For reasons that will be clear below, the algorithm will not be specified in the finest detail, since several different implementations are possible. The main issue is the efficient detection of active and prime pairs, and several solutions, all consistent with the described lazy algorithm, will be discussed in Sections 6 and 7.
It should be noted that deciding if a pair of intersecting segments is active or prime reduces to the evaluation of Predicates 2 only. Therefore, the algorithm just described involves only Predicates 1 and 2 and is of degree 2 by Proposition 1. It should also be pointed out that two intersection points, or even an intersection point and an endpoint, won't necessarily be processed in the order of their x-coordinates. As a consequence, Y won't necessarily represent the ordered set of segments intersecting some vertical line L (as in the standard algorithm).
We denote by Y − (E i ) and Y (E i ), respectively, snapshots of the data structure Y immediately before and after processing event E i ; Y (E i ) differs from Y − (E i ) only by the segment S that has E i as one of its endpoints. The order relation in Y is denoted by <.
Theorem 6 If Predicates 1 and 2 are evaluated exactly, the described lazy sweep-line
algorithm will detect all pairs of segments that intersect.
Proof : The algorithm (correctly) sorts the endpoints of the segments
by increasing x-coordinates into X. Consequently, the set of segments that intersect
L(E) and the set of segments in Y (E) coincide for any endpoint E. The proof of
the theorem is articulated now as two lemmas and their implications.
Lemma 7 Two segments have exchanged their positions in Y if and only if they
intersect and if the pair has been processed.
Proof. Let us consider two segments, say S k and S l , that do not intersect. Without loss of generality, let S k < S l in Y (A kl ). Assume for a contradiction that S l < S k in Y − (B kl ). The two segments cannot directly exchange their positions, because they will never form an active pair. Therefore, S l < S k in Y − (B kl ) can only happen if there exists a segment S m , m ≠ k, l, that at some stage in the execution of the algorithm was present in Y together with S k and S l and caused one of the following two events to occur:
1. S m > S l and the positions of S m and S k are exchanged in Y ,
2. S m < S k and the positions of S m and S l are exchanged in Y .
In both cases, the segments that exchange their positions are not consecutive in Y , violating Condition 1 of Definition 4.
Therefore, two segments can exchange their positions in Y only if they intersect, and this can only happen when their intersection is processed. Moreover, when the intersection has been processed, the segments are no longer active and cannot exchange their positions a second time. 2
We say that an endpoint E of S is correctly placed if and only if the subset of the segments that are below E (in the plane) coincides with the subset of the segments that are below S in Y (E). Otherwise, E is said to be misplaced.
Lemma 8 If Predicates 1 and 2 are evaluated exactly, both endpoints of every segment
are correctly placed.
Proof. Assume, for a contradiction, that E of S is the first endpoint to be misplaced by the algorithm.
Claim 1: E can be misplaced only if there exist at least two intersecting segments S k and S l in Y − (E) such that E belongs to W kl .
Proof. First recall that Predicate 2 is the only predicate involved in placing S in Y .
Consider first the case where E is the left endpoint of S. For any pair (S k , S l ) of segments in Y − (E) for which S is either above or below both S k and S l , the relative position of S with respect to S k and S l does not depend on the relative order of S k and S l in Y − (E). Therefore, E will be correctly placed in Y (E).
If E is a right endpoint, since the left endpoint of S has been correctly placed, E can only be misplaced if there exists a segment S 0 intersecting S such that the relative positions of S and S 0 in the plane change to the right of their rightmost left endpoint, while this change has not been executed by the algorithm in Y , i.e. S and S 0 have the same relative position in Y − (E) as when their rightmost left endpoint was processed. In that case, let S k and S l be two segments of Y − (E) such that E ∈ W kl . Assume without loss of generality that S k < S l in Y (E kl ). Since E is the first endpoint to be misplaced, we have S k below S l just to the left of L(E kl ). For convenience, we will say that two segments S p and S q have been exchanged between E 0 and E 00 , for two events E 0 and E 00 , if their positions in Y have been exchanged between the processing of these two events.
The case where E ∈ W − kl does not cause any difficulty, since S k and S l cannot be active between E kl and E and therefore S k and S l cannot be exchanged between E kl and E, which implies that E is correctly placed with respect to S k and S l .
The case where E ∈ W + kl is more difficult. E is not correctly placed only if S k and S l are not exchanged between E kl and E, i.e., S k < S l in both Y (E kl ) and Y − (E). We shall prove that this is not possible and therefore conclude that S is correctly placed into Y in this case as well.
Assume, for a contradiction, that S k and S l have not been exchanged between E kl and E. As E belongs to W + kl , S k and S l cannot be adjacent in Y − (E), since otherwise they would constitute a prime pair and they would have been exchanged. Let S k 1 , . . . , S k r be the subsequence of segments of Y − (E) occurring between S k and S l . Assume that (S k , S l ) is a pair of intersecting segments such that E ∈ W + kl for which r is minimal (i.e., for which the above subsequence is shortest). A direct consequence of this definition is the following claim.
Claim 2: For any S k i of the subsequence, if S k i intersects S k , E cannot belong to W + of the pair (S k , S k i ), and if S k i intersects S l , E cannot belong to W + of the pair (S k i , S l ).
We distinguish the two following cases:
Case 1: S k i intersects S l to the left of L(E). Because S k i and S l are correctly placed in Y (E k i l ) (E is the first endpoint to be misplaced) and misplaced in Y − (E), it then follows from Claim 2 that E belongs to W − of the pair (S k i , S l ). As E is the first endpoint to be misplaced, the corresponding order of S k i and S l holds in Y (E k i l ); since the pair (S k i , S l ) is not active (between E k i l and E), and therefore cannot be exchanged, the same inequality holds in Y − (E), which contradicts the definition of S k i .
Case 2: This case is entirely symmetric to the previous one. It suffices to exchange the roles of S k and S l and to reverse the relations < and < y .
Since a contradiction has been reached in both cases, the lemma is proved. 2
We now complete the proof of the theorem. The previous lemma implies that the endpoints are correctly processed. Indeed let E i be an endpoint. If E i is a right endpoint, we simply remove the corresponding segment from Y and update the set of active segments. This can be done exactly since predicates of degree ≤ 2 are evaluated correctly. If E i is a left endpoint, it is correctly placed in Y on the basis of the previous lemma.
The lemma also implies that all pairs that intersect have been processed. Indeed, if S p and S q are two intersecting segments such that S p lies below S q just to the left of L(B pq ), the lemma shows that S p < S q in Y − (B pq ) while S q < S p in Y (A pq ), which implies that the pair (S p , S q ) has been processed (Lemma 7).
This concludes the proof of the theorem. 2
Remark 3. Handling the degenerate cases does not cause any difficulty and the previous algorithm will work with only minor changes. For the initial sorting of the endpoints, we can take any order relation compatible with the order of their x-coordinates, e.g., the lexicographic order.
Remark 4. Theorem 6 applies directly to pseudo-segments, i.e., curved segments that intersect in at most one point. Lemmas 7 and 8 also extend to the case of monotone arcs that may intersect in more than one point. To be more precise, in Lemma 7, we have to replace "intersect" by "intersect an odd number of times"; Lemma 8 and its proof are unchanged provided that we define W + kl (resp. W − kl ) as the subset of W kl consisting of the endpoints E such that the corresponding slab ending at L(E) contains an odd number (resp. none or an even number) of intersection points. As a consequence, the lazy algorithm (which still uses only Predicates 1 and 2) will detect all pairs of arcs that intersect an odd number of times.
Remark 5. For line segments, observe that checking whether a pair of segments is active does not require to know (and therefore to maintain) E kl . In fact, we can replace Condition 3 in the definition of an active pair by the following condition. If I kl ≥ x E kl , the two definitions are identical, and if I kl < x E kl , the pair is not active since, by Lemma 8, Condition 2 of the definition won't be satisfied.
6 Efficient implementations of the lazy algorithm in the predicate arithmetic model
The difficulty of efficiently implementing the lazy sweep-line algorithm using only predicates of degree at most 2 (i.e., in the predicate arithmetic model of degree 2) is due to verification of the emptiness condition in Definition 4 and of the condition expressed by Definition 5. One can easily check that various known implementations of the sweep achieve straightforward verification of the emptiness condition by introducing algorithmic complications. The following subsection describes an efficient implementation of the lazy algorithm in the predicate arithmetic model of degree 3. The second subsection improves on this result in a special but important instance of Pb1, namely the case of two sets of non-intersecting segments. The algorithm presented there uses only predicates of degree at most 2.
6.1 Robustness of the standard sweep-line algorithm
We shall run our lazy algorithm under the predicate arithmetic model of degree 3. We then have the capability to correctly compare the abscissae of an intersection and of an endpoint. We refine the lazy algorithm in the following way. Let E i be the last processed endpoint and let E i+1 be the endpoint to be processed next. An active pair (S k , S l ) that occurs in Y between Y (E i ) and Y − (E i+1 ) is processed if and only if its intersection point I kl lies to the right of E i and not to the right of E i+1 .
As the slab (E i , E i+1 ] is free of endpoints in its interior, any pair of adjacent segments encountered in Y (between Y (E i ) and Y − (E i+1 )) that intersect within the slab is active. Moreover the intersection points of all prime pairs belong to the slab. It follows that this instance of the lazy algorithm need not explicitly check whether a pair is active or not, and therefore is much more efficient than the lazy algorithm of Section 5. This algorithm is basically what the original algorithm of Bentley-Ottmann becomes when predicates of degree at most 3 are evaluated (recall that the standard algorithm requires the capability to correctly execute predicates of degree up to 5).
With the policy concerning multiple detections of intersections that is stipulated at the beginning
of Section 5.
We therefore conclude with the following theorem :
Theorem 9 If Predicates 1, 2 and 3 are evaluated exactly, the standard sweep-line algorithm will solve Pb1 in O((n + k) log n) time.
It is now appropriate to briefly comment on the implementation details of the just described modified algorithm. Data structure Y is implemented as usual as a dictionary. Data structure X, however, is even simpler than in the standard algorithm (which uses a priority queue with dictionary access). Here X has a primary component realized as a static search tree on the abscissae of the endpoints; each endpoint E j points to a secondary data structure L(E j ) realized as a conventional linked list, containing (in an arbitrary order) the adjacent intersecting pairs whose intersections lie in the corresponding slab. Remember that each intersecting pair of L(E j ) is active. Insertion into L(E j ) is performed at one of its ends and so is access for reporting (when the plane sweep reaches the corresponding slab). To effect constant-time removal of a pair due to loss of adjacency, however, a pointer could be maintained from a fixed member of the pair (say, the one with smaller left endpoint in lexicographic order) to the record stored in X (notice that the described insertion/removal policy, which guarantees that the elements of X correspond to pairs of adjacent segments in Y , ensures that at most one record is to be pointed to by any member of Y ). We finally observe that a segment adjacency arising in Y during the execution of the algorithm must be tested for intersection; however, an intersecting pair of adjacent segments is eligible for insertion into X only as long as the plane sweep has not gone beyond the slab containing the intersection in question. As regards the running time, beside the initial sorting of the endpoints and the creation of the corresponding primary tree in time O(n log n), it is easily seen that each intersection uses O(log n) time (amortized), thereby achieving the performance of the standard algorithm.
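One deliberately simplified way to realize the structure X just described is sketched below in Python (the class and method names are ours, not the paper's); a real implementation would use the static search tree and intrusive linked lists described above instead of Python sets and dictionaries.

class SlabEvents:
    # Primary structure: the sorted endpoint abscissae (one slab per endpoint).
    # Secondary structure: for each slab, the currently adjacent intersecting
    # pairs whose intersection falls in that slab, with O(1) insert and removal.
    def __init__(self, endpoint_xs):
        self.xs = sorted(endpoint_xs)
        self.pairs_in_slab = [set() for _ in self.xs]
        self.slab_of_pair = {}

    def insert(self, pair, slab_index):
        # Called when the two segments of `pair` become adjacent in Y and their
        # intersection lies in a slab the sweep has not passed yet.
        self.pairs_in_slab[slab_index].add(pair)
        self.slab_of_pair[pair] = slab_index

    def remove(self, pair):
        # Called when the two segments of `pair` stop being adjacent in Y.
        j = self.slab_of_pair.pop(pair, None)
        if j is not None:
            self.pairs_in_slab[j].discard(pair)

    def active_pairs(self, slab_index):
        # The pairs to process when the sweep reaches this slab.
        return list(self.pairs_in_slab[slab_index])

X = SlabEvents([0, 3, 5, 9])
X.insert((1, 2), 2); X.remove((1, 2)); print(X.active_pairs(2))   # []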
Finally, we note that if only predicates of degree ≤ 2 are evaluated correctly, the algorithm of Bentley-Ottmann may fail to report the set of intersecting pairs of segments. See Figure 3 for an example.
Figure 3: If the computed x-coordinate of the intersection point of S 1 and S 2 is (erroneously) found to be smaller than the x-coordinate of the left endpoint of S 3 and if S 3 and S 4 are (correctly) inserted below S 2 and above S 1 respectively, then the intersection between S 3 and S 4 will not be detected. Observe that the missed intersection point can be arbitrarily far from the intersection point involved in the wrong decision.
Remark 6. The fact that the sweep line algorithm does not need to sort intersection points had already been observed by Myers [Mye85] and Schorn [Sch91]. Myers does not use it for solving robustness problems but for developing an algorithm with an expected running time of O(n log n + k). Schorn uses this fact to decrease the precision required by the sweep line algorithm from five-fold to three-fold, i.e., Schorn's algorithm uses exact arithmetic of degree 3. Using Theorem 6, we will show in Section 7 that double precision suffices.
6.2 Reporting intersections between two sets of nonintersecting
line segments
In this subsection, we consider two sets of line segments in the plane, S b (the blue set) and S r (the red set), where no two segments in S b (similarly, in S r ) intersect. Such a problem arises in many applications, including the union of two polygons and the merge of two planar maps. We denote by n b and n r the cardinalities of S b and S r , respectively, and let n = n b + n r .
Mairson and Stolfi [MS88] have proposed an algorithm that works for arcs of curves as well as for line segments. Its time complexity is O(n log n + k), which is optimal, and requires O(n + k) space (O(n) in the case of line segments). The same asymptotic time-bound has been obtained by Chazelle et al. [CEGS94] and by Chazelle and Edelsbrunner [CE92]. The latter algorithm is not restricted to two sets of non-intersecting line segments. Other algorithms have been proposed by Nievergelt and Preparata [NP82] and by Guibas and Seidel [GS87] in the case where the segments of S b (and S r ) are the edges of a subdivision with convex faces. With the exception of the algorithm of Chazelle et al. [CEGS94], all these algorithms construct the resulting arrangement and therefore have degree 4. The algorithm of Chazelle et al. requires to sort the intersection points of two segments with a vertical line passing through an endpoint. Therefore it is of degree 3.
We propose instead an algorithm that computes all the intersections but not the arrangement. This algorithm uses only predicates of degree ≤ 2 and has time complexity O((n + k) log n).
We say that a point E i is vertically visible from a segment S b ∈ S b if the vertical line segment joining E i with S b does not intersect any other segment in S b (the same notion is applicable to S r ). For two intersecting segments S b ∈ S b and S r ∈ S r , let L be a vertical line to the right of A br such that no other segment intersects L between S b and S r (i.e., S b and S r are adjacent). We let T br denote the wedge defined by S b and S r in the slab between L and L(I br ).
Our algorithm is based on the following observation :
Lemma 10 T br contains blue endpoints if and only if it contains a blue endpoint that is vertically visible from S b . Similarly, T br contains red endpoints if and only if it contains a red endpoint vertically visible from S r .
Proof: The sufficient condition is trivial, so we only prove necessity. Assume without loss of generality that S r lies below S b . Let E be the subset of the blue endpoints that belong to T br and CH + (E) their upper convex hull. Clearly, all vertices of CH + (E) are vertically visible from S b . 2
Our algorithm has two phases. The second one is the lazy algorithm of Section 5. The first one can be considered as a preprocessing step that will help to efficiently find active pairs of segments.
More specifically, our objective is to develop a quick test of the emptiness condition based on the previous lemma. The preprocessing phase is aimed at identifying the candidate endpoints for their potential belonging to wedges formed by intersecting adjacent pairs. Referring to S b (and analogously for S r ), we first sweep the segments of S b and construct, for each blue segment S b , two lists of blue endpoints that are vertically visible from S b and lie respectively below and above S b . The sweep takes time O(n log n) and the constructed lists are sorted by increasing abscissa. Since there is no intersection point, only predicates of degree ≤ 2 are used. The total size of all these lists is O(n).
As mentioned above, the crucial point is to decide whether the wedge T br of a pair of intersecting segments S b and S r adjacent in Y contains or not endpoints of other segments. Without any loss of generality we assume that S r < S b in Y . If such endpoints exist, then T br contains either a blue vertex of the upper convex hull of the blue endpoints it contains, or a red vertex of the lower convex hull of the red endpoints it contains.
We will show below that, using predicates of degree ≤ 2, the lists can be preprocessed in time O(n log n) and that deciding whether T br contains or not endpoints can be done in time O(log n), using only predicates of degree at most 2.
Assuming for the moment that this primitive is available, we can execute the plane sweep algorithm described earlier. Specifically, we sweep S b and S r simultaneously,
using the lazy sweep-line algorithm of Section 5 † . Each time we detect a pair of (adjacent) intersecting segments S b and S r , we can decide in time O(log n) whether they are "active" or "not active", using only predicates of degree ≤ 2.
We sum up the results of this section in the following theorem:
Theorem 11 Given n line segments in the plane belonging to two sets S b and S r , where no two segments in S b (analogously, in S r ) intersect, there exists an algorithm of optimal degree 2 that reports all intersecting pairs in O((n + k) log n) time using O(n) storage.
We now return to the implementation of the primitive described above. Suppose that, for some segment S i (i = b or r), we have constructed the upper convex hull of the relevant list of endpoints. Then we can detect in O(log n) time if an element of such a list lies above some segment S r . More specifically, we first identify among the edges of the hull the two consecutive edges whose slopes are respectively smaller and greater than the slope of S r . This only requires the evaluation of O(log n) predicates of degree 2. It then remains to decide whether the common endpoint E of the two reported edges lies above or below the line containing S r . This can be answered by evaluating the orientation predicate orient(E, A r , B r ), where A r and B r denote the endpoints of S r .
The crucial requirement of the adopted data structure is the ability to efficiently maintain these convex hulls. To this purpose, we propose the following solution. The data structure associated with a list of endpoints lying below a segment represents the upper convex hull of the list. (Similarly, the data structure associated with a list of endpoints lying above a segment represents the lower convex hull.) This implies that a binary search on the convex hull slopes uniquely identifies the test vertex. Since the elements of each list are already sorted by increasing x-coordinates, the data structures can be constructed in time proportional to their sizes, therefore in O(n) time in total. It can be easily checked that only orientation predicates (of degree 2) are involved in this process. To guarantee the availability of the convex hulls throughout the sweep, we have to ensure that our data structure can efficiently handle the deletion of elements. As elements are deleted in order of increasing abscissa, this can be done in amortized O(log n) time per deletion [HS90, HS96]. It follows that preprocessing all lists takes O(n) time, uses O(n) space and only requires the evaluation of predicates of degree ≤ 2.
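The binary search on the hull can be sketched as follows (our own illustration, not the authors' code); hull is assumed to be the vertex list of an upper convex hull sorted by increasing abscissa, and a, b the left and right endpoints of the red segment. Instead of comparing slopes directly, the sketch compares consecutive orientation values, an equivalent degree-2 test, since orient(a, b, ·) is unimodal along a concave chain.

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def hull_has_vertex_above(hull, a, b):
    # Binary search for the hull vertex farthest above the line (a, b); the
    # sequence orient(a, b, hull[i]) increases, then decreases, along the chain.
    lo, hi = 0, len(hull) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if orient(a, b, hull[mid + 1]) > orient(a, b, hull[mid]):
            lo = mid + 1
        else:
            hi = mid
    return orient(a, b, hull[lo]) > 0

# Example: hull of blue endpoints vs. the red segment from (0, 0) to (10, 0).
print(hull_has_vertex_above([(1, -2), (4, 3), (8, 1)], (0, 0), (10, 0)))   # True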
† We can adopt the policy of processing all active pairs before the next endpoint.
7 An efficient implementation of the lazy algorithm
under the exact arithmetic model of degree 2
We shall run the lazy algorithm of Section 6.1 under the exact arithmetic model of degree 2, i.e. Predicates 1 and 2 are evaluated exactly but Predicate 3 is implemented with exact arithmetic of degree 2 as explained in Section 4.3. Several intersection points may now be found to have the same abscissa as an endpoint. We refine the lazy algorithm in the following way. Let E i be the last processed endpoint and let E i+1 be the endpoint with an abscissa strictly greater than the abscissa of E i to be processed next. An active pair (S k , S l ) will be processed if and only if its intersection point is found to lie to the right of E i and not to the right of E i+1 .
We claim that this policy leads to efficient verification of the emptiness condition. Indeed, the intersections of all prime pairs belong to the slab (E i , E i+1 ] and, by the monotonicity property, I kl will be found to be not to the right of E i+1 .
The crucial observation that drastically reduces the time complexity is the following. A pair of adjacent segments (S k , S l ) encountered in Y between Y (E i ) and Y − (E i+1 ) whose intersection point is found to lie in slab (E i , E i+1 ] is active if and only if I kl lies to the right of L(E kl ). Indeed, since I kl is found to be < x E i+2 , the monotonicity property implies that I kl < x E i+2 . Therefore, when checking if a pair is active, it is sufficient to consider just the next endpoint, not all of them.
Theorem 6 therefore applies. If no two endpoints have the same x-coordinate, the algorithm can use the same data structures as the algorithm in Section 6.1 and its time complexity is clearly the same as for Bentley-Ottmann's algorithm. Otherwise, we construct X on the distinct abscissae of the endpoints and store all endpoints with identical x-coordinates in a secondary search structure with endpoints sorted by y-coordinates. This secondary structure will allow us to determine if a pair is active in logarithmic time by binary search. We conclude with the following theorem:
Theorem 12 Under the exact arithmetic model of degree 2, the instance of the lazy algorithm described above solves Pb1 in O((n + k) log n) time.
8 Conclusion
Further pursuing our investigations in the context of the exact-computation paradigm, in this paper we have illustrated that important problems on segment sets (such as intersection report, arrangement, and trapezoidal map), which are viewed as equivalent under the Real-RAM model of computation, are distinct if their arithmetic degree is taken into account. This sheds new light on robustness issues which are intimately connected with the notion of algorithmic degree and illustrates the richness of this new direction of research.
For example, we have shown that the well-known plane-sweep algorithm of Bentley-Ottmann uses more machinery than strictly necessary, and can be appropriately modified to report segment intersections with arithmetic capabilities very close to optimal and no sacrifice in performance.
Another result of our work is that exact solutions of some problems can be obtained even if approximate (or even random) evaluations of some predicates are performed. More specifically, using less powerful arithmetic than demanded by the application, we have been able to compute the vertices of an arrangement of line segments by constructing an arrangement which may be different from the actual one (and may not even correspond to any set of straight line segments) but still have the same vertex set.
Our work shows that the sweep-line algorithm is more robust than usually believed, proposes practical improvements leading to robust implementations, and provides a better understanding of the sweeping line paradigm. The key to our technique is to relax the horizontal ordering of the sweep. This is one step further after similar attempts, aimed though at different purposes [Mye85, MS88, EG89].
A host of interesting open questions remain. One such question is to devise an output-sensitive algorithm for reporting segment intersections with optimal time complexity and with optimal algorithmic degree (that is, 2). It would also be interesting to examine the plane-sweep paradigm in general. For example, with regard to the construction of Voronoi diagrams in the plane, one should elucidate the reasons for the apparent gap between the algorithmic degrees of Fortune's plane-sweep solution and of the (optimal) divide-and-conquer and incremental algorithms.
Acknowledgments
We are indebted to H. Brönnimann for having pointed out an error in a previous version of this paper and to O. Devillers for discussions that led to Lemma 3. S. Pion, M. Teillaud and M. Yvinec are also gratefully acknowledged for their comments on this work.
--R
An optimal algorithm for
Applications of random sampling to on-line algorithms in computational geometry
Computing exact geometric predicates using modular arithmetic with single precision.
Jochen K
On degeneracy in geometric computations.
Algorithms for reporting and counting geometric intersections.
Introduction to higher algebra.
Comments on "Algorithms for reporting and counting geometric intersections".
EOEcient exact evaluation of signs of determinants.
An optimal algorithm for intersecting line segments in the plane.
Algorithms for bichromatic line segment problems and polyhedral terrains.
Triangulating a simple polygon in linear time.
Applications of random sampling in computational geometry
Topologically sweeping an arran- gement
Computational geometry in practice.
Computational geometry and software engineering: Towards a geometric computing environment.
EOEcient exact arithmetic for computational geometry.
What every computer scientist should know about floating-point arithmetic
Computing convolutions by reciprocal search.
Robust set operations on polyhedral solids.
The problems of accuracy and robustness in geometric computation.
Applications of a semi-dynamic convex hull algorithm
Robust proximity queries in implicit Voronoi diagrams.
Double precision geometry: a general technique for calculating line and segment intersections using rounded arithmetic.
An O(E log E
On properties of floating point arithmetics: numerical stability and the cost of accurate computations.
Robust Algorithms in a Program Library for Geometric Computation.
Robust adaptive floating-point geometric predicates.
Towards exact geometric computation.
--TR
--CTR
Olivier Devillers , Alexandra Fronville , Bernard Mourrain , Monique Teillaud, Algebraic methods and arithmetic filtering for exact predicates on circle arcs, Proceedings of the sixteenth annual symposium on Computational geometry, p.139-147, June 12-14, 2000, Clear Water Bay, Kowloon, Hong Kong
Ferran Hurtado , Giuseppe Liotta , Henk Meijer, Optimal and suboptimal robust algorithms for proximity graphs, Computational Geometry: Theory and Applications, v.25 n.1-2, p.35-49, May
Leonardo Guerreiro Azevedo , Ralf Hartmut Güting , Rafael Brand Rodrigues , Geraldo Zimbrão , Jano Moreira de Souza, Filtering with raster signatures, Proceedings of the 14th annual ACM international symposium on Advances in geographic information systems, November 10-11, 2006, Arlington, Virginia, USA
Menelaos I. Karavelas , Ioannis Z. Emiris, Root comparison techniques applied to computing the additively weighted Voronoi diagram, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Elmar Schömer , Nicola Wolpert, An exact and efficient approach for computing a cell in an arrangement of quadrics, Computational Geometry: Theory and Applications, v.33 n.1-2, p.65-97, January 2006 | segment intersection;robust algorithms;plane sweep;computational geometry
352753 | Binary Space Partitions for Fat Rectangles. | We consider the practical problem of constructing binary space partitions (BSPs) for a set S of n orthogonal, nonintersecting, two-dimensional rectangles in ${\Bbb R}^3$ such that the aspect ratio of each rectangle in $S$ is at most $\alpha$, for some constant $\alpha \geq 1$. We present an $n2^{O(\sqrt{\log n})}$-time algorithm to build a binary space partition of size $n2^{O(\sqrt{\log n})}$ for $S$. We also show that if $m$ of the $n$ rectangles in $S$ have aspect ratios greater than $\alpha$, we can construct a BSP of size $n\sqrt{m}2^{O(\sqrt{\log n})}$ for $S$ in $n\sqrt{m}2^{O(\sqrt{\log n})}$ time. The constants of proportionality in the big-oh terms are linear in $\log \alpha$. We extend these results to cases in which the input contains nonorthogonal or intersecting objects. | Introduction
How to render a set of opaque or partially transparent objects
in IR 3 in a visually realistic way is a fundamental problem
in computer graphics [12, 22]. A central component of
this problem is hidden-surface removal: given a set of ob-
jects, a viewpoint, and an image plane, compute the scene
visible from the viewpoint on the image plane. Because
of its importance, the hidden-surface removal problem has
Support was provided by National Science Foundation research
grant CCR-93-01259, by Army Research Office MURI grant DAAH04-
96-1-0013, by a Sloan fellowship, by a National Science Foundation NYI
award and matching funds from Xerox Corp, and by a grant from the U.S.-
Israeli Binational Science Foundation. Address: Box 90129, Department
of Computer Science, Duke University, Durham, NC 27708-0129. Email:
pankaj@cs.duke.edu
y Support was provided by Army Research Office grant DAAH04-93-
G-0076. This work was partially done when the author was at Duke Uni-
versity. Address: Max-Planck-Institut für Informatik, Im Stadtwald, 66
Saarbrücken, Germany. Email: eddie@math.uri.edu
z This author is affiliated with Brown University. Support was provided
in part by National Science Foundation research grant CCR-9522047 and
by Army Research Office MURI grant DAAH04-96-1-0013. Address:
Box 90129, Department of Computer Science, Duke University, Durham,
NC 27708-0129. Email: tmax@cs.duke.edu
x Support was provided in part by National Science Foundation re-search
grant CCR-9522047, by Army Research Office grant DAAH04-
93-G-0076, and by Army Research Office MURI grant DAAH04-96-
1-0013. Address: Box 90129, Department of Computer Science, Duke
University, Durham, NC 27708-0129. Email: jsv@cs.duke.edu
been studied extensively in both the computer graphics and
the computational geometry communities [11, 12]. One of
the conceptually simplest solutions to this problem is the
z-buffer algorithm [6, 12]. This algorithm sequentially processes
the objects; and for each object it updates the pixels
of the image plane covered by the object, based on the distance
information stored in the z-buffer. A very fast hidden-surface
removal algorithm can be obtained by implementing
the z-buffer in hardware. However, the cost of a hardware
z-buffer is very high. Only special-purpose and costly
graphics engines contain fast z-buffers, and z-buffers implemented
in software are generally inefficient. Even when
fast hardware z-buffers are present, they are not fast enough
to handle the huge models (containing hundreds of millions
of polygons) that often have to be displayed in real time. As
a result, other methods have to be developed either to "cull
away" a large subset of invisible polygons so as to decrease
the rendering load on the z-buffer (when models are large;
e.g., see [23]) or to completely solve the hidden-surface removal
problem (when there are very slow or no z-buffers).
One technique to handle both of these problems is the binary
space partition (BSP) introduced by Fuchs et al. [14].
They used the BSP to implement the so-called "painter's al-
gorithm" for hidden-surface removal, which draws the objects
to be displayed on the screen in a back-to-front order
(in which no object is occluded by any object earlier in the
order). In general, it is not possible to find a back-to-front
order from a given viewpoint for an arbitrary set of objects.
By fragmenting the objects, the BSP ensures that from any
viewpoint a back-to-front order can be determined for the
fragments.
Informally, a BSP for a set of objects is a tree each of
whose nodes is associated with a convex region of space.
The regions associated with the leaves of the tree form a
convex decomposition of space, and the interior of each
region does not intersect any object. The fragments created
by the BSP are stored at appropriate nodes of the BSP.
Given a viewpoint p, the back-to-front order is determined
by a suitable traversal of the BSP. For each node v of the
BSP, the objects in one of v's subtrees are separated from
those in v's other subtree by a hyperplane. The viewpoint p
will lie in one of the regions bounded by the hyperplane
at v. The traversal recursively visits first the child of v corresponding
to the halfspace not containing p and then the
other child of v. The efficiency of the traversal, and thus
of the hidden-surface removal algorithm, depends upon the
size of the BSP.
The BSP has subsequently proven to be a versatile data
structure with applications in many other problems that
arise in practice-global illumination [5], shadow generation
[7, 8, 9], ray tracing [19], visibility problems [3, 23],
solid geometry [17, 18, 24], robotics [4], and approximation
algorithms for network flows and surface simplification
[2, 16].
Although several simple heuristics have been developed
for constructing BSPs of reasonable sizes [3, 13, 14, 23, 24],
provable bounds were first obtained by Paterson and Yao.
They show that a BSP of size O(n 2 ) can be constructed
for n disjoint triangles in IR 3 , which is optimal in the worst
case [20]. But in graphics-related applications, many common
environments like buildings are composed largely of
orthogonal rectangles. Moreover, many graphics algorithms
approximate non-orthogonal objects by their orthogonal
bounding boxes and work with the bounding boxes [12].
In another paper, Paterson and Yao show that a BSP of
size O(n√n) exists for n non-intersecting, orthogonal rectangles
in IR^3 [21]. This bound is optimal in the worst case.
In all known lower bound examples of orthogonal rectangles
in IR^3 requiring BSPs of size Ω(n√n), most of the rectangles
are "thin." For example, Paterson and Yao's lower
bound proof uses a configuration of Θ(n) orthogonal rectangles,
arranged in a √n × √n × √n grid, for which any
BSP has size Ω(n√n). All rectangles in their construction
have aspect ratio Θ(√n). Such configurations of thin
rectangles rarely occur in practice. Many real databases
consist mainly of "fat" rectangles, i.e., the aspect ratios of
these rectangles are bounded by a constant. An examination
of four datasets-the Sitterson Hall, the Orange United
Methodist Church Fellowship Hall, and the Sitterson Hall
Lobby databases from the University of North Carolina at
Chapel Hill and the model of Soda Hall from the University
of California at Berkeley-shows that most of the rectangles
in these models have aspect ratio less than 30.
It is natural to ask whether BSPs of near-linear size can
be constructed if most of the rectangles are "fat." We call a
rectangle fat if its aspect ratio (the ratio of the longer side
to the shorter side) is at most ff, for a fixed constant ff - 1.
A rectangle is said to be thin if its aspect ratio is greater
than ff. In this paper, we consider the following problem:
Given a set S of n non-intersecting, orthogonal,
two-dimensional rectangles in IR 3 , of which m
are thin and the remaining are fat, construct
a BSP for S.
We first show how to construct a BSP of
size n · 2^O(√log n) for n fat rectangles in IR^3 (i.e., m = 0).
We then show that if m > 0, a BSP
of size n√m · 2^O(√log n) can be built. This bound comes
close to the lower bound of Ω(n√m).
We finally prove two important extensions to these results.
We show that an np · 2^O(√log n)-size BSP exists if p
of the n input objects are non-orthogonal. Unlike in the
case of orthogonal objects, fatness does not help in reducing
the worst-case size of BSPs for non-orthogonal objects.
In particular, there exists a set of n fat triangles in IR^3 for
which any BSP has size Ω(n²). However, non-orthogonal
objects can be approximated by orthogonal bounding boxes.
The resulting bounding boxes might intersect each other.
Motivated by this observation, we also consider the problem
where n fat rectangles contain k intersecting pairs
of rectangles, and we show that we can construct a BSP
of size (n + √k) · 2^O(√log n). There is a lower bound
of Ω(n + √k) on the size of such a BSP.
In all cases, the constant of proportionality in the big-oh
terms is linear in log ff, where ff is the maximum aspect ratio
of the fat rectangles. Our algorithms to construct these
BSPs run in time proportional to the size of the BSPs they
build, except in the case of non-orthogonal objects, when
the running time exceeds the size by a factor of p. Experiments
demonstrate that our algorithms work well in practice
and construct BSPs of near-linear size when most of
the rectangles are fat, and perform better than Paterson and
Yao's algorithm for orthogonal rectangles [1].
As far as we are aware, ours is the first work to consider
BSPs for the practical and common case of two-
dimensional, fat polygons in IR 3 . de Berg considers a
weaker model, the case of fat polyhedra in IR 3 (a polyhedron
is said to be fat if its volume is at least a constant
fraction of the volume of the smallest sphere enclosing it),
although his results extend to higher dimensions [10].
One of the main ingredients of our algorithm is
an O(n log n)-size BSP for a set of n fat rectangles that are
"long" with respect to a box B, i.e., none of the vertices of
the rectangles lie in the interior of B. To prove this result,
we crucially use the fatness of the rectangles. We can use
this procedure to construct a BSP of size O(n 4=3 ) for fat
rectangles. The algorithm repeatedly applies cuts that bisect
the set of vertices of rectangles in the input set S until
all sub-problems have long rectangles and the total size of
the sub-problems is O(n 4=3 ), at which point we can invoke
the algorithm for long rectangles. We improve the size of
the BSP to
log n ) by simultaneously simulating the
algorithm for long rectangles and partitioning the vertices
of rectangles in S in a clever manner.
The rest of the paper is organized as follows: Section 2
gives some preliminary definitions. In Section 3, we show
how to build an O(n log n)-size BSP for n long rectangles.
Then we show how to construct a BSP of size O(n 4=3 ) in
Section 4. Sections 5 and 6 present and analyze a better algorithm
that constructs a BSP of size O(
log n ) . We extend
this result in Section 7 to construct BSPs in cases when
some objects in the input are (i) thin, (ii) non-orthogonal, or
(iii) intersecting. We conclude in Section 8 with some open
problems.
Due to lack of space, we defer many proofs to the full
version of the paper.
2. Geometric preliminaries
A binary space partition B for a set S of pairwise-
disjoint, (d \Gamma 1)-dimensional, polyhedral objects in IR d
is a tree recursively defined as follows: Each node v
in B represents a convex region R v and a set of objects
that intersect R v . The region
associated with the root is IR d itself. If S v is
empty, then node v is a leaf of B. Otherwise, we partition
v's region R v into two convex regions by a cutting
the set of objects in S v that lie in H v . If we let H
be the positive halfspace and H \Gamma
v the negative halfspace
bounded by H v , the regions associated with the left and
right children of v are R
v and R
tively. The left subtree of v is a BSP for the set of objects
and the right subtree of v
is a BSP for the set of objects S
g.
The size of B is the number of nodes in B.
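For concreteness, the following is a minimal Python sketch (not from the paper) of a BSP node together with the back-to-front traversal used by the painter's algorithm mentioned in the introduction. The names BSPNode, side_of, and back_to_front, and the (normal, offset) representation of a cutting plane, are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]
Plane = Tuple[Point, float]  # (normal n, offset d): the plane is {x : n . x = d}

@dataclass
class BSPNode:
    cut: Optional[Plane] = None          # cutting plane H_v (None at a leaf)
    on_plane: List[object] = field(default_factory=list)  # fragments lying in H_v
    front: Optional["BSPNode"] = None    # subtree for the positive halfspace
    back: Optional["BSPNode"] = None     # subtree for the negative halfspace

def side_of(plane: Plane, p: Point) -> float:
    """Signed distance of p from the plane; > 0 means the positive halfspace."""
    (nx, ny, nz), d = plane
    return nx * p[0] + ny * p[1] + nz * p[2] - d

def back_to_front(node: Optional[BSPNode], viewpoint: Point, out: List[object]) -> None:
    """Append fragments in back-to-front order with respect to the viewpoint,
    visiting first the child whose halfspace does not contain the viewpoint."""
    if node is None:
        return
    if node.cut is None:
        out.extend(node.on_plane)
        return
    if side_of(node.cut, viewpoint) > 0:
        back_to_front(node.back, viewpoint, out)
        out.extend(node.on_plane)
        back_to_front(node.front, viewpoint, out)
    else:
        back_to_front(node.front, viewpoint, out)
        out.extend(node.on_plane)
        back_to_front(node.back, viewpoint, out)
```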
Suppose v is a node of B. In all our algorithms, the
region R v associated with v is a box (rectangular paral-
lelepiped). We say that a rectangle r is long with respect
to R v if none of the vertices of r lie in the interior of R v .
Otherwise, r is said to be short. A long rectangle is said to be free if
its edges lie on the boundary of R v ; otherwise
it is non-free. A free cut is a cutting plane that divides S into
two non-empty sets and does not cross any rectangle in S.
Note that the plane containing a free rectangle is a free cut.
Free cuts will be very useful in preventing excessive fragmentation
of the objects in S.
We will often focus on a box B and construct a BSP for
the rectangles intersecting it. Given a set of rectangles R,
let
be the set of rectangles obtained by clipping the rectangles
in R within B. For a set of points P , let PB be the subset
of P lying in the interior of B.
Although a BSP is a tree, we will often discuss just how
to partition the region represented by a node into two convex
regions. We will not explicitly detail the associated construction
of the actual tree itself.
z-axis
y-axis
x-axis
Right face
Top face
Front face
Figure
1. Different classes of rectangles.
3. BSPs for long fat rectangles
Let S be a set of fat rectangles. Assume that all the rectangles
in S are long with respect to a box B. In this section,
we show how to build a BSP for SB , the set of rectangles
clipped within B. The box B has six faces-top, bottom,
front, back, right, and left. We assume, without loss of gen-
erality, that the back, bottom, left corner of B is the origin
(i.e., the back face of B lies on the yz-plane). See Figure 1.
A rectangle s belongs to the top class if two parallel
edges of s are contained in the top and bottom faces of B.
We similarly define the front and right classes. A long rectangle
belongs to at least one of these three classes; a non-free
rectangle belongs to a unique class. See Figure 1 for
examples of rectangles belonging to different classes.
In general, SB can have all three classes of rectangles.
We first exploit the fatness of the rectangles to prove that
whenever all three classes are present in SB , a small number
of cuts can divide B into boxes each of which has only two
classes of rectangles. Then we describe an algorithm that
constructs a BSP when all the rectangles belong to only two
classes.
We first state two preliminary lemmas that we will use
below and in Section 5. The first lemma characterizes a set
of rectangles that are long with respect to a box and belong
to one class. The second lemma applies to two classes of
long rectangles. The parameter a is real and non-negative.
C be a box, P a set of points in the interior
of C, and R a set of rectangles long with respect to C . If
the rectangles in RC belong to one class,
(i) there exists a face g of the box C that contains one of
the edges of each rectangle in RC .
be the set of vertices of the rectangles
in RC that lie in the interior of g.
In time, we can find a plane h
that partitions C into two boxes C 1 and C 2 such
that (jV " C
2.
C be a box, P a set of points in the interior
of C, and R a set of long rectangles with respect to C such
that the rectangles in RC belong to two classes. We can
find two parallel
that partition C into three boxes
either
(ii) there is some 1 - i - 3 such that jRC i
all rectangles in RC i
belong to
the same class.
3.1. Reducing three classes to two classes
Let B and SB be as defined earlier. Assume, without loss
of generality, that the longest edge of B is parallel to the x-
axis. The rectangles in SB that belong to the front class can
be partitioned into two subsets: the set R of rectangles that
are vertical (and parallel to the right face of box B) and the
set T of rectangles that are horizontal (and parallel to the top
face of box B). See Figure 2(a). Let e be the edge of B that
lies on the z-axis. The intersection of each rectangle in R
with the back face of B is a segment parallel to the z-axis.
r denote the projection of this segment onto the z-axis,
and let -
r. Let z be the
endpoints of intervals in -
R that lie in the interior of e but
not in the interior of any interval of -
R. (Note that
be less than 2jRj, as in Figure 2(b), if some of the projected
segments overlap.) Similarly, for each rectangle t in the
set T , we define ~ t to be the projection of t on the y-axis,
and ~
~
t. Let y be the y-coordinates
of the vertices of intervals in ~
T defined in the
same way as
We divide B into kl boxes by drawing the planes
and the planes
Figure
2(b). This decomposition of B into kl boxes
can easily be constructed in a tree-like fashion by performing
cuts. We refer to these cuts as ff-cuts.
If any resulting box has a free rectangle (such as t in Figure
2(b)), we divide that box into two boxes by applying the
C be the set of boxes
into which B is partitioned in this manner. We can prove
the following theorem about the decomposition of B into C.
This is the only place in the whole algorithm where we use
the fatness of the rectangles in S.
Lemma 3 The set of boxes C formed by the above process
satisfies the following properties:
y-axis
z-axis
c
a
x-axis
s
r
(a)
z 3
z 0
a
(b)
Figure
2. (a) Rectangles belonging to the sets R and
T . (b) The back face of B; dashed lines are intersections
of the back face with the ff-cuts.
(i) Each box C in C has only two classes of rectangles,
(ii) There are at most 26bffc 2 n boxes in C, and
be the endpoints of e, the edge of the
that lies on the z-axis. Similarly, define y 0 and y l to
be the endpoints of the edge of B that lies on the y-axis.
C be a box in C. If C does not contain
a rectangle from T [ R, the proof is trivial since the
rectangles in T and R together constitute the front class.
Suppose C contains rectangles from the set R. Rectangles
in R belong to the front class and are parallel to the right
face of B. We claim that C cannot have any rectangles
from the right class. To see this claim, consider an edge of
C parallel to the z-axis. The endpoints of this edge have
z-coordinates z i and z i+1 , for some
contains a rectangle from R, the interval z i z i+1 must
be covered by projections of rectangles from R onto the
z-axis. Any rectangle from the right class inside C must
intersect one of the rectangles whose projections cover
z i z i+1 . That cannot happen since the rectangles in S do
not intersect each other. A similar proof shows that if C
contains rectangles from T , then C is free of rectangles in
the top class.
We first show that both k and l are at
most 3. Let a (respectively, b; c) denote the length
of the edges of B parallel to the z-axis (respectively, y-
axis, x-axis). By assumption, a; b - c. Let r be a
rectangle from R with dimensions z and x, where z - x.
Consider -
r, the projection of r onto the z-axis. Suppose
that -
r lies in the
interior of the edge e of B lying on the z-axis. Since r
is a rectangle in the front class and is parallel to the right
face of B, we know that z - a - the rectangle
supporting r in the set S ; has dimensions -
z and - x, where
z and x -
x. (If i is 0 or k \Gamma 1,
we cannot claim that
z; in these cases, it is possible
that z is much less than -
z.) See
Figure
3. We see that
a
It follows that the length of - r, and hence the length
of z i z i+1 , is at least a=ff. Since every alternate interval
for at least one rectangle
s in R, k is at most 2bffc + 3. In a similar manner, l
is also at most 2bffc + 3.
This implies that B is divided into at
most kl - (2bffc boxes by the planes
and the planes
Each such box C can contain at most n rectangles. Hence,
at most n free cuts can be made inside C. The free cuts can
divide C into at most n boxes. This implies that the
set C has at most at most 26bffc 2 n boxes.
Each rectangle r in SB is cut into at most
kl pieces. The edges of these pieces form an arrangement
on r. Each face of the arrangement is one of the at most kl
rectangles that r is partitioned into. Only 2(k
of the arrangement have an edge on the boundary of r. All
other faces can be used as free cuts. Hence, after all possible
free cuts are used in the boxes into which B is divided
by the kl cuts, only 2(k pieces of each rectangle
in SB survive. This proves that
for two classes of long rectangles
Let C be one of the boxes into which B is partitioned in
Section 3.1. We now present an algorithm for constructing
r
a r
z
x
Figure
3. Projections of - r (the dashed rectan-
shaded rectangle), and the right
face of B onto the zx-plane.
a BSP for the set of clipped rectangles SC , which has only
two classes of long rectangles. We recursively apply the following
steps to each of the boxes produced by the algorithm
until no box contains a rectangle.
1. If SC has a free rectangle, we use the free cut containing
that rectangle to split C into two boxes.
2. If SC has two classes of rectangles, we use Lemma 2
(with to split C into at most three
boxes using two parallel free cuts.
3. If SC has only one class of rectangles, we split C into
two by a plane as suggested by Lemma 1 (with
and
We first analyze the algorithm for two classes of long
rectangles. The BSP produced has the following struc-
ture: If Step 3 is executed at a node v, then Step 2 is not
invoked at any descendant of v. In view of Lemma 2,
repeated execution of Steps 1 or 2 on SC constructs
in O(jSC j log jS C of the BSP
with O(jSC nodes such that each leaf in TC has only
one class of rectangles and the total number of rectangles
in all the leaves is at most jS C j. At each leaf v of the
tree recursive invocations of Steps 1 and 3 build a BSP
of size O(jS v j log jS v j) in O(jS v j log jS v
for details). Since
where the sum is taken
over all leaves v of TC , the total size of the BSP constructed
inside C is O(jSC j log jS C j).
We now analyze the overall algorithm for long rect-
angles. The algorithm first applies the ff-cuts to the
rectangles in SB , as described in Section 3.1. Consider
the set of boxes C produced by the ff-cuts. Each
of the boxes in C contains only two classes of rectangles
(by Lemma 3(i)). In view of the above discus-
sion, for each box C 2 C, we can construct a BSP
for SC of size O(jSC j log jS C
Lemma 3(ii) and 3(iii) imply that the total size of the BSP
n). The
BSP can be built in the same time. We can now state the
following theorem.
Theorem 1 Let S be a set of n fat rectangles and B a box
so that all rectangles in S are long with respect to B. Then
an O(n log n)-size BSP for the clipped rectangles SB can
be constructed in O(n log n) time. The constants of proportionality
in the big-oh terms are linear in ff 2 , where ff is the
maximum aspect ratio of the rectangles in S :
Remark: In our algorithm for two classes of long rectan-
gles, by using in Step 3 above the algorithm of Paterson and
Yao for constructing linear-size BSPs for orthogonal segments
in the plane [21], rather than their O(n log n) algorithm
for arbitrarily-oriented segments in the plane [20], we
can improve the size of the BSP to linear. This improvement
implies that we can construct linear-size BSPs for long rect-
angles. We will not need this improved result below, except
in Section 4.
4. BSPs of size O(n 4=3 )
In this section, we present a simple algorithm that constructs
a BSP of size O(n 4=3 ) for n fat rectangles. We then
use the intuition gained from the O(n 4=3 ) algorithm to develop
an improved BSP algorithm in Section 5. We analyze
the improved algorithm in Section 6.
We need a definition before describing the algorithm. A
bisecting cut is an orthogonal cut that divides B into two
boxes and bisects the set of vertices of rectangles in S that
lie in the interior of B.
The algorithm for fat rectangles proceeds in phases. A
phase is a sequence of three bisecting cuts, with exactly one
cut perpendicular to each of the three orthogonal directions.
After each phase, if a box contains a free rectangle, we use
the corresponding free cut to further divide the box into two.
We begin the first phase with a box enclosing all the rectangles
with at most 4n vertices in its interior (since there
are n rectangles in S each with four vertices) and continue
executing phases of bisecting cuts until each node has no
vertex in its interior. At termination, each node contains
only long rectangles. We then invoke the algorithm for long
rectangles to construct a BSP in each of these nodes.
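As a rough illustration (not the paper's implementation), the Python sketch below performs the three bisecting cuts of a single phase by splitting each current box at the median vertex coordinate along one axis at a time; free cuts and the recursion over phases are omitted, and the names Box, median_cut, split_box, and phase are hypothetical.

```python
from typing import List, Tuple

Point = Tuple[float, float, float]
Box = Tuple[Point, Point]  # (min corner, max corner)

def median_cut(vertices: List[Point], axis: int) -> float:
    """Coordinate of a bisecting cut: the median of the vertex coordinates on `axis`."""
    coords = sorted(v[axis] for v in vertices)
    return coords[len(coords) // 2]

def split_box(box: Box, axis: int, c: float) -> Tuple[Box, Box]:
    lo, hi = box
    left_hi = tuple(c if i == axis else hi[i] for i in range(3))
    right_lo = tuple(c if i == axis else lo[i] for i in range(3))
    return (lo, left_hi), (right_lo, hi)

def phase(box: Box, vertices: List[Point]) -> List[Box]:
    """One phase: a bisecting cut perpendicular to each of the x, y and z directions,
    applied to every current sub-box; each resulting box holds roughly at most
    one eighth of the vertices that lie in its interior."""
    boxes = [(box, vertices)]
    for axis in range(3):
        next_boxes = []
        for b, verts in boxes:
            inside = [v for v in verts if b[0][axis] < v[axis] < b[1][axis]]
            if not inside:
                next_boxes.append((b, verts))
                continue
            c = median_cut(inside, axis)
            left, right = split_box(b, axis, c)
            next_boxes.append((left, [v for v in verts if v[axis] <= c]))
            next_boxes.append((right, [v for v in verts if v[axis] > c]))
        boxes = next_boxes
    return [b for b, _ in boxes]
```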
The crux of the analysis of the size of the BSP produced
by this algorithm is counting how many pieces one rectangle
can split into when subjected to a specified number of
phases. To this effect, we use the following result due to
Paterson and Yao [21].
Lemma 4 (Paterson-Yao) A rectangle that has been subjected
to d phases of cuts (with free cuts used whenever
possible) is divided into O(2 d ) rectangles.
Theorem 2 A BSP in IR 3 of size O(n 4=3 ) can be constructed
for n fat orthogonal rectangles. The constant of
proportionality in the big-oh term is linear in ff 2 , where ff
is the maximum aspect ratio of the input rectangles.
vertices in its interior, one
phase of cuts partitions B into boxes each of which has
at most k=8 vertices in its interior. Since we start with
n rectangles that have at most 4n vertices, the number
of phases executed by the above algorithm is at
most d(log n)=3 now implies that the total
number of rectangles formed once all the phases are executed
is O(n \Theta 2 d(log n)=3+2=3e stage,
all nodes have only long rectangles. Hence, Theorem 1 and
the remark at the end of Section 3 imply that constructing
a BSP in each of these nodes increases the total size of the
BSP only by a constant factor. This proves the theorem. 2
5. The improved algorithm
The algorithm proceeds in rounds. Each round simulates
a few steps of the algorithm for long rectangles as well as
partitions the vertices of the rectangles in S into a small
number of sets of approximately equal size. At the beginning
of the ith round, where i ? 0, the algorithm has a top
of the BSP for be the set of boxes associated
with the leaves of B i containing at least one rectan-
gle. The initial tree B 1 consists of one node and Q 1 consists
of one box that contains all the input rectangles. Our algorithm
maintains the invariant that for each box
long rectangles in SB are non-free. If Q i is empty, we are
done. Otherwise, in the ith round, for each box
we construct a top subtree TB of the BSP for the set SB and
attach it to the corresponding leaf of B i . This gives us the
new top subtree B i+1 . Thus, it suffices to describe how to
build the tree TB on a box B during a round.
Let F ' SB be the set of rectangles in SB that are long
with respect to B. Set k to be the number
of vertices of rectangles in SB that lie in the interior of B
(note that each such vertex is a vertex of an original rectangle
in the input set S). By assumption, all rectangles
in F are non-free. We choose a parameter a, which remains
fixed throughout the round. We pick
log(f+k) to
optimize the size of the BSP that the algorithm creates (see
Section 6). We now describe the ith round in detail.
rectangles in SB are long, we apply
the algorithm described in Section 3 to construct a BSP
for SB . Otherwise, we perform a sequence of cuts in two
stages that partition B as follows:
Separating Stage: We apply the ff-cuts, as described in
Section 3.1. We make these cuts with respect to the
rectangles in F, i.e., we consider only those rectangles
of SB that are long with respect to B. In each box so
formed, if there is a free rectangle, we apply the free
cut along that rectangle. Let C be the resulting set of
boxes.
Dividing Stage: We refine each box C in C by applying
cuts, similar to the ones made in Section 3.2, as described
below. We recursively invoke the dividing
stage on each box that C is partitioned into. Let kC
denote the number of vertices of rectangles in SC that
lie in the interior of C. The set FC is the set of rectangles
in F that are clipped within C .
1. If C has any free rectangle, we use the free
cut containing that rectangle to split C into two
boxes.
2. If jF
a, we do nothing.
3. If the rectangles in FC belong to two classes,
let PC denote the set of vertices of the rectangles
in SC that lie in the interior of C . We apply two
with
4. If the rectangles in FC belong to just one class,
we apply one cut h using Lemma 1, with
and
The cuts introduced during the dividing stage can be
made in a tree-like fashion. At the end of the dividing
stage, we have a set of boxes so that for each
box D in this set, SD does not contain any free rectangle
and jF
a. Notice that as we apply
cuts in C and in the resulting boxes, rectangles that are
short with respect to C may become long with respect to the
new boxes. We ignore these new long rectangles until the
next round, except when they induce a free cut.
6. Analysis of the improved algorithm
We now analyze the size of the BSP constructed by the
algorithm and the time complexity of the algorithm. In a
round, the algorithm constructs a top subtree TB on a box B
for the set of clipped rectangles SB . Recall that F is the set
of rectangles long with respect to B. For a node C in
let TC be the subtree of TB rooted at C , OE C the number of
long rectangles in FC , and nC the number of long rectangles
in SC n FC .
For a box D corresponding to a leaf of TB , let f D be the
number of long rectangles in SD . Note that f D counts both
the "old" long rectangles in FD (pieces of rectangles that
were long with respect to B) and the "new" long rectangles
in SD nFD (pieces of rectangles that were short with respect
to B, but became long with respect to D due to the cuts
made during the round) ; f
Lemma 5 For a box D associated with a leaf of TB , we
have
a:
We know that nD is at most k (since a rectangle
in SD n FD must be a piece of a rectangle short with respect
to B, and there are at most k such short rectangles). Hence,
Since OE D +akD - (f +ak)=2
a; the lemma follows. 2
For a box C in TB , we use the notation LC to denote the
set of leaves in TC .
Lemma 6 Let C be a box associated with a node in TB . If
all rectangles in FC belong to one class, then
a.
Lemma 7 Let C be a box associated with a node in TB . If
all rectangles in FC belong to two classes, then
a.
Lemma 8 The tree TB constructed on box B in a round has
the following properties:
: The bound on
since each vertex
in the interior of SB lies in the interior of at most one
box of LB . Next, we will use Lemmas 6 and 7 to prove a
bound on
D2LB fD . A similar argument will prove the
bound on jL B j.
Let C be the set of boxes into which B is partitioned by
the separating stage. See Figure 4. Let C be a box in C.
Since all rectangles in FC belong to at most two classes,
Lemmas 6 and 7 imply that
Separating Stage
Dividing Stage
z -
Figure
4. The tree TB constructed in a round.
a.
The boxes in C correspond to the leaves of a top subtree
of . Therefore, the total number of long rectangles in the
boxes associated with the leaves of TB is
which by equation (1) is
By an argument similar to the one used to prove Lemma 3,
we have
also know that
Therefore, we obtain
O
:We now bound the size of the BSP constructed by the
algorithm. Let S(f; denote the maximum size of the
BSP produced by the algorithm for a box that contains f
long rectangles and k vertices in its interior. If
Theorem 1 implies that S(f; log f). For the
case k ? 0, by Lemma 8(iii), we construct the subtree
on B of size O(f log a + a 3=2 k) in one round, and recursively
construct subtrees for each box in the set of leaves
LB . Hence, when k ? 0,
where
and f D
a for every box D in LB .
The solution to this recurrence is
where the constant of proportionality in the big-oh term is
linear in log ff. The intuition behind this solution is that
each round increases the number of "old" long rectangles
by at most a constant factor, while also creating O(a 3=2
"new" long rectangles. The depth of each round is O(log a).
Choosing
log(f+k) balances the total increase in the
number of "old" rectangles (over all the rounds) and the
total increase in the number of "new" rectangles.
Since all operations at a node can be performed in time
linear in the number of rectangles at that node, the same
bound can be obtained for the running time of the algorithm.
4n at the beginning of the first round,
we obtain the following theorem.
Theorem 3 Given a set S of n rectangles in IR 3 such that
the aspect ratio of each rectangle in S is bounded by a constant
α ≥ 1, we can construct a BSP of size n · 2^O(√log n)
for S in time n · 2^O(√log n). The constants of proportionality
in the big-oh terms are linear in log α.
7. Extensions
In this section, we show how to modify the algorithm of
Section 5 to handle the following three cases: (i) some of
the rectangles are thin, (ii) some of the rectangles are non-
orthogonal, and (iii) the input consists of intersecting fat
rectangles.
7.1. Fat and thin rectangles
Let us assume that the input rectangles,
consisting of m thin rectangles in T and rectangles
in F.
Given a box B, let f be the number of long rectangles
in FB , let k be the number of vertices of rectangles in FB
that lie in the interior of B, and let t be the number of
rectangles in TB . The algorithm we use now is very similar
to the algorithm for fat rectangles. We fix the parameter
log(f+k) .
1. If SB contains a free rectangle, we use the corresponding
free cut to split B into two boxes.
2. If we use the algorithm for long rectangles
to construct a BSP for B.
3. If t - (f + k), we use the algorithm by Paterson and
Yao for orthogonal rectangles in IR 3 to construct a BSP
4. If (f t, we perform one round of the algorithm
described in Section 5.
This algorithm is recursively invoked on all boxes that B
is split into. Let S(k; f; t) be the size of the BSP produced
by this algorithm for a box with k vertices in
its interior, f long rectangles in FB , and t thin rectangles
in . Analyzing the algorithm's behavior as
in Section 5, we can show that S(k; f;
(see [21] for details), and when (f
O(f log a
The solution to this recurrence is
where the constant of proportionality in the first big-oh term
is linear in log ff. The following theorem is immediate.
Theorem 4 A BSP of size n√m · 2^O(√log n) can be constructed
in n√m · 2^O(√log n) time for n rectangles in IR^3,
of which m are thin. The constants of proportionality in the
big-oh terms are linear in log α, where α is the maximum
aspect ratio of the fat rectangles.
There exists a set of m thin rectangles and n − m fat
rectangles in IR^3 for which any BSP has size Ω(n√m).
7.2. Fat rectangles and non-orthogonal rectangles
Suppose p objects in the input are non-orthogonal and
the rest are fat rectangles. The algorithm we use is very
similar to the algorithm in Section 7.1, except in two places.
In Step 1, we check whether we can make free cuts through
the non-orthogonal objects too. In Step 3, if the number
of non-orthogonal object at a node dominates the number
of fat rectangles, we use Paterson and Yao's algorithm for
triangles in IR 3 to construct a BSP of size quadratic in the
number of objects in cubic time [20].
Theorem 5 A BSP of size np · 2^O(√log n) can be constructed
in np² · 2^O(√log n) time for n objects in IR^3, of
which p are non-orthogonal and the rest are fat rectangles.
The constants of proportionality in the big-oh terms are linear
in log ff, where ff is the maximum aspect ratio of the fat
rectangles.
7.3. Intersecting fat rectangles
We now consider the case when the n fat rectangles contain
intersecting pairs. For each intersecting pair
of rectangles, we break one of the rectangles in the pair into
a constant number of smaller pieces such that the pieces do
not intersect the other rectangle in the pair. This process
creates a total of n rectangles. Some or all of the
"new" O(k) rectangles may be thin. We then use the algorithm
of Section 7.1 to construct a BSP for the rectangles.
The theorem below follows.
Theorem 6 A BSP of size (n + √k) · 2^O(√log n) can be
constructed in (n + √k) · 2^O(√log n) time for n rectangles
in IR^3, which have k intersecting pairs of rectangles. The
constants of proportionality in the big-oh terms are linear
in log α, where α is the maximum aspect ratio of the fat
rectangles.
There exists a set of n rectangles in IR^3, containing k
intersecting pairs, for which any BSP has size Ω(n + √k).
8. Conclusions
Since worst-case complexities for BSPs are very
high (Θ(n²) for n triangles in IR^3 and Θ(n√n) for n orthogonal
rectangles in IR^3) and all known examples that achieve
the worst case use mainly skinny objects, we have made the
natural assumption that objects are fat and have shown that
this assumption allows smaller worst-case size of BSPs. We
have implemented these algorithms. The practical results
are very encouraging and are presented in a companion paper
[1].
It seems very probable that BSPs of size smaller
than n · 2^O(√log n) can be built for n orthogonal rectangles
of bounded aspect-ratio in IR^3. The only lower bound we
have is the Ω(n) bound. It would be interesting to
see if algorithms can be developed to construct BSPs of
optimal size. Similar improvements can be envisioned for
Theorems 4, 5 and 6.
An even more challenging open problem is determining
the right assumptions that should be made about the input
objects and the graphics display hardware so that provably
fast and practically efficient algorithms can be developed
for doing hidden-surface elimination of these objects. A
preliminary investigation into an improved model for graphics
hardware has been made by Grove et al. [15].
Acknowledgments
We would like to thank Seth Teller
for providing us with the Soda Hall dataset created at the
Department of Computer Science, University of California
at Berkeley. We would also like to thank the Walk-through
Project, Department of Computer Science, University
of North Carolina at Chapel Hill for providing us with
the datasets for Sitterson Hall, the Orange United Methodist
Church Fellowship Hall, and the Sitterson Hall Lobby.
--R
Practical methods for constructing binary space partitions for orthogonal objects.
Surface approximation and geometric partitions.
Increasing Update Rates in the Building Walk-through System with Automatic Model-space Subdivision and Potentially Visible Set Calculations
Motion planning using binary space partitions.
Modeling Global Diffuse Illumination for Image Synthesis.
A Subdivision Algorithm for Computer Display of Curved Surfaces.
Near real-time object-precision shadow generation using BSP trees-master thesis
Near real-time shadow generation using bsp trees
Fast object-precision shadow generation for areal light sources using BSP trees
Linear size binary space partitions for fat ob- jects
A survey of object-space hidden surface re- moval
Computer Graphics: Principles and Practice.
On visible surface generation by a priori tree structures.
The object complexity model for hidden-surface elimination
On maximum flows in polyhedral domains.
SCULPT an interactive solid modeling tool.
Merging BSP trees yields polyhedral set operations.
Application of BSP trees to ray-tracing and CSG evaluation
Efficient binary space partitions for hidden-surface removal and solid modeling
Optimal binary space partitions for orthogonal objects.
A characterization of ten hidden surface algorithms.
Visibility Computations in Densely Occluded Polyhedral Environments.
Set operations on polyhedra using binary space partitioning trees.
--TR
--CTR
John Hershberger , Subhash Suri , Csaba D. Toth, Binary space partitions of orthogonal subdivisions, Proceedings of the twentieth annual symposium on Computational geometry, June 08-11, 2004, Brooklyn, New York, USA
Csaba D. Tth, Binary space partitions for line segments with a limited number of directions, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.465-471, January 06-08, 2002, San Francisco, California
Csaba D. T'oth, A note on binary plane partitions, Proceedings of the seventeenth annual symposium on Computational geometry, p.151-156, June 2001, Medford, Massachusetts, United States
Mark de Berg , Micha Streppel, Approximate range searching using binary space partitions, Computational Geometry: Theory and Applications, v.33 n.3, p.139-151, February 2006
B. Aronov , A. Efrat , V. Koltun , Micha Sharir, On the union of -round objects, Proceedings of the twentieth annual symposium on Computational geometry, June 08-11, 2004, Brooklyn, New York, USA | solid modelling;aspect ratio;computer graphics;computational geometry;binary space partitions;rectangles |
352757 | An Optimal Algorithm for Monte Carlo Estimation. | A typical approach to estimate an unknown quantity $\mu$ is to design an experiment that produces a random variable Z, distributed in [0,1] with E[Z]=\mu$, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about Z is known except that is distributed in [0,1]. We describe an approximation algorithm ${\cal A}{\cal A}$ which, given $\epsilon$ and $\delta$, when running independent experiments with respect to any Z, produces an estimate that is within a factor $1+\epsilon$ of $\mu$ with probability at least $1-\delta$. We prove that the expected number of experiments run by ${\cal A}{\cal A}$ (which depends on Z) is optimal to within a constant factor {for every} Z. | Introduction
The choice of experiment, or experimental design, forms an important aspect of statistics.
One of the simplest design problems is the problem of deciding when to stop sampling.
For example, suppose Z₁, Z₂, . . . , Z_N are independently and identically distributed according
to Z in the interval [0, 1] with mean μ_Z. From Bernstein's inequality, we know that if
N is fixed proportional to ln(1/δ)/ε² and S = Z₁ + · · · + Z_N, then with probability at
least 1 − δ the average S/N approximates μ_Z with absolute error ε. Often, however, μ_Z is small,
and a good absolute error estimate of μ_Z is typically a poor relative error approximation of
μ_Z.
We say μ̃_Z is an (ε, δ)-approximation of μ_Z if
Pr[μ_Z(1 − ε) ≤ μ̃_Z ≤ μ_Z(1 + ε)] ≥ 1 − δ.
In engineering and computer science applications we often desire an (ffl; ffi)-approximation
of -Z in problems where exact computation of -Z is NP-hard. For example, many
researchers have devoted substantial effort to the important and difficult problem of approximating
the permanent of valued matrices [1, 4, 5, 9, 10, 13, 14]. Researchers
have also used (ffl; ffi)-approximations to tackle many other difficult problems, such as approximating
probabilistic inference in Bayesian networks [6], approximating the volume
of convex bodies [7], solving the Ising model of statistical mechanics [11], solving for
network reliability in planar multiterminal networks [15, 16], approximating the number
of solutions to a DNF formula [17] or, more generally, to a GF[2] formula [18], and
approximating the number of Eulerian orientations of a graph [19].
Let μ_Z = E[Z] and let σ²_Z denote the variance of Z. Define ρ_Z = max{σ²_Z, ε μ_Z} and
Υ = 4(e − 2) ln(2/δ)/ε².
We first prove a slight generalization of the Zero-One Estimator Theorem [12, 15, 16, 17].
The new theorem, the Generalized Zero-One Estimator Theorem, proves that if S = Z₁ + · · · + Z_N and
N = Υ · ρ_Z/μ_Z²,   (1)
then S/N is an (ε, δ)-approximation of μ_Z.
To apply the Generalized Zero-One Estimator Theorem we require the values of the
unknown quantities -Z and oe 2
Z . Researchers circumvent this problem by computing an
upper bound - on ae Z =- 2
Z , and using - in place of ae Z =- 2
Z to determine a value for N in
Equation (1). An a priori upper bound - on ae Z =- 2
Z that is close to ae Z =- 2
Z is often very
difficult to obtain, and a poor bound leads to a prohibitive large bound on N .
To avoid the problem encountered with the Generalized Zero-One Estimator Theo-
rem, we use the outcomes of previous experiments to decide when to stop iterating.
This approach is known as sequential analysis and originated with the work of Wald
on statistical decision theory [22]. Related research has applied sequential analysis to
specific Monte Carlo approximation problems such as estimating the number of points
in a union of sets [17] and estimating the number of self-avoiding walks [20]. In other
related work, Dyer et al describe a stopping rule based algorithm that provides an upper
bound estimate on -Z [8]. With probability the estimate is at most (1
the estimate can be arbitrarily smaller than - in the challenging case when - is small.
We first describe an approximation algorithm based on a simple stopping rule. Using
the stopping rule, the approximation algorithm outputs an (ffl; ffi)-approximation of -Z
after expected number of experiments proportional to \Upsilon=- Z . The variance of the random
variable Z is maximized subject to a fixed mean -Z if Z takes on value 1 with probability
-Z and 0 with probability . In this case, oe 2
and the expected
number of experiments run by the stopping-rule based algorithm is within a constant
factor of optimal. In general, however, oe
Z is significantly smaller than -Z , and for small
values of oe 2
Z the stopping-rule based algorithm performs 1=ffl times as many experiments
as the optimal number.
We describe a more powerful algorithm, the AA algorithm, that on inputs ffl, ffi, and
independently and identically distributed outcomes Z generated from any random
variable Z distributed in [0; 1], outputs an (ffl; ffi)-approximation of -Z after an expected
number of experiments proportional to \Upsilon \Delta ae Z =- 2
Z . Unlike the simple, stopping-rule based
algorithm, we prove that for all Z, AA runs the optimal number of experiments to within
a constant factor. Specifically, we prove that if BB is any algorithm that produces an
(ffl; ffi)-approximation of -Z using the inputs ffl, ffi, and Z 1 runs an expected
number of experiments proportional to at least \Upsilon \Delta ae Z =- 2
Z . (Canetti, Evan and Goldreich
prove the related lower bound \Omega\Gammand (1=ffi)=ffl 2 ) on the number of experiments required to
approximate -Z with absolute error ffl with probability at least 1 \Gamma ffi [2].) Thus we show
that for any random variable Z, AA runs an expected number of experiments that is
within a constant factor of the minimum expected number.
The AA algorithm is a general method for optimally using the outcomes of Monte-Carlo
experiments for approximation-that is, to within a constant factor, the algorithm
uses the minimum possible number of experiments to output an (ffl; ffi)-approximation on
each problem instance. Thus, AA provides substantial computational savings in applications
that employ a poor upper bound - on ae Z =- 2
Z . For example, the best known a priori
bound on - for the problem of approximating the permanent of size n is superpolynomial
in n [13]. Yet, for many problem instances of size n, the number of experiments run by
AA is significantly smaller than this bound. Other examples exist where the bounds are
also extremely loose for many typical problem instances [7, 10, 11]. In all those applica-
tions, we expect AA to provide substantial computational savings, and possibly render
problems that were intractable, because of the poor upper bounds on ae Z =- 2
Z , amenable
to efficient approximation.
Approximation Algorithm
In Subsection 2.1, we describe a stopping rule algorithm for estimating -Z . This algorithm
is used in the first step of the approximation algorithm AA that we describe in
Subsection 2.2.
2.1 Stopping Rule Algorithm
Let Z be a random variable distributed in the interval [0, 1] with mean μ_Z. Let Z₁, Z₂, . . .
be independently and identically distributed according to Z.
Stopping Rule Algorithm
Input Parameters: (ε, δ) with 0 < ε, 0 < δ < 1
Let Υ = 4(e − 2) ln(2/δ)/ε² and Υ₁ = 1 + (1 + ε)Υ
Initialize N ← 0, S ← 0
While S < Υ₁ do: N ← N + 1; S ← S + Z_N
Output: μ̃_Z ← Υ₁/N
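The following is a direct Python transcription of the stopping rule above. It is a sketch for illustration only: it assumes the constants Υ and Υ₁ as reconstructed here, and `experiment` stands for any user-supplied procedure that returns one outcome of Z in [0, 1].

```python
import math
import random

def stopping_rule_estimate(experiment, eps, delta):
    """Sample until the running sum S reaches Upsilon_1 = 1 + (1 + eps) * Upsilon,
    then return Upsilon_1 / N as the estimate of mu_Z, together with N."""
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
    upsilon1 = 1 + (1 + eps) * upsilon
    n, s = 0, 0.0
    while s < upsilon1:
        n += 1
        s += experiment()
    return upsilon1 / n, n

# Example: estimate the mean of a Bernoulli(0.1) random variable.
if __name__ == "__main__":
    est, runs = stopping_rule_estimate(lambda: float(random.random() < 0.1),
                                       eps=0.05, delta=0.01)
    print(est, runs)
```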
Stopping Rule Theorem: Let Z be a random variable distributed in [0, 1] with
μ_Z > 0. Let μ̃_Z be the estimate produced and let N_Z be the number of experiments
that the Stopping Rule Algorithm runs with respect to Z on input ε and δ. Then,
(1) Pr[μ_Z(1 − ε) ≤ μ̃_Z ≤ μ_Z(1 + ε)] > 1 − δ, and (2) E[N_Z] ≤ (Υ₁ + 1)/μ_Z.
The proof of this theorem can be found in Section 5.
2.2 Approximation Algorithm AA
The (ffl; ffi)-approximation algorithm AA consists of three main steps. The first step uses
the stopping rule based algorithm to produce an estimate -Z that is within a constant
factor of -Z with probability at least 1 \Gamma ffi. The second step uses the value of -
-Z to set
the number of experiments to run in order to produce an estimate -
ae Z that is within a
constant factor of ae with probability at least 1 \Gamma ffi. The third step uses the values of -Z
and -
ae Z produced in the first two steps to set the number of experiments and runs this
number of experiments to produce an (ffl; ffi)-estimate of ~
-Z of -Z .
Let Z be a random variable distributed in the interval [0, 1] with mean μ_Z and variance
σ²_Z. Let Z₁, Z₂, . . . and Z′₁, Z′₂, . . . be two sets of random variables independently and
identically distributed according to Z.
Approximation Algorithm AA
Input Parameters: (ε, δ), with 0 < ε < 1 and 0 < δ < 1
Step 1: Run the Stopping Rule Algorithm using Z₁, Z₂, . . . with parameters min{1/2, √ε}
and δ/3. This produces an estimate μ̂_Z of μ_Z.
Step 2: Set N = Υ₂ · ε/μ̂_Z and initialize S ← 0.
For i = 1, . . . , N: S ← S + (Z′_{2i−1} − Z′_{2i})²/2.
ρ̂_Z ← max{S/N, ε μ̂_Z}.
Step 3: Set N = Υ₂ · ρ̂_Z/μ̂_Z² and initialize S ← 0.
For i = 1, . . . , N: S ← S + Z_i.
μ̃_Z ← S/N.
Output: μ̃_Z
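A Python sketch of AA following the three steps above may be useful. It is illustrative only: the exact value of Υ₂ is fixed in the analysis and is not reproduced here, so the code uses an assumed placeholder proportional to Υ, and `experiment` again stands for a procedure returning one outcome of Z in [0, 1].

```python
import math

def aa_estimate(experiment, eps, delta):
    """Sketch of Approximation Algorithm AA; upsilon2 is an ASSUMED placeholder."""
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
    upsilon2 = 2 * upsilon  # ASSUMPTION: stand-in for Upsilon_2 from the analysis

    def stopping_rule(e, d):
        # Step 1 subroutine: the Stopping Rule Algorithm with parameters (e, d).
        u1 = 1 + (1 + e) * 4 * (math.e - 2) * math.log(2 / d) / e ** 2
        n, s = 0, 0.0
        while s < u1:
            n += 1
            s += experiment()
        return u1 / n

    # Step 1: rough estimate of mu_Z with accuracy min(1/2, sqrt(eps)).
    mu_hat = stopping_rule(min(0.5, math.sqrt(eps)), delta / 3)

    # Step 2: estimate rho_Z = max(sigma^2_Z, eps * mu_Z) from paired samples.
    n = max(1, math.ceil(upsilon2 * eps / mu_hat))
    s = sum((experiment() - experiment()) ** 2 / 2 for _ in range(n))
    rho_hat = max(s / n, eps * mu_hat)

    # Step 3: final estimate with sample size proportional to rho_hat / mu_hat^2.
    n = max(1, math.ceil(upsilon2 * rho_hat / mu_hat ** 2))
    return sum(experiment() for _ in range(n)) / n
```

For a low-variance Z, Step 3 of this sketch uses far fewer samples than the stopping rule alone would, which is the point of estimating ρ_Z in Step 2.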
AA Theorem: Let Z be any random variable distributed in [0, 1], let μ_Z > 0
be the mean of Z, σ²_Z be the variance of Z, and ρ_Z = max{σ²_Z, ε μ_Z}. Let μ̃_Z be the
approximation produced by AA and let N_Z be the number of experiments run by AA
with respect to Z on input parameters ε and δ. Then,
(1) Pr[μ_Z(1 − ε) ≤ μ̃_Z ≤ μ_Z(1 + ε)] ≥ 1 − δ,
(2) There is a universal constant c′ such that Pr[N_Z ≥ c′ Υ · ρ_Z/μ_Z²] ≤ δ, and
(3) There is a universal constant c′ such that E[N_Z] ≤ c′ Υ · ρ_Z/μ_Z².
We prove the AA Theorem in Section 6.
3 Lower Bound
Algorithm AA is able to produce a good estimate of -Z using no a priori information
about Z. An interesting question is what is the inherent number of experiments needed
to be able to produce an (ffl; ffi)-approximation of -Z . In this section, we state a lower
bound on the number of experiments needed by any (ffl; ffi)-approximation algorithm to
estimate -Z when there is no a priori information about Z. This lower bound shows
that, to within a constant factor, AA runs the minimum number of experiments for
every random variable Z.
To formalize the lower bound, we introduce the following natural model. Let BB be any
algorithm that on input (ffl; ffi) works as follows with respect to Z. Let Z
independently and identically distributed according to Z with values in the interval [0; 1].
BB runs an experiment, and on the N th run BB receives the value ZN . The measure of
the running time of BB is the number of experiments it runs, i.e., the time for all other
computations performed by BB is not counted in its running time. BB is allowed to
use any criteria it wants to decide when to stop running experiments and produce an
estimate, and in particular BB can use the outcome of all previous experiments. The
estimate that BB produces when it stops can be any function of the outcomes of the
experiments it has run up to that point. The requirement on BB is that is produces an
(ffl; ffi)-approximation of -Z for any Z.
This model captures the situation where the algorithm can only gather information
about -Z through running random experiments, and where the algorithm has no a priori
knowledge about the value of -Z before starting. This is a reasonable pair of assumptions
for practical situations. It turns out that the assumption about a priori knowledge can
be substantially relaxed: the algorithm may know a priori that the outcomes are being
generated according to some known random variable Z or to some closely related random
variable Z 0 , and still the lower bound on the number of experiments applies.
Note that the approximation algorithm AA fits into this model, and thus the average
number of experiments it runs with respect to Z is minimal for all Z to within a constant
factor among all such approximation algorithms.
Lower Bound Theorem: Let BB be any algorithm that works as described above on
input (ε, δ). Let Z be a random variable distributed in [0, 1], let μ_Z be the mean of Z,
σ²_Z be the variance of Z, and ρ_Z = max{σ²_Z, ε μ_Z}. Let μ̃_Z be the approximation produced by
BB and let N_Z be the number of experiments run by BB with respect to Z. Suppose that
BB has the following properties:
(1) For all Z with μ_Z > 0, E[N_Z] < ∞, and
(2) For all Z with μ_Z > 0, Pr[μ_Z(1 − ε) ≤ μ̃_Z ≤ μ_Z(1 + ε)] ≥ 1 − δ.
Then, there is a universal constant c > 0 such that for all Z, E[N_Z] ≥ c Υ · ρ_Z/μ_Z².
We prove this theorem in Section 7.
4 Preliminaries for the Proofs
We begin with some notation that is used hereafter. Let -
For fixed ff; fi - 0, we define the random variables
The main lemma we use to prove the first part of the Stopping Rule Theorem provides
bounds on the probabilities that the random variables
k are greater than zero.
We first form the sequences of random variables e di
real valued d. We prove that these sequences are supermartingales when 0 - d - 1 and
Z , i.e., for all k ? 0
and similarly,
We then use properties of supermartingales to bound the probabilities that the random
k are greater than zero. For these and subsequent proofs, we use the
following two inequalities:
Inequality 4.1 For all ff, e ff
Inequality 4.2 Let :72. For all ff with jffj - 1,
Lemma 4.3 For jdj - 1, E[e dZ
Z .
Proof: Observe that E[e dZ But from Inequality 4.2,
Taking expectations and applying Inequality 4.1 completes the proof. 2
Lemma 4.4 For 0 - d - 1, and for fi -doe 2
Z , the sequences of random variables
supermartingales.
Proof: For k - 1,
and thus,
Similarly, for k - 1
and thus from Lemma 4.3,
and
Thus, for fi -doe 2
Z ,
and
directly from the properties of conditional expectations and of
martingales.
Lemma 4.5 If j is a supermartingale then for all
This lemma is the key to the proof of the first part of the
Stopping Rule Theorem. In addition, from this lemma we easily prove a slightly more
general version of the Zero-One Estimator Theorem.
Lemma 4.6 For any fixed N ? 0, for any fi - 2-ae Z ,
\GammaN
and
\GammaN
Proof: Recall the definitions of i
N from Equations (3) and (4). Let
Then, the left-hand side of Equation (5) is equivalent to Pr[i
and the left-hand
side of Equation (6) is equivalent to Pr[i \Gamma
N and i 0\Gamma
N .
We now give the remainder of the proof of Equation (5), using i 0+
N , and omit the
remainder of the analogous proof of Equation (6) which uses i 0\Gamma
N in place of i 0+
N . For any
implies that d - 1. Note also that
since ae Z - oe 2
Z . Thus, by Lemma 4.4, e di 0+
N is a supermartingale.
Thus, by Lemma 4.5
E[e di 0+
\GammaN
Since e di 0+
0 is a constant,
E[e di 0+
completing the proof of Equation (5). 2
We use Lemma 4.6 to generalize the Zero-One Estimator Theorem [17] from f0; 1g-valued
random variables to random variables in the interval [0; 1].
Generalized Zero-One Estimator Theorem: Let Z₁, . . . , Z_N be random
variables independent and identically distributed according to Z. If ε < 1 and
N = Υ · ρ_Z/μ_Z², then
Pr[μ_Z(1 − ε) ≤ S/N ≤ μ_Z(1 + ε)] > 1 − δ.
Proof: The proof follows directly from Lemma (4.6), using β = ε μ_Z, noting that
ε μ_Z ≤ 2λρ_Z and that N · (ε μ_Z)²/(4λρ_Z) ≥ ln(2/δ). 2
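For comparison with the sequential algorithms, the fixed-sample-size approach implied by the theorem can be sketched as follows, assuming an a priori upper bound rho_over_mu_sq on ρ_Z/μ_Z² is available (the quantity that, as discussed in the introduction, is usually hard to bound tightly).

```python
import math

def zero_one_sample_size(eps: float, delta: float, rho_over_mu_sq: float) -> int:
    """Number of experiments sufficient for an (eps, delta)-approximation of mu_Z
    when an upper bound on rho_Z / mu_Z^2 is known a priori."""
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
    return math.ceil(upsilon * rho_over_mu_sq)

# For a {0,1}-valued Z with mu_Z >= 0.01 and eps = 0.1, rho_Z/mu_Z^2 <= 1/mu_Z <= 100:
# print(zero_one_sample_size(0.1, 0.05, 100))   # roughly 10^5 experiments
```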
5 Proof of the Stopping Rule Theorem
We next prove the Stopping Rule Theorem. The first part of the proof also follows
directly from Lemma 4.6. Recall that
Stopping Rule Theorem: Let Z be a random variable distributed in [0, 1] with
μ_Z > 0. Let μ̃_Z be the estimate produced and let N_Z be the number of experiments
that the Stopping Rule Algorithm runs with respect to Z on input ε and δ. Then,
(1) Pr[μ_Z(1 − ε) ≤ μ̃_Z ≤ μ_Z(1 + ε)] > 1 − δ, and (2) E[N_Z] ≤ (Υ₁ + 1)/μ_Z.
Proof of Part (1): Recall that ~
It suffices to show that
We first show that Pr[NZ ! \Upsilon 1 =(- Z (1+ ffl))] - ffi=2. Let Assuming
that -Z (1 the definition of \Upsilon 1 and L implies that
Since NZ is an integer, NZ only if NZ - L. But NZ - L if and
only if SL - \Upsilon 1 . Thus,
Noting that ffl- Z - fi - 2-ae Z , Lemma (4.6) implies that
Using inequality (7) and noting that ae Z -Z , it follows that this is at most ffi=2.
The proof that Pr[NZ ? \Upsilon 1 =(- Z
Proof of Part (2): The random variable NZ is the stopping time such that
Using Wald's Equation [22] and E[NZ
and thus,
:Similar to the proof of the first part of the Stopping Rule Theorem, we can show that
and therefore with probability at least 1 \Gamma ffi=2 we require at most (1 experiments
to generate an approximation. The following lemma is used in the proof of the
AA Theorem in Section 6.
Stopping Rule Lemma:
Proof of the Stopping Rule Lemma: E[1=~- Z directly from
Part (2) of the Stopping Rule Theorem and the definition of NZ . E[1=~- 2
can be easily proved based on the ideas used in the proof of Part (2) of the Stopping
Rule Theorem. 2
6 Proof of the AA Theorem
AA Theorem: Let Z be any random variable distributed in [0, 1], let μ_Z > 0 be the mean
of Z, σ²_Z be the variance of Z, and ρ_Z = max{σ²_Z, ε μ_Z}. Let μ̃_Z be the approximation
produced by AA and let N_Z be the number of experiments run by AA with respect to Z
on input parameters ε and δ. Then,
(1) Pr[μ_Z(1 − ε) ≤ μ̃_Z ≤ μ_Z(1 + ε)] ≥ 1 − δ,
(2) There is a universal constant c′ such that Pr[N_Z ≥ c′ Υ · ρ_Z/μ_Z²] ≤ δ, and
(3) There is a universal constant c′ such that E[N_Z] ≤ c′ Υ · ρ_Z/μ_Z².
Proof of Part (1): From the Stopping Rule Theorem, after Step (1) of AA, -Z
ffl) holds with probability at least 1 \Gamma ffi=3. Let
We show next that if -Z
ffl), then in Step (2) the choice of
\Upsilon 2 guarantees that -
ae Z - ae Z =2. Thus, after Steps (1) and (2), \Phi -
ae Z =- 2
Z with
probability at least 1 \Gamma ffi=3. But by the Generalized Zero-One Estimator Theorem, for
ae Z =- 2
ae Z =- 2
Z , Step (3)
guarantees that the output ~ -Z of AA satisfies Pr[-Z
For all i, let - and observe that,
Z . First assume that
Z . If oe 2
ffl)ffl- Z then from the Generalized Zero-One
Estimator Theorem, after at most (2=(1 \Gamma
experiments, ae Z =2 - S=N - 3ae Z =2 with probability at least 1 \Gamma 2ffi=3. Thus -
ae Z - ae Z =2.
If ffl- Z - oe 2
ffl)ffl- Z then ffl- Z - oe 2
ffl)), and therefore, -
ae Z - ffl -Z - ae Z =2.
Next, assume that oe 2
. But Steps (1) and (2) guarantee
that -
ae Z - ffl -
ffl), with probability at least 1 \Gamma ffi=3.
Proof of Part (2): AA may fail to terminate after O(\Upsilon \Delta ae Z =- 2
experiments either
because Step (1) failed with probability at least ffi=2 to produce an estimate -
-Z such
that -Z (1 \Gamma
ffl), or, because in Step (2), for oe 2
ffl)ffl- Z ,
and, S=N is not O(ffl- Z ) with probability at least 1 \Gamma ffi=2.
But Equation 8 guarantees that Step (1) of AA terminates after O(\Upsilon \Delta ae Z =- 2
with probability at least 1 \Gamma ffi=2. In addition, we can show, similarly to Lemma 4.6,
that if oe 2
Thus, for N - 2\Upsilon \Delta ffl=- Z , we have that Pr[S=N - 4ffl- Z
Proof of Part (3): Observe that from the Stopping Rule Theorem, the expected
number of experiments in Step (1) is O(ln(1=ffi)=(ffl- Z )). From the Stopping Rule Lemma,
the expected number of experiments in Step (2) is O(ln(1=ffi)=(ffl- Z )). Finally, in Step (3)
observe that E[-ae Z =- 2
Z
ae Z and -
-Z are computed from disjoint sets
of independently and identically distributed random variables. From the Stopping Rule
Lemma
Z ] is O(ln(1=ffi)=- 2
Furthermore, observe that E[-ae Z
Z and E[ffl- Z
Z and
Thus, the expected number of experiments in Step (3) is O(ln(1=ffi) 2.
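To make the three steps referred to in this proof easier to follow, here is a schematic sketch of AA that reuses the stopping_rule_estimate sketch above; the parameters Upsilon and Upsilon_2 and the exact multipliers below are illustrative assumptions indicating the role of each step, not the paper's precise constants.

import math

def aa_estimate(sample, eps, delta):
    # Schematic three-step structure of AA; thresholds below are illustrative.
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / (eps ** 2)
    upsilon2 = 2 * (1 + math.sqrt(eps)) * (1 + 2 * math.sqrt(eps)) * upsilon
    # Step (1): rough estimate of mu_Z via the stopping rule, run with error sqrt(eps).
    mu_hat, _ = stopping_rule_estimate(sample, min(0.5, math.sqrt(eps)), delta / 3)
    # Step (2): estimate rho_Z = max(sigma_Z^2, eps * mu_Z) from pairs of samples.
    n2 = max(1, int(upsilon2 * eps / mu_hat))
    s = sum((sample() - sample()) ** 2 / 2 for _ in range(n2))
    rho_hat = max(s / n2, eps * mu_hat)
    # Step (3): final estimate from about upsilon2 * rho_hat / mu_hat^2 samples,
    # matching the expected sample size analyzed in Part (3).
    n3 = max(1, int(upsilon2 * rho_hat / mu_hat ** 2))
    return sum(sample() for _ in range(n3)) / n3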
7 Proof of Lower Bound Theorem
Lower Bound Theorem: Let BB be any algorithm that works as described above on
input (ffl; ffi). Let Z be a random variable distributed in [0; 1], let -Z be the mean of Z, oe 2
Z
be the variance of Z, and ae
g. Let ~
-Z be the approximation produced by
BB and let NZ be the number of experiments run by BB with respect to Z. Suppose that
BB has the following properties:
(1) For all Z with -Z ? 0, E[NZ
(2) For all Z with -Z ? 0, Pr[-Z
Then, there is a universal constant c ? 0 such that for all Z, E[NZ
Z .
Let f_Z(x) and f_{Z'}(x) denote two given distinct probability mass (or, in the continuous
case, density) functions. Let Z_1, Z_2, ... be independent and identically distributed
random variables with probability density f(x). Let H_Z denote the hypothesis that f = f_Z
and let H_{Z'} denote the hypothesis that f = f_{Z'}. Let α denote the probability that we reject
H_Z under f_Z and let β denote the probability that we accept H_Z under f_{Z'}.
The sequential probability ratio test minimizes the expected sample size
under both H_Z and H_{Z'} among all tests with the same error probabilities α and β.
Theorem 7.1 states the result of the sequential probability ratio test. We prove the
result for completeness, although similar proofs exist [21].
Theorem 7.1 If T is the stopping time of any test of HZ against H Z 0 with error probabilities
ff and fi, and EZ [T ];
ff
and
ff
Proof: For the independent and identically distributed random variables Z 1
k .
For stopping time T , we get from Wald's first identity
and
Next, let Ω denote the space of all inputs on which the test rejects H_Z, and let Ω^c
denote its complement. Thus, by definition, we require that
Similarly, we require that Pr Z
From the properties
of expectations, we can show that
Z[\Omega
c ];
and we can decompose
T j\Omega\Gamma and observe that from
Inequality 4.1, EZ
But
I\Omega ]=Pr
where I_Ω denotes the characteristic function for the set Ω. Thus, since
Y
we can show that
and finally,
ff
Similarly, we can show that
and,
Thus,
ff
and we prove the first part of the lemma. Similarly,
proves the second part of the lemma. 2
Corollary 7.2 If T is the stopping time of any test of HZ against H Z 0 with error probabilities
ff and fi such that ff
and
Proof: If ff
ff
ff
achieves a minimum at Substitution of completes the proof.
Lemma 7.5 proves the Lower Bound Theorem for oe 2
We begin with some
definitions. Let
Z .
use Inequality 4.2 to show that
Taking expectations completes the proof. 2
Lemma 7.4 2doe 2
Z =/.
denotes the derivative of / with respect to d,
the proof follows directly from Lemma 7.3. 2
Lemma 7.5 If oe 2
Proof: Let T denote the stopping time of any test of HZ against H Z 0 . Note that
Z then, by Lemmas 7.3
-). Thus, to test HZ against H 0
Z , we can use the BB
with input ffl such that -Z (1
for ffl we obtain ffl - ffl=(2(2 ffl). But Corollary 7.2 gives a lower bound on the
expected number of experiments E[N
run by BB with respect to Z. We observe that
where the inequality follows from Lemma 7.3. We let
Z and substitute
to complete the proof. 2
We now prove the Lower Bound Theorem that holds also when oe 2
We
define the density
Lemma 7.6 If -Z - 1=4 then
Proof: Observe that E Z 0
Lemma 7.7 \GammaE Z
Proof: Observe that
Taking expectations completes the proof. 2
Lemma 7.8 If -Z - 1=4 then E[NZ
Proof: Let T denote the stopping time of any test of HZ against H Z 0 . From
Lemma 7.6, and since -Z - 1, =4. Thus, to test HZ against H 0
Z ,
we can use the BB with input ffl such that -Z (1+ ffl
Solving for ffl we obtain ffl - ffl=(8 ffl). But Corollary 7.2 gives a lower bound on the expected
number of experiments E[N
run by BB with respect to Z. Next observe that,
by Lemma 7.7, \Gamma! Z in Corollary 7.2 is at most 2ffl- Z . Substitution of
completes the proof. 2
Proof of Lower Bound Theorem: Follows from Lemma 7.5 and Lemma 7.8. 2
--R
"How hard is it to marry at random? (On the approximation of the permanent)"
"Lower bounds for sampling algorithms for estimating the average "
"An optimal algorithm for Monte-Carlo estimation (extended abstract)"
"Polytopes, permanents and graphs with large factors"
"Approximating the Permanent of Graphs with Large Factors,"
"An optimal approximation algorithm for Bayesian inference"
"A random polynomial time algorithm for approximating the volume of convex bodies"
"A Mildly Exponential Time Algorithm for Approximating the Number of Solutions to a Multi-dimensional Knapsack Problem"
"An analysis of a Monte Carlo algorithm for estimating the permanent"
"Conductance and the rapid mixing property for Markov Chains: the approximation of the permanent resolved"
"Polynomial-time approximation algorithms for the Ising model"
"Random generation of combinatorial structures from a uniform distribution"
"A mildly exponential approximation algorithm for the permanent"
"A Monte-Carlo Algorithm for Estimating the Permanent"
"Monte Carlo algorithms for planar multiterminal network reliability problems"
"Monte Carlo algorithms for enumeration and reliability problems"
"Monte Carlo approximation algorithms for enumeration problems"
"Approximating the Number of Solutions to a GF[2] For- mula,"
"On the number of Eulerian orientations of a graph"
"Testable algorithms for self-avoiding walks"
Sequential Analysis
Sequential Analysis
--TR
--CTR
Michael Luby , Vivek K. Goyal , Simon Skaria , Gavin B. Horn, Wave and equation based rate control using multicast round trip time, ACM SIGCOMM Computer Communication Review, v.32 n.4, October 2002
Ronald Fagin , Amnon Lotem , Moni Naor, Optimal aggregation algorithms for middleware, Journal of Computer and System Sciences, v.66 n.4, p.614-656, 1 June
Fast approximate probabilistically checkable proofs, Information and Computation, v.189 n.2, p.135-159, March 15, 2004 | monte carlo estimation;sequential estimation;stochastic approximation;stopping rule;approximation algorithm |
352762 | Separating Complexity Classes Using Autoreducibility. | A set is autoreducible if it can be reduced to itself by a Turing machine that does not ask its own input to the oracle. We use autoreducibility to separate the polynomial-time hierarchy from exponential space by showing that all Turing complete sets for certain levels of the exponential-time hierarchy are autoreducible but there exists some Turing complete set for doubly exponential space that is not.Although we already knew how to separate these classes using diagonalization, our proofs separate classes solely by showing they have different structural properties, thus applying Post's program to complexity theory. We feel such techniques may prove unknown separations in the future. In particular, if we could settle the question as to whether all Turing complete sets for doubly exponential time are autoreducible, we would separate either polynomial time from polynomial space, and nondeterministic logarithmic space from nondeterministic polynomial time, or else the polynomial-time hierarchy from exponential time.We also look at the autoreducibility of complete sets under nonadaptive, bounded query, probabilistic, and nonuniform reductions. We show how settling some of these autoreducibility questions will also lead to new complexity class separations. | Introduction
While complexity theorists have made great strides in understanding the structure of complexity
classes, they have not yet found the proper tools to do nontrivial separation of complexity classes
such as P and NP. They have developed sophisticated diagonalization, combinatorial and algebraic
techniques but none of these ideas have yet proven very useful in the separation task.
Back in the early days of computability theory, Post [13] wanted to show that the class of noncomputable
computably enumerable sets strictly contains the class of Turing-complete computably enumerable
sets. In what we now call "Post's Program" (see [11, 15]), Post tried to show that these classes differ by
finding a property that holds for all sets in the first class but not for some set in the second.
We would like to resurrect Post's Program for separating classes in complexity theory. In
particular we will show how some classes differ by showing that their complete sets have different
structure. While we do not separate any classes not already separable by known diagonalization
techniques, we feel that refinements to our techniques may yield some new separation results.
In this paper we will concentrate on the property known as "autoreducibility." A set A is
autoreducible if we can decide whether an input x belongs to A in polynomial time by making
queries to A about the membership of strings different from x.
Trakhtenbrot [16] first looked at autoreducibility in both the unbounded and space-bounded
models. Ladner [10] showed that there exist Turing-complete computably enumerable sets that are
not autoreducible. Ambos-Spies [1] first transferred the notion of autoreducibility to the polynomial-time
setting. More recently, Yao [19] and Beigel and Feigenbaum [5] have studied a probabilistic
variant of autoreducibility known as "coherence."
In this paper, we ask for which complexity classes all the complete sets have the autoreducibility
property. In particular we show:
• All Turing-complete sets for $\Delta^{EXP}_{k+1}$ are autoreducible for any constant $k$, where $\Delta^{EXP}_{k+1}$ denotes
the class of sets that are exponential-time Turing-reducible to $\Sigma^p_k$.
• There exists a Turing-complete set for doubly exponential space that is not autoreducible.
Since the union of all the classes $\Delta^{EXP}_k$ coincides with the exponential-time hierarchy, we obtain a separation
of the exponential-time hierarchy from doubly exponential space, and thus of the polynomial-time
hierarchy from exponential space. Although these results also follow from the space hierarchy
theorems [9], which we have known for a long time, our proof does not directly use diagonalization,
but rather separates the classes by showing that they have different structural properties.
Issues of relativization do not apply to this work because of oracle access (see [8]): A polynomial-time
autoreduction can not view as much of the oracle as an exponential or doubly exponential
computation. To illustrate this point we show that there exists an oracle relative to which some
complete set for exponential time is not autoreducible.
Note that if we can settle whether the Turing-complete sets for doubly exponential time are
all autoreducible one way or the other, we will have a major separation result. If there exists
a Turing-complete set for doubly exponential time that is not autoreducible, then we get that the
exponential-time hierarchy is strictly contained in doubly exponential time thus that the polynomial-time
hierarchy is strictly contained in exponential time. If all of the Turing-complete sets for doubly
exponential time are autoreducible, we get that doubly exponential time is strictly contained in doubly
exponential space, and thus polynomial time strictly in polynomial space. We will also show that
this assumption implies a separation of nondeterministic logarithmic space from nondeterministic
polynomial time. Similar implications hold for space bounded classes (see Section 5). Autoreducibil-
ity questions about doubly exponential time and exponential space thus remain an exciting line of
research.
We also study the nonadaptive variant of the problem. Our main results scale down one exponential
as follows:
• All truth-table-complete sets for $\Delta^p_{k+1}$ are truth-table-autoreducible for any constant $k$, where $\Delta^p_{k+1}$
denotes the class of sets that are polynomial-time Turing-reducible to $\Sigma^p_k$.
• There exists a truth-table-complete set for exponential space that is not truth-table-autoreducible.
Again, finding out whether all truth-table-complete sets for the intermediate classes, namely polynomial
space and exponential time, are truth-table-autoreducible would have major implications.
In contrast to the above results we exhibit the limitations of our approach: for the restricted
reducibility where we are only allowed to ask two nonadaptive queries, all complete sets for EXP,
EXPSPACE, EEXP, EEXPSPACE, etc., are autoreducible.
We also argue that uniformity is crucial for our technique of separating complexity classes, because
our nonautoreducibility results fail in the nonuniform setting. Razborov and Rudich [14] show
that if strong pseudo-random generators exist, inatural proofsj can not separate certain nonuniform
complexity classes. Since this paper relies on uniformity in an essential way, their result does not
apply.
Regarding the probabilistic variant of autoreducibility mentioned above, we can strengthen
our results and construct a Turing-complete set for doubly exponential space that is not even
probabilistically autoreducible. We leave the analogue of this theorem in the nonadaptive setting
open: Does there exist a truth-table complete set for exponential space that is not probabilistically
truth-table autoreducible? We do show that every truth-table complete set for exponential time
is probabilistically truth-table autoreducible. So, a positive answer to the open question would
establish that exponential time is strictly contained in exponential space. A negative answer, on the
other hand, would imply a separation of nondeterministic logarithmic space from nondeterministic
polynomial time.
Here is the outline of the paper: First, we introduce our notation and state some preliminaries in
Section 2. Next, in Section 3 we establish our negative autoreducibility results, for the adaptive as
well as the nonadaptive case. Then we prove the positive results in Section 4, where we also briefly
look at the randomized and nonuniform settings. Section 5 discusses the separations that follow
from our results and would follow from improvements on them. Finally, we conclude in Section 6
and mention some possible directions for further research.
1.1 Errata to conference version
A previous version of this paper [6] erroneously claimed proofs showing all Turing complete sets
for EXPSPACE are autoreducible and all truth-table complete sets for PSPACE are nonadaptively
autoreducible. Combined with the additional results in this version, we would have a separation of
NL and NP (see Section 5).
However the proofs in the earlier version failed to account for the growth of the running time
when recursively computing previous players' moves. We use the proof technique in Section 3 though
unfortunately we get weaker theorems. The original results claimed in the previous version remain
important open questions as resolving them either way will yield new separation results.
Notation and Preliminaries
Most of our complexity theoretic notation is standard. We refer the reader to the textbooks by
Balc#zar, D#az and Gabarr# [4, 3], and by Papadimitriou [12].
We use the binary alphabet $\Sigma = \{0, 1\}$. We denote the difference of a set $A$ with a set $B$, i.e.,
the subset of elements of $A$ that do not belong to $B$, by $A \setminus B$.
For any integer $k > 0$, a $\Sigma_k$-formula is a Boolean expression of the form
$(\exists x_{1,1} \ldots x_{1,n_1})(\forall x_{2,1} \ldots x_{2,n_2}) \cdots (Q_k\, x_{k,1} \ldots x_{k,n_k})\ \phi(x_{1,1}, \ldots, x_{k,n_k}), \qquad (1)$
where $\phi$ is a Boolean formula, $Q_i$ denotes $\exists$ if $i$ is odd and $\forall$ otherwise, and the $n_i$'s are positive
integers. We say that (1) has $k-1$ alternations. A $\Pi_k$-formula is just like (1) except that it starts with a
$\forall$-quantifier. It also has $k-1$ alternations. A $QBF_k$-formula is a $\Sigma_k$-formula (1) or a $\Pi_k$-formula.
For any integer $k > 0$, $\Sigma^p_k$ denotes the $k$-th $\Sigma$-level of the polynomial-time hierarchy. We
define these levels recursively by $\Sigma^p_1 = \mathrm{NP}$ and $\Sigma^p_{k+1} = \mathrm{NP}^{\Sigma^p_k}$. The $\Delta$-levels of the polynomial-time
and exponential-time hierarchies are defined as $\Delta^p_{k+1} = \mathrm{P}^{\Sigma^p_k}$ and $\Delta^{EXP}_{k+1} = \mathrm{EXP}^{\Sigma^p_k}$, respectively. The
polynomial-time hierarchy PH equals the union of all the classes $\Delta^p_k$, and the exponential-time hierarchy
EXPH is similarly the union of all the classes $\Delta^{EXP}_k$.
A reduction of a set $A$ to a set $B$ is a polynomial-time oracle Turing machine $M$ such that
$M^B = A$. We say that $A$ reduces to $B$ and write $A \le^p_T B$ (T for Turing). The reduction $M$
is nonadaptive if the oracle queries $M$ makes on any input are independent of the oracle, i.e., the
queries do not depend upon the answers to previous queries. In that case we write $A \le^p_{tt} B$ (tt
for truth-table). Reductions of functions to sets are defined similarly. If the number of queries on
an input of length $n$ is bounded by $q(n)$, we write $A \le^p_{q(n)-T} B$ respectively $A \le^p_{q(n)-tt} B$; if it is
bounded by some constant, we write $A \le^p_{bT} B$ respectively $A \le^p_{btt} B$. We denote the set of queries of
$M$ on input $x$ with oracle $B$ by $Q_{M^B}(x)$; in case of nonadaptive reductions, we omit the oracle $B$ in
the notation. If the reduction asks only one query and answers the answer to that query, we write
$A \le^p_m B$.
For any reducibility $\le^p_r$ and any complexity class $\mathcal{C}$, a set $C$ is $\le^p_r$-hard for $\mathcal{C}$ if we can $\le^p_r$-reduce
every set $A \in \mathcal{C}$ to $C$. If in addition $C \in \mathcal{C}$, we call $C$ $\le^p_r$-complete for $\mathcal{C}$. For any integer $k > 0$, the
set $TQBF_k$ of all true $QBF_k$-formulae is $\le^p_m$-complete for $\Sigma^p_k$. For $k = 1$ this reduces to the fact
that the set SAT of satisfiable Boolean formulae is $\le^p_m$-complete for NP.
Now we get to the key concept of this paper:
De-nition 2.1 A set A is autoreducible if there is a reduction M of A to itself that never queries its
own input, i.e., for any input x and any oracle B, x 62 Q M B(x). We call such M an autoreduction
of A.
We will also discuss randomized and nonuniform variants. A set is probabilistically autoreducible if it
has a probabilistic autoreduction with bounded two-sided error. Yao [19] -rst studied this concept
under the name icoherencej. A set is nonuniformly autoreducible if it has an autoreduction that uses
polynomial advice. For all these notions, we can consider both the adaptive and the nonadaptive
case. For randomized autoreducibility, nonadaptiveness means that the queries only depend on the
input and the random seed.
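As a concrete illustration of Definition 2.1, the following small sketch enforces the requirement that an autoreduction never queries its own input; the wrapper and the example reduction are hypothetical and only meant to fix the interface.

def run_autoreduction(reduction, oracle, x):
    # `reduction(x, query)` decides x by querying the oracle on strings of its
    # choice; the wrapper rejects any attempt to query the input itself.
    def query(q):
        if q == x:
            raise ValueError("an autoreduction may not query its own input")
        return oracle(q)
    return reduction(x, query)

# Toy example: the join A = {0w : w in B} union {1w : w in B} is autoreducible,
# since 0w and 1w carry the same information (assume x is nonempty).
def join_autoreduction(x, query):
    return query(("1" if x[0] == "0" else "0") + x[1:])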
3 Nonautoreducibility Results
In this section, we show that large complexity classes have complete sets that are not autoreducible.
Theorem 3.1 There is a $\le^p_{2-T}$-complete set for EEXPSPACE that is not autoreducible.
Most natural classes containing EEXPSPACE, e.g., EEEXPTIME and EEEXPSPACE, also have
this property.
We can even construct the complete set in Theorem 3.1 to defeat every probabilistic autoreduc-
tion:
Theorem 3.2 There is a $\le^p_{2-T}$-complete set for EEXPSPACE that is not probabilistically autoreducible.
In the nonadaptive setting, we obtain:
Theorem 3.3 There is a $\le^p_{3-tt}$-complete set for EXPSPACE that is not nonadaptively autoreducible.
Unlike the case of Theorem 3.1, our construction does not seem to yield a truth-table complete set
that is not probabilistically nonadaptively autoreducible. In fact, as we shall show in Section 4.3,
such a result would separate EXP from EXPSPACE. See also Section 5.
We will detail in Section 4.3 that our nonautoreducibility results do not hold in the nonuniform
setting.
3.1 Adaptive Autoreductions
Suppose we want to construct a nonautoreducible Turing-complete set for a complexity class C, i.e.,
a set A such that:
1. A is not autoreducible.
2. A is Turing-hard for C.
3. A belongs to C.
If C has a 6 P
m -complete set K, realizing goals 1 and 2 is not too hard: We can encode K in A,
and at the same time diagonalize against all autoreductions. A straightforward implementation
would be to encode K(y) as A(h0; yi), and stage-wise diagonalize against all 6 P
-reductions M by
picking for each M an input x not of the form h0; yi that is not queried during previous stages, and
setting (x). However, this construction does not seem to achieve the third goal.
In particular, deciding the membership of a diagonalization string x to A might require computing
inputs y of length jxj c , assuming M runs in time n c . Since we have to do this
for all potential autoreductions M , we can only bound the resources (time, space) needed to decide
A by a function in t(n !(1) ), where t(n) denotes the amount of resources some deterministic Turing
machine accepting K uses. That does not suOEce to keep A inside C.
To remedy this problem, we will avoid the need to compute K(y) on large inputs y, say of length
at least jxj. Instead, we will make sure we can encode the membership of such strings to any set,
not just K, and at the same time diagonalize against M on input x. We will argue that we can do
this by considering two possible coding regions at every stage as opposed to a -xed one: the left
region L, containing strings of the form h0; yi, and the right region R, similarly containing strings
of the form h1; yi. The following states that we can use one of the regions to encode an arbitrary
sequence, and set the other region such that the output of M on input x is -xed and indicates the
region used for encoding.
Either it is the case that for any setting of L there is a setting of R such that M A (x)
accepts, or for any setting of R there is a setting of L such that M A (x) rejects. (*)
This allows us to achieve goals 1 and 2 from above as follows. In the former case, we will set
encode K in L (at that stage); otherwise we will set encode K in R.
Since the value of A(x) does not aoeect the behavior of M A on input x, we diagonalize against M
in both cases. Also, in any case
so deciding K is still easy when given A. Moreover - and crucially - in order to compute A(x), we
no longer have to decide K(y) on large inputs y, of length jxj or more. Instead, we have to check
whether the former case in (*) holds or not. Although quite complex a task, it only depends on
M and on the part of A constructed so far, not on the value of K(y) for any input of length jxj
or more: We verify whether we can encode any sequence, not just the characteristic sequence of
K for lengths at least jxj, and at the same time diagonalize against M on input x. Provided the
complexity class C is suOEciently powerful, we can perform this task in C.
There is still a catch, though. Suppose we have found out that the former case in (*) holds.
Then we will use the left region L to encode K (at that stage), and we know we can diagonalize
against M on input x by setting the bits of the right region R appropriately. However, deciding
exactly how to set these bits of the noncoding region requires, in addition to determining which
region we should use for coding, the knowledge of K(y) for all y such that jxj 6 jyj 6 jxj c . In order
to also circumvent the need to decide K for too large inputs here, we will use a slightly stronger
version of (*) obtained by grouping quanti-ers into blocks and rearranging them. We will partition
the coding and noncoding regions into intervals. We will make sure that for any given interval, the
length of a string in that interval (or any of the previous intervals) is no more than the square of the
length of any string in that interval. Then we will block-wise alternately set the bits in the coding
region according to K, and the corresponding ones in the noncoding region so as to maintain the
diagonalization against M on input x as in (*). This way, in order to compute the bit A(h1; zi)
of the noncoding region, we will only have to query K on inputs y with jyj 6 jzj 2 , as opposed to
for an arbitrarily large c depending on M as was the case before.
This is what happens in the next lemma, which we prove in a more general form, because we
will need the generalization later on in Section 5.
Lemma 3.4 Fix a set $K$, and suppose we can decide it simultaneously in time $t(n)$ and space
$s(n)$. Let $\beta$ be a constructible monotone unbounded function, and suppose there is
a deterministic Turing machine accepting TQBF that takes time $t'(n)$ and space $s'(n)$ on QBF-formulae
of size $2^{n^{\beta(n)}}$ with at most $2 \log \beta(n)$ alternations. Then there is a set $A$ such that:
1. $A$ is not autoreducible.
2. $K \le^p_{2-T} A$.
3. We can decide $A$ simultaneously in time $O(2^{n^2} \cdot t(n^2) + 2^n \cdot t'(n))$ and space $O(2^{n^2} + s(n^2) + s'(n))$.
Proof (of Lemma 3.4)
Fix a function fi satisfying the hypotheses of the Lemma. Let M be a standard enumeration
of autoreductions clocked such that M i runs in time n fi(i) on inputs of length n. Our construction
starts out with A being the empty set, and then adds strings to A in subsequent stages
de-ned by the following sequence:
Note that since M i runs in time n fi(i) , M i can not query strings of length n i+1 or more on input 0 n i .
Fix an integer i ? 1 and let . For any integer j such that 0 6 j 6 log fi(m), let I j
denote the set of all strings with lengths in the interval [m 2 j
1)). Note that
log fi(m)
forms a partition of the set I of strings with lengths in [m; m fi(m)
the property that for any 0 6 k 6 log fi(m), the length of any string in [ k
I j is no more than the
square of the length of any string in I k .
During the i-th stage of the construction, we will encode the restriction Kj I of K to I into
use the string 0 m for diagonalizing against M i , applying the next
strengthening of (*) to do so:
3.1 For any set A, at least one of the following holds:
or
Here we use $(Q\, z_y)_{y \in Y}$ as a shorthand for $Q\, z_{y_1}\, Q\, z_{y_2} \cdots Q\, z_{y_{|Y|}}$, where $y_1, y_2, \ldots, y_{|Y|}$ enumerate $Y$, and
all variables are quantified over $\{0, 1\}$. Without loss of generality we assume that the range of the
pairing function $\langle \cdot, \cdot \rangle$ is disjoint from $0^*$.
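Spelled out, the two alternatives of Claim 3.1 have roughly the following shape; this is a hedged reading based on the surrounding discussion of the left region $\ell$, the right region $r$, and the intervals $I_0, \ldots, I_{\log \beta(m)}$.

(\forall \ell_y)_{y \in I_0} (\exists r_y)_{y \in I_0} \cdots (\forall \ell_y)_{y \in I_{\log\beta(m)}} (\exists r_y)_{y \in I_{\log\beta(m)}} :\ M_i^{A'}(0^m) \text{ accepts} \qquad (2)

or

(\forall r_y)_{y \in I_0} (\exists \ell_y)_{y \in I_0} \cdots (\forall r_y)_{y \in I_{\log\beta(m)}} (\exists \ell_y)_{y \in I_{\log\beta(m)}} :\ M_i^{A'}(0^m) \text{ rejects}, \qquad (3)

where in both cases $A' = A \cup \{\langle 0, y\rangle : y \in I,\ \ell_y = 1\} \cup \{\langle 1, y\rangle : y \in I,\ r_y = 1\}$.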
Proof (of Claim 3.1)
Fix A. If (2) does not hold, then its negation holds, i.e,
Switching the quanti-ers log fi(m) in (4) yields
the weaker statement (3).
if formula (2) holds
then for j = 0, 1, ..., log β(m)
         ℓ_y ← K(y) for every y ∈ I_j
         (r_y)_{y ∈ I_j} ← the lexicographically first value satisfying
             (∀ ℓ_y)_{y ∈ I_{j+1}} (∃ r_y)_{y ∈ I_{j+1}} ... (∀ ℓ_y)_{y ∈ I_{log β(m)}} (∃ r_y)_{y ∈ I_{log β(m)}} : M_i^{A'}(0^m) accepts,
             where A' = A ∪ {⟨0, y⟩ : y ∈ I, ℓ_y = 1} ∪ {⟨1, y⟩ : y ∈ I, r_y = 1}
     endfor
     A ← A'
else for j = 0, 1, ..., log β(m)
         r_y ← K(y) for every y ∈ I_j
         (ℓ_y)_{y ∈ I_j} ← the lexicographically first value satisfying
             (∀ r_y)_{y ∈ I_{j+1}} (∃ ℓ_y)_{y ∈ I_{j+1}} ... (∀ r_y)_{y ∈ I_{log β(m)}} (∃ ℓ_y)_{y ∈ I_{log β(m)}} : M_i^{A'}(0^m) rejects,
             where A' = A ∪ {⟨0, y⟩ : y ∈ I, ℓ_y = 1} ∪ {⟨1, y⟩ : y ∈ I, r_y = 1}
     endfor
     A ← A' ∪ {0^m}
endif

Figure 1: Stage i of the construction of the set A in Lemma 3.4
Figure
1 describes the i-th stage in the construction of the set A. Note that the lexicographically
-rst values in this algorithm always exist, so the construction works -ne. We now argue that the
resulting set A satis-es the properties of Lemma 3.4.
1. The construction guarantees that by the end of stage
Since M i on input 0 m can not query 0 m (because M i is an autoreduction) nor any of the
strings added during subsequent stages (because M i does not even have the time to write
down any of these strings), A(0 m
holds for the -nal set A. So, M i is not an
autoreduction of A. Since this is true of any autoreduction M i , the set A is not autoreducible.
2. During stage i, we encode Kj I in the left region ioe we do not put 0 m into A; otherwise
we encode Kj I in the right region. So, for any y 2 I , Therefore,
A.
3. First note that A only contains strings of the form 0 m with
and strings of the form hb; yi with b 2 f0; 1g and y 2 \Sigma .
Assume we have executed the construction of A up to but not including stage i. The additional
work to decide the membership to A of a string belonging to the i-th stage, is as follows.
case
does not hold and (2) is a QBF 2 log fi(m) -formula of size
case hb; zi where
Then hb; zi 2 A ioe z 2 K, which we can decide in time t(jzj).
case hb; zi where
Say z 2 I k , 0 6 k 6 log fi(m). In order to compute whether hb; zi 2 A, we run the part of
stage i corresponding to the values of j in Figure 1 up to and including k. This involves
computing K on [ k
I j and deciding O(2 jzj ) QBF 2 log fi(m) -formulae of size O(2 m fi(m) ).
The latter takes O(2 jzj \Delta t 0 (m)) time. Since every string in [ k
I j is of size no more than
, we can do the former in time O(2 jzj 2
A similar analysis also shows that we can perform the stages up to but not including i in time
All together, this yields the time bound claimed for A. The analysis of the space complexity
is analogous.
(Lemma
Using the upper bound 2 n fi(n) for s 0 (n), the smallest standard complexity class to which Lemma
3.4 applies, seems to be EEXPSPACE. This results in Theorem 3.1.
Proof (of Theorem 3.1)
In Lemma 3.4, set K a 6 P
m -complete set for EEXPSPACE, and
In section 4.2, we will see that 6 P
2\GammaT in the statement of Theorem 3.1 is optimal: Theorem 4.5
shows that Theorem 3.1 fails for 6 P
2\Gammatt .
We note that the proof of Theorem 3.1 carries through for 6 EXPSPACE
-reductions with polynomially
bounded query lengths. This implies the strengthening given by Theorem 3.2.
3.2 Nonadaptive Autoreductions
Diagonalizing against nonadaptive autoreductions M is easier. If M runs in time -(n), there can
be no more than -(n) coding strings that interfere with the diagonalization, as opposed to 2 -(n) in
the adaptive case. This allows us to reduce the complexity of the set constructed in Lemma 3.4 as
follows.
Lemma 3.5 Fix a set K, and suppose we can decide it simultaneously in time t(n) and space
s(n). be a constructible monotone unbounded function, and suppose there is
a deterministic Turing machine accepting TQBF that takes time t 0 (n) and space s 0 (n) on QBF-
formulae of size n fi(n) with at most 2 log fi(n) alternations. Then there is a set A such that:
1. A is not nonadaptively autoreducible.
2. K 6 P
A.
3. We can decide A simultaneously in time O(2 n \Delta (t(n 2 )+ t 0 (n))) and space O(2 n +s(n 2 )+s 0 (n)).
Proof (of Lemma 3.5)
The construction of the set A is the same as in Lemma 3.4 (see Figure 1) apart from the following
dioeerences:
now is a standard enumeration of nonadaptive autoreductions clocked such that
runs in time n fi(i) on inputs of length n. Note that the set QM (x) of possible queries M
makes on input x contains no more than jxj fi(i) elements.
ffl During stage i ? 1 of the construction, I denotes the set of all strings y with lengths in
denotes the set of strings in I with lengths in [m
Note that the only ' y 's and r y 's that aoeect the validity of the predicate iM A 0
formula (2) and the corresponding formulae in Figure 1, are those for which y 2 I .
ffl At the end of stage i in Figure 1, we add the following line:
A
This ensures coding K(y) for strings y with lengths in [n are
queried by M i on input 0 m . Although not essential, we choose to encode them in both the
left and the right region.
Adjusting time and space bounds appropriately, the proof that A satis-es the 3 properties claimed,
carries over. There is an additional case in the analysis of the third point, namely the one of an
input of the form hb; zi where b 2 I . Then, by construction,
which we can decide in time t(jzj). The crucial simpli-cation over the adaptive
case lies in the fact that (2) and the similar formulae in Figure 1 now become QBF 2 log fi(n) -formulae
of size O(n fi(n) ) as opposed to of size O(2 n fi(n) ) in Lemma 3.4. (Lemma 3.5)
As a consequence, we can lower the space complexity in the equivalent of Theorem 3.1 from
doubly exponential to singly exponential, yielding Theorem 3.3. In section 4.2, we will show we can
not reduce the number of queries from 3 to 2 in Theorem 3.3.
If we restrict the number of queries the nonadaptive autoreduction is allowed to make to some
-xed polynomial, the proof technique of Theorem 3.3 also applies to EXP. In particular, we obtain:
Theorem 3.6 There is a $\le^p_{3-tt}$-complete set for EXP that is not $\le^p_{btt}$-autoreducible.
4 Autoreducibility Results
For small complexity classes, all complete sets turn out to be autoreducible. Beigel and Feigenbaum
[5] established this property of all levels of the polynomial-time hierarchy as well as of PSPACE,
the largest class for which it was known to hold before our work. In this section, we will prove it
for the \Delta-levels of the exponential-time hierarchy.
As to nonadaptive reductions, the question was even open for all levels of the polynomial-time
hierarchy. We will show here that the 6 P
tt -complete sets for the \Delta-levels of the polynomial-time
hierarchy are nonadaptively autoreducible. For any complexity class containing EXP, we will prove
that the 6 P
2\Gammatt -complete sets are 6 P
2\Gammatt -autoreducible.
Finally, we will also consider nonuniform and randomized autoreductions.
Throughout this section, we will assume without loss of generality an encoding fl of a computation
of a given oracle Turing machine M on a given input x with the following properties. fl will
be a marked concatenation of successive instantaneous descriptions of M , starting with the initial
instantaneous description of M on input x, such that:
ffl Given a pointer to a bit in fl, we can -nd out whether that bit represents the answer to an
oracle query by probing a constant number of bits of fl.
ffl If it is the answer to an oracle query, the corresponding query is a substring of the pre-x of
fl up to that point, and we can easily compute a pointer to the beginning of that substring
without probing fl any further.
ffl If it is not the answer to an oracle query, we can perform a local consistency check for that
bit which only depends on a constant number of previous bit positions of fl and the input x.
Formally, there exist a function g M and a predicate e M , both polynomial-time computable,
and a constant c M such that the following holds: For any input x, any index i to a bit position
in fl, and any j, 1 is an index no larger than i, and
indicates whether fl passes the local consistency test for its i-th bit fl i . Provided the pre-x
of fl up to but not including position i is correct, the local consistency test is passed ioe fl i is
correct.
We call such an encoding a valid computation of M on input x ioe the local consistency tests (5) for
all the bit positions i that do not correspond to oracle answers, are passed, and the other bits equal
the oracle's answer to the corresponding query. Any other string we will call a computation.
4.1 Adaptive Autoreductions
We will -rst show that every 6 P
T -complete set for EXP is autoreducible, and then generalize to all
\Delta-levels of the exponential-time hierarchy.
Theorem 4.1 Every $\le^p_T$-complete set for EXP is autoreducible.
Here is the proof idea: For any of the standard deterministic complexity classes C, we can decide
each bit of the computation on a given input x within C. So, if A is a 6 P
T -complete set for C that
can be decided by a machine M within the con-nes of the class C, then we can 6 P
-reduce deciding
the i-th bit of the computation of M on input x to A. Now, consider the two (possibly invalid)
computations we obtain by applying for every bit position the above reduction, answering all queries
except for x according to A, and assuming x 2 A for one computation, and x 62 A for the other.
Note that the computation corresponding to the right assumption about A(x), is certainly
correct. So, if both computations yield the same answer (which we can eOEciently check using A
without querying x), that answer is correct. If not, the other computation contains a mistake. We
can not check both computations entirely to see which one is right, but given a pointer to the -rst
incorrect bit of the wrong computation, we can eOEciently verify that it is mistaken by checking only
a constant number of bits of that computation. The pointer is again computable within C.
In case C ' EXP, using a 6 P
T -reduction to A and assuming x 2 A or x 62 A as above, we
can determine the pointer with oracle A (but without querying x) in polynomial time, since the
pointer's length is polynomially bounded.
We now -ll out the details.
Proof (of Theorem 4.1)
Fix a 6 P
T -complete set A for EXP. Say A is accepted by a Turing machine M such that the
computation of M on an input of size n has length 2 p(n) for some -xed polynomial p. Without loss
of generality the last bit of the computation gives the -nal answer. Let g M , e M , and c M be the
formalization of the local consistency test for M as described by (5).
Let -(hx; ii) denote the i-th bit of the computation of M on input x. We can compute - in
EXP, so there is an oracle Turing machine R - 6 P
-reducing - to A.
Let oe(x) be the -rst such that R Anfxg
exists. Again, we can compute oe in EXP, so there is a 6 P
T -reduction R oe from oe to A.
Consider the algorithm in Figure 2 for deciding A on input x. The algorithm is a polynomial-time
oracle Turing machine with oracle A that does not query its own input x. We now argue that
it correctly decides A on input x. We distinguish between two cases:
case R Anfxg
Since at least one of the computations R Anfxg
- (hx; \Deltai) or R A[fxg
coincides with the actual
computation of M on input x, and the last bit of the computation equals the -nal decision,
correctness follows.
if R Anfxg
then accept ioe R A[fxg
else i / R A[fxg
oe (x)
accept ioe e M (x;
endif
Figure
2: Autoreduction for the set A of Theorem 4.1 on input x
case R Anfxg
contains a mistake. Variable i gets the
correct value of the index of the -rst incorrect bit in this computation, so the local consistency
test for R Anfxg
being the computation of M on input x fails on the i-th bit, and we
accept x.
If x 62 A, R Anfxg
- (hx; \Deltai) is a valid computation, so no local consistency test fails, and we reject
x.
(Theorem 4.1)
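The structure of this autoreduction can be summarized in the following sketch; the helper names bit_red, ptr_red, local_check, and window are illustrative stand-ins for the reductions and the local consistency test used in the proof, not the paper's notation.

def exp_autoreduction(x, oracle, bit_red, ptr_red, local_check, last_index, window):
    # bit_red(i, ask)  -- reduction computing the i-th bit of M's computation on x,
    #                     where `ask` answers oracle queries (never x itself);
    # ptr_red(ask)     -- reduction computing a pointer to the first incorrect bit
    #                     of the candidate computation that assumes x is not in A;
    # local_check(i, bits) -- the constant-size local consistency test for bit i;
    # window(i)        -- the constantly many positions that test inspects.
    def ask(assume_in):
        return lambda q: assume_in if q == x else oracle(q)

    out_no = bit_red(last_index, ask(False))
    out_yes = bit_red(last_index, ask(True))
    if out_no == out_yes:
        # Both candidate computations give the same final answer; it must be correct.
        return out_yes
    # Otherwise the pessimistic candidate is wrong somewhere; locate its first
    # mistake with help from A and verify it by the local consistency test.
    i = ptr_red(ask(True))
    bits = [bit_red(j, ask(False)) for j in window(i)]
    # Accept exactly when the local test fails: that can only happen if x is in A.
    return not local_check(i, bits)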
The local checkability property of computations used in the proof of Theorem 4.1 does not
relativize, because the oracle computation steps depend on the entire query, i.e., on a number of
bits that is only limited by the resource bounds of the base machine, in this case exponentially
many. We next show that Theorem 4.1 itself also does not relativize.
Theorem 4.2 Relative to some oracle, EXP has a $\le^p_{2-T}$-complete set that is not autoreducible.
Proof
Note that EXP has the following property:
Property 4.1 There is an oracle Turing machine N running in EXP such that for any oracle B,
the set accepted by N B is 6 P
m -complete for EXP B .
Without loss of generality, we assume that N runs in time 2 n . Let K B denote the set accepted by
We will construct an oracle B and a set A such that A is 6 P
2\GammaT -complete for EXP B and is not
The construction of A is the same as in Lemma 3.4 (see Figure 1) with
for that the reductions M i now also have access to the oracle B.
We will encode in B information about the construction of A that reduces the complexity of A
relative to B, but do it high enough so as not to destroy the 6 P
2\GammaT -completeness of A for EXP B nor
the diagonalizations against 6 P B
-autoreductions.
We construct B in stages along with A. We start with B empty. Using the notation of Lemma
3.4, at the beginning of stage i, we add
to B ioe property (2) does not hold, and at the end of
sub-stage j, we union B with
(2) holds at stage i
Note that this does not aoeect the value of K B (y) for
, nor the computations of M i on
inputs of size at most m (for suOEciently large i such that m log It follows from the analysis
in the proof of Lemma 3.4 that the set A is 6 P
2\GammaT -hard for EXP B and not 6 P B
Regarding the complexity of deciding A relative to B, note that the encoding in the oracle B
allows us to eliminate the need for evaluating QBF log fi(n) -formulae of size 2 n fi(n)
. Instead, we just
query B on easily constructed inputs of size O(2 n 2
Therefore, we can drop the terms corresponding
to the QBF log fi(n) -formulae of size 2 n fi(n) in the complexity of A. Consequently, A 2 EXP B .
(Theorem 4.2)
Theorem 4.2 applies to any complexity class containing EXP that has Property 4.1, e.g.,
EXPSPACE, EEXP, EEXPSPACE, etc.
Sometimes, the structure of the oracle allows to get around the lack of local checkability of oracle
queries. This is the case for oracles from the polynomial-time hierarchy, and leads to the following
extension of Theorem 4.1:
Theorem 4.3 For any integer $k > 0$, every $\le^p_T$-complete set for $\Delta^{EXP}_{k+1}$ is autoreducible.
The proof idea is as follows: Let A be a 6 P
T -complete set accepted by the deterministic oracle Turing
machine M with oracle TQBF k . First note that there is a polynomial-time Turing machine N such
that a query q belongs to the oracle TQBF k ioe
where the y ' 's are of size polynomial in jqj.
We consider the two purported computations of M on input x constructed in the proof of
Theorem 4.1. One of them belongs to a party assuming x 2 A, the other one to a party assuming
x 62 A. The computation corresponding to the right assumption is correct; the other one might not
be.
Now, suppose the computations dioeer, and we are given a pointer to the -rst bit position
where they disagree, which turns out to be the answer to an oracle query q. Then we can have
the two parties play the k-round game underlying (6): The party claiming q 2 TQBF k plays the
existentially quanti-ed y ' 's, the other one the universally quanti-ed y ` 's. The players' strategies will
consist of computing the game history so far, determining their optimal next move, 6 P
-reducing
this computation to A, and -nally producing the result of this reduction under their respective
assumption about A(x). This will guarantee that the party with the correct assumption plays
optimally. Since this is also the one claiming the correct answer to the oracle query q, he will win
the game, i.e., N(q; y answer bit.
The only thing the autoreduction for A has to do, is determine the value of N(q; y
in polynomial time using A as an oracle but without querying x. It can do that along the lines
of the base case algorithm given in Figure 2. If during this process, the local consistency test for
's computation requires the knowledge of bits from the y ` 's, we compute these via the reduction
de-ning the strategy of the corresponding player. The bits from q we need, we can retrieve from the
M-computations, since both computations are correct up to the point where they -nished generating
q. Once we know N(q; y easily decide the correct assumption about A(x).
The construction hinges on the hypothesis that we can 6 P
-reduce determining the player's moves
to A. Computing these moves can become quite complex, though, because we have to recursively
reconstruct the game history so far. The number of rounds k being constant, seems crucial for
keeping the complexity under control. The conference version of this paper [6] erroneously claimed
the proof works for EXPSPACE, which can be thought of as alternating exponential time with
an exponential number of alternations. Establishing Theorem 4.3 for EXPSPACE would actually
separate NL from NP, as we will see in Section 5.
Proof (of Theorem 4.3)
Let A be a 6 P
T -complete set for \Delta EXP
k accepted by the exponential-time oracle Turing
machine M with oracle TQBF k . Let g M , e M , and c M be the formalization of the local consistency
test for M as described by (5). Without loss of generality there is a polynomial p and a polynomial-time
Turing machine N such that on inputs of size n, M makes exactly 2 p(n) oracle queries, all of
the form
where q has length 2 p 2 (n) . Moreover, the computations of N in (7) each have length 2 p 3 (n) , and
their last bit represents the answer; the same holds for the computations of M on inputs of length
n. Let g N , e N , and c N be the formalization of the local consistency test for N .
We -rst de-ne a bunch of functions computable in \Delta EXP
k+1 . For each of them, say -, we -x an
oracle Turing machine R - that 6 P
-reduces - to A, and which the -nal autoreduction for A will use.
The proofs that we can compute these functions in \Delta EXP
are straightforward.
Let -(hx; ii) denote the i-th bit of the computation of M TQBF k on input x, and oe(x) the -rst i
(if any) such that R Anfxg
ii). The roles of - and oe are the same as in the proof
of Theorem 4.1: We will use R - to -gure out whether both possible answers for the oracle query
lead to the same -nal answer, and if not, use R oe to -nd a pointer i to the -rst incorrect
bit (in any) of the simulated computation getting the negative oracle answer x 62 A. If i turns out
not to point to an oracle query, we can proceed as in the proof of Theorem 4.1. Otherwise, we will
make use of the following functions and associated reductions to A.
We de-ne the functions j ' and y ' inductively for At each level ' we -rst de-ne j ' ,
which induces a reduction R j ' , and then de-ne y ' based on R j ' . All of these functions take an input
x such that the i-th bit of R Anfxg
- (hx; \Deltai) is the answer to an oracle query (7), where
oe (x).
We de-ne j ' (x) as the lexicographically least y
such that
if this value does not exist, we set j '
. Note that the right-hand side of (8) is 1 ioe y ' is
existentially quanti-ed in (7).
R A[fxg
R Anfxg
The condition on the right-hand side of (9) means that we use the hypothesis x 2 A to compute
y ' (x) from R j ' in case:
ffl either y ' is existentially quanti-ed in (7) and the player assuming x 2 A claims (7) holds,
ffl or else y ' is universally quanti-ed and the player assuming x 2 A claims (7) fails.
Otherwise we use the hypothesis x 62 A.
In case i points to the answer to an oracle query (7), the functions j ' and the reductions R j '
incorporate the moves during the successive rounds of the game underlying (7). The reduction R j '
together with the player's assumption about membership of x to A, determines the actual move
during the '-th round, namely R A[fxg
if the '-th round is played by the opponent assuming
otherwise. The condition on the right-hand side of (9) guarantees that the
existentially quanti-ed variables are determined by the opponent claiming the query (7) is a true
formula, and the universally quanti-ed ones by the other opponent. In particular, (9) ensures that
the opponent with the correct claim about (7) has a wining strategy. Provided it exists, the function
de-nes a winning move during the '-th round of the game for the opponent playing that round,
given the way the previous rounds were actually played (as described by the y(x)'s). For odd ',
i.e., y ' is existentially quanti-ed, it tries to set y ' such that the remainder of (7) holds; otherwise it
tries to set y ' such that the remainder of (7) fails. The actual move may dioeer from the one given
by j ' in case the player's assumption about x 2 A is incorrect. The opponent with the correct
assumption plays according to j ' . Since that opponent also makes the correct claim about (7), he
will win the game. In any case, N(q; y hold ioe (7) holds.
Finally, we de-ne the functions - and - , which have a similar job as the functions - respectively
oe, but for the computation of N(q; y instead of the computation of M TQBF
k (x). More
precisely, -(hx; ri) equals the r-th bit of the computation of N(q; y 1 where the
are de-ned by (9), and the bit with index
oe (x) in the computation R Anfxg
the answer to the oracle query (7). We de-ne -(x) to be the -rst r (if any) for which R Anfxg
R A[fxg
ri), provided the bit with index
oe (x) in the computation R Anfxg
- (hx; \Deltai) is the
answer to an oracle query.
Now we have these functions and corresponding reductions, we can describe an autoreduction for
A. On input x, it works as described in Figure 3. We next argue that the algorithm correctly decides
A on input x. Checking the other properties required of an autoreduction for A is straightforward.
We only consider the cases where R Anfxg
points to the
answer to an oracle query in R Anfxg
\Deltai). We refer to the analysis in the proof of Theorem 4.1
for the remaining cases.
case R Anfxg
points to the -rst incorrect bit of R Anfxg
- (hx; \Deltai), which turns out to be the
answer to an oracle query, say (7).
yields the correct oracle answer
to (7),
R Anfxg
and we accept x.
If x 62 A, both R Anfxg
give the correct answer to the oracle
query i points to in the computation R Anfxg
\Deltai). So, they are equal, and we reject x.
if R Anfxg
then accept ioe R A[fxg
else i / R A[fxg
oe (x)
if the i-th bit of R Anfxg
\Deltai) is not the answer to an oracle query
then accept ioe e M (x;
else if R Anfxg
then accept ioe R Anfxg
else r / R A[fxg
accept ioe e N (q; y
R Anfxg
where q denotes the query described in R Anfxg
to which the i-th bit in this computation is the answer
and
R A[fxg
R Anfxg
endif
endif
endif
Figure
3: Autoreduction for the set A of Theorem 4.3 on input x
case R Anfxg
Then, as described in Figure 3, we will use the local consistency test for R Anfxg
being the
computation of N(q; y 1 (x)). Apart from bits in the purported computation
R Anfxg
- (hx; \Deltai), this test may also need bits from q and from the y ' (x)'s. The y ` (x)'s can be
computed straightforwardly using their de-nition (9). The bits from q we might need, can be
retrieved from R Anfxg
\Deltai). This is because our encoding scheme for computations has the
property that the query q is a substring of the pre-x of the computation up to the position
indexed by i. Since either R Anfxg
- (hx; \Deltai) is correct everywhere, or else i is the -rst position
where is is incorrect, the description of q in R Anfxg
- (hx; \Deltai) is correct in any case. Moreover,
we can easily compute a pointer to the beginning of the substring q of R Anfxg
- (hx; \Deltai) from i.
- (hx; \Deltai) has an error as a computation of
gets assigned the index of the -rst incorrect bit in
this computation, so the local consistency check fails, and we accept x.
If x 62 A, R Anfxg
- (hx; \Deltai) is a valid computation of N(q; y 1 so every local
consistency test is passed, and we reject x.
(Theorem 4.3)
4.2 Nonadaptive Autoreductions
So far, we constructed autoreductions for 6 P
T -complete sets A. On input x, we looked at the
two candidate computations obtained by reducing to A, answering all oracle queries except for x
according to A, and answering query x positively for one candidate, and negatively for the other.
If the candidates disagreed, we tried to -nd out the right one, which always existed. We managed
to get the idea to work for quite powerful sets A, e.g., EXP-complete sets, by exploiting the local
checkability of computations. That allowed us to -gure out the wrong computation without going
through the entire computation ourselves: With help from A, we -rst computed a pointer to the
-rst mistake in the wrong computation, and then veri-ed it locally.
We can not use this adaptive approach for constructing nonadaptive autoreductions. It seems
like -guring out the wrong computation in a nonadaptive way, requires the autoreduction to perform
the computation of the base machine itself, so the base machine has to run in polynomial time. Then
checking the computation essentially boils down to verifying the oracle answers. Using the game
characterization of the polynomial-time hierarchy along the same lines as in Theorem 4.3, we can
do this for oracles from the polynomial-time hierarchy.
Theorem 4.4 For any integer $k > 0$, every $\le^p_{tt}$-complete set for $\Delta^p_{k+1}$ is nonadaptively autoreducible.
Parallel to the adaptive case, an earlier version of this paper [6] stated Theorem 4.4 for unbounded
k, i.e., for PSPACE. However, we only get the proof to work for constant k. In Section 5, we will
see that proving Theorem 4.4 for PSPACE would separate NL from NP.
The only additional diOEculty in the proof is that in the nonadaptive setting, we do not know
which player has to perform the even rounds, and which one the odd rounds in the k-round game
underlying a query like (6). But we can just have them play both scenarios, and afterwards -gure
out the relevant run.
Proof (of Theorem 4.4)
Let A be a 6 P
tt -complete set for \Delta P
k accepted by the polynomial-time oracle Turing machine
M with oracle TQBF k . Without loss of generality there is a polynomial p and a polynomial-time
Turing machine N such that on inputs of size n, M makes exactly p(n) oracle queries q, all of the
where q has length p 2 (n). Let q(x; i) denote the i-th oracle query of M TQBF k on input x. Note that
k .
g. The set Q belongs to \Delta P
, so there is a 6 P
-reduction RQ
from Q to A.
If for a given input x, R A[fxg
Q and R Anfxg
Q agree on hx; ji for every 1 6 j 6 p(jxj), we are home:
We can simulate the base machine M using R A[fxg
as the answer to the j-th oracle query.
Otherwise, we will make use of the following functions computable in \Delta P
corresponding
oracle Turing machines R j 1
tt -reductions to A, and functions
. As in the proof of Theorem 4.3, we de-ne j ' and y ' inductively
k. They are de-ned for inputs x such that there is a smallest 1 6 i 6 p(jxj)
for which R Anfxg
ii). The value of j ' (x) equals the lexicographically least
we set j ' string does not exist. The right-hand side of (11) is 1 ioe y ' is existentially
quanti-ed in (10).
R A[fxg
R Anfxg
The condition on the right-hand side of (12) means that we use the hypothesis x 2 A to compute
y ' (x) from R j ' in case:
ffl either y ' is existentially quanti-ed in (10) and the assumption x 2 A leads to claiming that
ffl or else y ' is universally quanti-ed and the assumption x 2 A leads to claiming that (10) fails.
The intuitive meaning of the functions j ' and the reductions R j ' is similar to in the proof
of Theorem 4.3: They capture the moves during the '-th round of the game underlying (10) for
i). The function j ' encapsulates an optimal move during round ' if it exists, and the
reduction R j ' under the player's assumption regarding membership of x to A, produces the actual
move in that round. The condition on the right-hand side of (12) guarantees the correct alternation
of rounds. We refer to the proof of Theorem 4.3 for more intuition.
Consider the algorithm in Figure 4. Note that the only queries to A the algorithm in Figure 4
if R Anfxg
then accept ioe M accepts x when the j-th oracle query is answered R A[fxg
else i / -rst j such that R Anfxg
accept ioe N(q; y
where q denotes the i-th query of M on input x
when the answer to the j-th oracle query is given by R A[fxg
and
R A[fxg
R Anfxg
endif
Figure
4: Nonadaptive autoreduction for the set A of Theorem 4.4 on input x
needs to make, are the queries of RQ dioeerent from x on inputs hx; ji for 1 6 j 6 p(jxj), and the
queries of R j ' dioeerent from x on input x for 1 6 ' 6 k. Since RQ and the R j ' 's are nonadaptive, it
follows that Figure 4 describes a 6 P
tt -reduction to A that does not query its own input. A similar but
simpli-ed argument as in the proof of Theorem 4.3 shows that it accepts A. So, A is nonadaptively
autoreducible. (Theorem 4.4)
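The following sketch records the overall structure of this nonadaptive autoreduction; rq_red, simulate_m, game_move, and eval_n are illustrative stand-ins for $R_Q$, the base machine $M$, the reductions $R_{j_\ell}$, and the predicate $N$, respectively.

def nonadaptive_autoreduction(x, q_count, k, rq_red, simulate_m, game_move, eval_n):
    # rq_red(j, assume_in)    -- answer to M's j-th oracle query, with the membership
    #                            of x itself replaced by assume_in;
    # simulate_m(x, answers)  -- runs the base machine M with those oracle answers;
    # game_move(ell, assume_in) -- move of round ell of the k-round game, produced
    #                            under the stated assumption about x;
    # eval_n(i, moves)        -- evaluates the predicate N on the i-th disputed query.
    yes = [rq_red(j, True) for j in range(q_count)]
    no = [rq_red(j, False) for j in range(q_count)]
    if yes == no:
        # The two assumptions about x agree on every oracle answer, so simulating
        # M with these answers decides x correctly.
        return simulate_m(x, yes)
    i = next(j for j in range(q_count) if yes[j] != no[j])
    # Play the k-round game for the first disputed query; round ell is played under
    # the assumption "x in A" exactly when that party claims the quantifier of
    # round ell can be met (cf. the condition in (12)).
    moves = []
    for ell in range(1, k + 1):
        assume_in = ((ell % 2 == 1) == yes[i])
        moves.append(game_move(ell, assume_in))
    # The party with the correct assumption wins the game, so the game's outcome
    # matches that party's claim about the disputed query.
    return yes[i] == eval_n(i, moves)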
Next, we consider more restricted reductions. Using a different technique, we show:
Theorem 4.5 For any complexity class $\mathcal{C}$, every $\le^p_{2-tt}$-complete set for $\mathcal{C}$ is $\le^p_{2-tt}$-autoreducible,
provided $\mathcal{C}$ is closed under exponential-time reductions that only ask one query which is smaller in
length.
In particular, Theorem 4.5 applies to EXP, EXPSPACE, EEXP, and EEXPSPACE. In view of Theorems
3.1 and 3.3, this implies that Theorems 3.1, 3.3, and 4.5 are optimal.
The proof exploits the ability of EXP to simulate all polynomial-time reductions to construct
an auxiliary set D within C such that any 6 P
2\Gammatt -reductions of D to some -xed complete set A has
a property that induces an autoreduction on A.
Proof (of Theorem 4.5)
be a standard enumeration of 6 P
2\Gammatt -reductions such that M i runs in time n i on
inputs of size n. Let A be a 6 P
2\Gammatt -complete set for C.
Consider the set $D$ that only contains strings of the form $\langle 0^i, x \rangle$ and is
decided by the algorithm of Figure 5 on such an input. Except for deciding $A(x)$, the algorithm runs
case the truth-table of M_i on input ⟨0^i, x⟩ with the truth-value of query x set to A(x) of
    constant: accept iff that constant is "reject"
    of the form "y ∉ A": accept iff x ∉ A
    otherwise: accept iff x ∈ A
endcase

Figure 5: Algorithm for the set D of Theorem 4.5 on input ⟨0^i, x⟩
in exponential time. Therefore, under the given conditions on $\mathcal{C}$, there is a $\le^p_{2-tt}$-reduction
$M_j$ from $D$ to $A$.
The construction of $D$ diagonalizes against every $\le^p_{2-tt}$-reduction $M_i$ of $D$ to $A$ whose truth-table
on input $\langle 0^i, x \rangle$ would become constant once we filled in the membership bit for $x$. Therefore,
for every input $x$, one of the following cases holds for the truth-table of $M_j$ on input $\langle 0^j, x \rangle$:
• The reduced truth-table is of the form "$y \in A$" with $y \neq x$.
• The reduced truth-table is of the form "$y \notin A$" with $y \neq x$.
• The truth-table depends on the membership to $A$ of two strings different from $x$. Then $M^A_j$
does not query $x$ on input $\langle 0^j, x \rangle$, and accepts iff $x \in A$.
The above analysis shows that the algorithm of Figure 6 describes a $\le^p_{2-tt}$-autoreduction of $A$.

if the truth-table of M_j on input ⟨0^j, x⟩ depends on the membership to A of two strings different from x
then accept iff M_j^A(⟨0^j, x⟩) accepts
else y ← the unique element of Q_{M_j}(⟨0^j, x⟩) \ {x} on which the reduced truth-table depends
     accept iff y ∈ A
endif

Figure 6: Autoreduction constructed in the proof of Theorem 4.5
(Theorem 4.5)
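One way to spell out the case analysis behind Figures 5 and 6 is the following sketch; queries and evaluate are hypothetical stand-ins for $Q_{M_j}$ and the truth-table of $M_j$, and the handling of the single-literal case reflects the reading of Figure 5 given above.

from itertools import product

def two_tt_autoreduction(x, oracle, queries, evaluate):
    # queries(x): the at most two strings M_j queries on <0^j, x> (computable
    #             without the oracle, since M_j is nonadaptive);
    # evaluate(x, answers): M_j's truth-table applied to a dict of query answers.
    qs = list(dict.fromkeys(queries(x)))
    if x in qs:
        # x itself is queried.  The diagonalization built into D guarantees a
        # second query y != x and forces membership of x and y to coincide.
        others = [q for q in qs if q != x]
        return oracle(others[0])
    # x is not queried, so the full truth-table is available to us.
    def tt(vals):
        return evaluate(x, dict(zip(qs, vals)))
    combos = list(product([False, True], repeat=len(qs)))
    for idx, q in enumerate(qs):
        if all(tt(v) == v[idx] for v in combos) or all(tt(v) == (not v[idx]) for v in combos):
            # The truth-table is the single literal "q in A" or "q not in A";
            # in either case membership of x agrees with membership of q.
            return oracle(q)
    # Otherwise the truth-table depends on two strings different from x (it
    # cannot be constant, by the diagonalization), so simulating M_j decides x.
    return evaluate(x, {q: oracle(q) for q in qs})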
4.3 Probabilistic and Nonuniform Autoreductions
The previous results in this section trivially imply that the 6 P
T -complete sets for the \Delta-levels of
the exponential-time hierarchy are probabilistically autoreducible, and the 6 P
tt -complete sets for
the \Delta-levels of the polynomial-time hierarchy are probabilistically nonadaptively autoreducible.
Randomness allows us the prove more in the nonadaptive case.
First, we can establish Theorem 4.4 for EXP:
Theorem 4.6 Let $f$ be a constructible function. Every $\le^p_{f(n)-tt}$-complete set for EXP is probabilistically
$O(f(n))$-tt-autoreducible. In particular, every $\le^p_{tt}$-complete set for EXP is probabilistically
nonadaptively autoreducible.
Proof (of Theorem 4.6)
Let A be a ≤^P_{f(n)-tt}-complete set for EXP. We will apply the PCP Theorem for EXP [2] to A.
Lemma 4.7 ([2]) There is a constant k such that for any set A ∈ EXP, there is a polynomial-time
Turing machine V and a polynomial p such that for any input x:
• If x ∈ A, then there exists a proof oracle π such that
      Pr_{r ∈ {0,1}^{p(|x|)}} [ V^π(x; r) accepts ] = 1.    (13)
• If x ∉ A, then for any proof oracle π,
      Pr_{r ∈ {0,1}^{p(|x|)}} [ V^π(x; r) accepts ] ≤ 1/3.
Moreover, V never makes more than k proof oracle queries, and there is a proof oracle π̃ ∈ EXP
independent of x such that (13) holds for π = π̃ in case x ∈ A.
Translating Lemma 4.7 into our terminology, we obtain:
Lemma 4.8 There is a constant k such that for any set A ∈ EXP, there is a probabilistic ≤^P_{k-tt}-
reduction N, and a set B ∈ EXP such that for any input x:
• If x ∈ A, then N^B(x) always accepts.
• If x ∉ A, then for any oracle C, N^C(x) accepts with probability at most 1/3.
Let R be a ≤^P_{f(n)-tt}-reduction of B to A, and consider the probabilistic reduction M^A that on input
x runs N on input x with oracle R^{A∪{x}}. M^A is a probabilistic ≤^P_{k·f(n)-tt}-reduction to A that never
queries its own input. The following shows it defines a reduction from A:
• If x ∈ A, then the oracle R^{A∪{x}} = R^A decides B, so M^A(x) always accepts.
• If x ∉ A, then for the oracle C = R^{A∪{x}}, N^C(x) accepts with probability at most 1/3. (Theorem 4.6)
Note that Theorem 4.6 suggests why we did not manage to scale down Theorem 3.2
by one exponent to EXPSPACE in the nonadaptive setting, as we were able to do for our other
results in Section 3 when going from the adaptive to the nonadaptive case: this would separate
EXP from EXPSPACE.
We suggest the extension of Theorem 4.6 to the Δ-levels of the exponential-time hierarchy as
an interesting problem for further research.
Second, Theorem 4.4 also holds for NP:
Theorem 4.9 All ≤^P_tt-complete sets for NP are probabilistically nonadaptively autoreducible.
Proof (of Theorem 4.9)
Fix a ≤^P_tt-complete set A for NP. Let R_A denote a length-nondecreasing ≤^P_m-reduction of A to SAT.
Define the set
    W = { ⟨φ, i⟩ : φ is a Boolean formula with, say, m variables, and there exists a satisfying assignment a of φ with a_i = 1 }.
Since W ∈ NP, there is a ≤^P_tt-reduction R_W from W to A.
We will use the following probabilistic algorithm by Valiant and Vazirani [18]:
Lemma 4.10 ([18]) There exists a polynomial-time probabilistic Turing machine N that on input
a Boolean formula ϕ with n variables, outputs another (quantifier-free) Boolean formula φ such that:
• If ϕ is satisfiable, then with probability at least 1/(4n), φ has a unique satisfying assignment.
• If ϕ is not satisfiable, then φ is never satisfiable.
Now consider the following algorithm for A: On input x, run N on input R_A(x), yielding a
Boolean formula φ with, say, m variables, and accept iff
    φ( R_W^{A∪{x}}(⟨φ, 1⟩), ..., R_W^{A∪{x}}(⟨φ, m⟩) )
evaluates to true. Note that this algorithm describes a probabilistic ≤^P_tt-reduction to A that never
queries its own input. Moreover:
• If x ∈ A, then with probability at least 1/(4n), the Valiant-Vazirani algorithm N produces a
Boolean formula φ with a unique satisfying assignment ã_φ. In that case, the assignment we
use, (R_W^{A∪{x}}(⟨φ, 1⟩), ..., R_W^{A∪{x}}(⟨φ, m⟩)), equals ã_φ, so we accept x.
• If x ∉ A, any Boolean formula φ which N produces has no satisfying assignment, so we always
reject x.
Executing Θ(n) independent runs of this algorithm, and accepting iff any of them accepts, yields a
probabilistic nonadaptive autoreduction for A. (Theorem 4.9)
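To make the last amplification step concrete, here is a small Python sketch (ours, not the paper's) of the generic pattern it relies on: a one-sided-error test that accepts a yes-instance with probability at least 1/(4n) is repeated Θ(n) times and accepted iff some run accepts; single_run is a hypothetical stand-in for one execution of the reduction described above.

import random

def amplify(single_run, n, c=8):
    """Repeat a one-sided-error test c*n times; accept iff any run accepts.

    If a yes-instance is accepted by single_run() with probability >= 1/(4n),
    then all c*n independent runs reject with probability at most
    (1 - 1/(4n))**(c*n) <= exp(-c/4), a constant below 1/2 for c >= 4.
    No-instances are never accepted, so the error stays one-sided.
    """
    return any(single_run() for _ in range(c * n))

# Toy illustration with an artificial single run that accepts w.p. 1/(4n).
n = 50
runs = [amplify(lambda: random.random() < 1.0 / (4 * n), n) for _ in range(200)]
print("empirical acceptance rate on a yes-instance:", sum(runs) / len(runs))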
So, for probabilistic autoreductions, we get similar results as for deterministic ones: Low end
complexity classes turn out to have the property that their complete sets are autoreducible, whereas
high end complexity classes do not. As we will see in more detail in the next section, this structural
difference yields separations.
If we allow nonuniformity, the situation changes dramatically. Since probabilistic autoreducibility
implies nonuniform autoreducibility [5], all our positive results for small complexity classes carry
over to the nonuniform setting. But, as we will see next, the negative results do not, because also
the complete sets for large complexity classes become autoreducible, both in the adaptive and in the
nonadaptive case. So, uniformity is crucial for separating complexity classes using autoreducibility,
and the Razborov-Rudich result [14] does not apply.
Feigenbaum and Fortnow [7] define the following concept of #P-robustness, of which we also
consider the nonadaptive variant.
Definition 4.1 A set A is #P-robust if #P^A ⊆ FP^A; A is nonadaptively #P-robust if #P^A ⊆ FP^A_tt.
Nonadaptive #P-robustness implies #P-robustness. For the usual deterministic and nondeterministic
complexity classes containing PSPACE, all ≤^P_T-complete sets are #P-robust. For the
deterministic classes containing PSPACE, it is also true that the ≤^P_tt-complete sets are nonadaptively
#P-robust.
The following connection with nonuniform autoreducibility holds:
Theorem 4.11 All #P-robust sets are nonuniformly autoreducible. All nonadaptively #P-robust
sets are nonuniformly nonadaptively autoreducible.
Proof
Feigenbaum and Fortnow [7] show that every #P-robust language is random-self-reducible. Beigel
and Feigenbaum [5] prove that every random-self-reducible set is nonuniformly autoreducible (or
"weakly coherent" as they call it). Their proofs carry over to the nonadaptive setting.
It follows that the ≤^P_tt-complete sets for the usual deterministic complexity classes containing
PSPACE are all nonuniformly nonadaptively autoreducible. The same holds for adaptive reductions,
in which case the property is also true of nondeterministic complexity classes containing PSPACE.
In particular, we get the following:
Corollary 4.12 All ≤^P_T-complete sets for NEXP, EXPSPACE, EEXP, NEEXP, EEXPSPACE,
... are nonuniformly autoreducible. All ≤^P_tt-complete sets for PSPACE, EXP, EXPSPACE, ... are
nonuniformly nonadaptively autoreducible.
5 Separation Results
In this section, we will see how we can use the structural property of all complete sets being
autoreducible to separate complexity classes. Based on the results of Sections 3 and 4, we only
get separations that were already known: EXPH ≠ EEXPSPACE (by Theorems 4.3 and 3.1),
EXP ≠ EEXPSPACE (by Theorems 4.6 and 3.2), and PH ≠ EXPSPACE (by Theorems 4.4 and
3.3, and also by scaling down EXPH ≠ EEXPSPACE). However, settling the question for certain
other classes would yield impressive new separations.
We summarize the implications in Figure 7.
Theorem 5.1 In Figure 7, a positive answer to a question from the first column implies the separation
in the second column, and a negative answer, the separation in the third column.

question                                                                      yes                     no
Are all ≤^P_T-complete sets for EXPSPACE autoreducible?                       NL ≠ NP                 PH ≠ PSPACE
Are all ≤^P_T-complete sets for EEXP autoreducible?                           NL ≠ NP, P ≠ PSPACE     PH ≠ EXP
Are all ≤^P_tt-complete sets for PSPACE nonadaptively autoreducible?          NL ≠ NP                 PH ≠ PSPACE
Are all ≤^P_tt-complete sets for EXP nonadaptively autoreducible?             NL ≠ NP, P ≠ PSPACE     PH ≠ EXP
Are all ≤^P_tt-complete sets for EXPSPACE probabilistically
  nonadaptively autoreducible?                                                NL ≠ NP                 P ≠ PSPACE

Figure 7: Separation results using autoreducibility
Most of the entries in Figure 7 follow directly from the results of the previous sections. In order to
finish the table, we use the next lemma:
Lemma 5.2 If NL = NP, we can decide the validity of QBF-formulae of size t and with α alternations
on a deterministic Turing machine M_1 in time t^{O(c^α)} and on a nondeterministic Turing
machine M_2 in space O(c^α log t), for some constant c.
Proof (of Lemma 5.2)
If NL = NP, then NP = coNP, so by Cook's Theorem we can transform in polynomial time a Π_1-formula with free
variables into an equivalent Σ_1-formula with the same free variables, and vice versa. Since NP = NL ⊆ P,
we can decide the validity of Σ_1-formulae in polynomial time. Say both the transformation algorithm
T and the satisfiability algorithm S run in time n^c for some constant c.
Let φ be a QBF-formula of size t with α alternations. Consider the following algorithm for
deciding φ: Repeatedly apply the transformation T to the largest suffix that constitutes a Σ_1- or
Π_1-formula, until the whole formula becomes Σ_1, and then run S on it.
This algorithm correctly decides the truth of φ. Since the number of alternations decreases by
one during every iteration, it makes at most α calls to T, each time at most raising the length of
the formula to the power c. It follows that the algorithm runs in time t^{O(c^α)}.
Moreover, since P = NL under the hypothesis, a padding argument shows that DTIME[τ] ⊆ NSPACE[O(log τ)] for any
time-constructible function τ. Therefore the result holds. (Lemma 5.2)
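The loop in this proof can be pictured with the following Python sketch (our illustration only). It assumes, as the lemma's hypothesis grants, two hypothetical polynomial-time subroutines T and S; both are placeholders, not real algorithms.

def decide_qbf(prefix, matrix, T, S):
    """Decide a prenex QBF with quantifier blocks in prefix (outermost first)
    over a propositional matrix, following the loop of Lemma 5.2.

    Hypothetical subroutines, assumed to exist when NL = NP:
      T(quant, qvars, matrix) -> (new_quant, new_vars, new_matrix):
          converts the suffix "quant qvars: matrix" between equivalent
          Sigma_1 and Pi_1 forms, blowing the size up by at most a power c.
      S(qvars, matrix) -> bool:
          decides validity of a Sigma_1 formula in polynomial time.
    Each pass merges the innermost block into the one above it, so at most
    alpha calls to T are made, one per alternation.
    """
    blocks = [list(b) for b in prefix]          # e.g. [['E', xs], ['A', ys], ...]
    while len(blocks) > 1:
        quant, qvars = blocks.pop()             # innermost suffix
        outer_quant, outer_vars = blocks[-1]
        if quant != outer_quant:                # flip Sigma_1 <-> Pi_1 first
            quant, qvars, matrix = T(quant, qvars, matrix)
        blocks[-1][1] = outer_vars + qvars      # merge: one alternation gone
    quant, qvars = blocks[0]
    if quant == 'A':                            # make the whole formula Sigma_1
        quant, qvars, matrix = T(quant, qvars, matrix)
    return S(qvars, matrix)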
This allows us to improve Theorems 3.2 and 3.3 as follows under the hypothesis NL = NP:
Theorem 5.3 If NL = NP, there is a ≤^P_{2-T}-complete set for EXPSPACE that is not probabilistically
autoreducible. The same holds for EEXP instead of EXPSPACE.
Proof
Combine Lemma 5.2 with the probabilistic extension of Lemma 3.4 used in the proof of Theorem
3.2.
Theorem 5.4 If NL = NP, there is a ≤^P_{3-tt}-complete set for PSPACE that is not nonadaptively
autoreducible. The same holds for EXP instead of PSPACE.
Proof
Combine Lemma 5.2 with Lemma 3.5.
Now, we have all ingredients for establishing Figure 7:
Proof (of Theorem 5.1)
The NL ≠ NP implications in the "yes" column of Figure 7 immediately follow from Theorems 5.3
and 5.4 by contraposition.
By Theorem 3.1, a positive answer to the 2nd question in Figure 7 would yield EEXP ≠
EEXPSPACE, and by Theorem 3.3, a positive answer to the 4th question would imply EXP ≠
EXPSPACE. By padding, both translate down to P ≠ PSPACE.
Similarly, by Theorem 4.3, a negative answer to the 2nd question would imply EXPH ≠ EEXP,
which pads down to PH ≠ EXP. A negative answer to the 4th question would yield PH ≠ EXP
directly by Theorem 4.4. By the same token, a negative answer to the 1st question results
in EXPH ≠ EXPSPACE and PH ≠ PSPACE, and a negative answer to the 3rd question in
PH ≠ PSPACE. By Theorem 4.6, a negative answer to the last question implies EXP ≠ EXPSPACE
and P ≠ PSPACE.
We note that we can tighten all of the separations in Figure 7 a bit, because we can apply
Lemmata 3.4 and 3.5 to smaller classes than in Theorems 3.1 respectively 3.3. One improvement
along these lines that might warrant attention is that we can replace "NL ≠ NP" in Figure 7 by
"coNP ⊄ NP ∩ NSPACE[log^{O(1)} n]". This is because the condition coNP ⊆ NP ∩ NSPACE[log^{O(1)} n] suffices for Theorems 5.3 and
5.4, since we can strengthen Lemma 5.2 as follows:
Lemma 5.5 If coNP ⊆ NP ∩ NSPACE[log^{O(1)} n], we can decide the validity of QBF-formulae
of size t and with α alternations on a deterministic Turing machine M_1 in time t^{O(c^α)} and on a
nondeterministic Turing machine M_2 in space O(d^α log^d t), for some constants c and d.
6 Conclusion
We have studied the question whether all complete sets are autoreducible for various complexity
classes and various reducibilities. We obtained a positive answer for lower complexity classes in
Section 4, and a negative one for higher complexity classes in Section 3. This way, we separated
these lower complexity classes from these higher ones by highlighting a structural difference. The
resulting separations were not new, but we argued in Section 5 that settling the very same question
for intermediate complexity classes would provide major new separations.
We believe that refinements to our techniques may lead to them, and would like to end with a
few words about some thoughts in that direction.
One does not have to look at complete sets only. Let C_1 ⊆ C_2. Suppose we know that all
complete sets for C_2 are autoreducible. Then it suffices to construct, e.g., along the lines of Lemma
3.4, a hard set for C_1 that is not autoreducible, in order to separate C_1 from C_2.
As we mentioned at the end of Section 5, we can improve Theorem 3.1 a bit by applying Lemma
3.4 to smaller space-bounded classes than EEXPSPACE. We cannot hope to gain much, though,
since the coding in the proof of Lemma 3.4 seems to be DSPACE[2^{n^{β(n)}}]-complete because of the
formulae of size 2^{n^{β(n)}} involved for inputs of size n. The same holds for Theorem 3.3
and Lemma 3.5.
Generalizations of autoreducibility may allow us to push things further. For example, one could
look at k(n)-autoreducibility, where k(n) bits of the set remain unknown to the querying machine.
Theorem 4.3 goes through for k(n) ∈ O(log n). Perhaps one can exploit this leeway in the coding of
Lemma 3.4 and narrow the gap between the positive and negative results. As discussed in Section
5, that would yield interesting separations.
Finally, one may want to look at other properties than autoreducibility to realize Post's Program
in complexity theory. Perhaps another concept from computability theory or a more artificial
property can be used to separate complexity classes.
Acknowledgments
We would like to thank Manindra Agrawal and Ashish Naik for very helpful discussions. We are
also grateful to Carsten Lund and Muli Safra for answering questions regarding the PCP Theorem.
We thank the anonymous referees for their nice suggestions on how to present our results.
--R
Proof verification and hardness of approximation problems
On being incoherent without being very hard.
Using autoreducibility to separate complexity classes.
On the random-self-reducibility of complete sets
The role of relativization in complexity theory.
On the computational complexity of algorithms.
Mitotic recursively enumerable sets.
Classical Recursion Theory
Computational Complexity.
Recursively enumerable sets of positive integers and their decision problems.
Natural proofs.
Recursively Enumerable Sets and Degrees.
On autoreducibility.
On autoreducibility.
NP is as easy as detecting unique solutions.
Coherent functions and program checkers.
--TR
--CTR
Christian Glaßer, Mitsunori Ogihara, A. Pavan, Alan L. Selman, Liyu Zhang, Autoreducibility, mitoticity, and immunity, Journal of Computer and System Sciences, v.73 n.5, p.735-754, August, 2007
Luca Trevisan , Salil Vadhan, Pseudorandomness and Average-Case Complexity Via Uniform Reductions, Computational Complexity, v.16 n.4, p.331-364, December 2007 | complexity classes;autoreducibility;completeness;coherence |
352775 | Self-Testing without the Generator Bottleneck. | Suppose P is a program designed to compute a function f defined on a group G. The task of self-testing P, that is, testing if P computes f correctly on most inputs, usually involves testing explicitly if P computes f correctly on every generator of G. In the case of multivariate functions, the number of generators, and hence the number of such tests, becomes prohibitively large. We refer to this problem as the generator bottleneck. We develop a technique that can be used to overcome the generator bottleneck for functions that have a certain nice structure, specifically if the relationship between the values of the function on the set of generators is easily checkable. Using our technique, we build the first efficient self-testers for many linear, multilinear, and some nonlinear functions. This includes the FFT, and various polynomial functions. All of the self-testers we present make only O(1) calls to the program that is being tested. As a consequence of our techniques, we also obtain efficient program result-checkers for all these problems. | Introduction
. The notions of program result-checking, self-testing, and self-correcting
as introduced in [4, 17, 5] are powerful tools for attacking the problem
of program correctness. These methods offer both realistic and efficient tools for
software verification. Various useful mathematical functions have been shown to have
self-testers and self-correctors; some examples can be found in [5, 3, 17, 9, 14, 18, 1,
19, 21, 6]. The theoretical developments in this area are at the heart of the recent
breakthrough results on probabilistically checkable proofs and the subsequent results
that show non-approximability of hard combinatorial problems.
Suppose we are given a program P designed to compute a function f . Informally,
a self-tester for f distinguishes the case where P computes f correctly always from
the case where P errs frequently. A result-checker for a function f takes as input
a program P and an input q to P , and outputs PASS when P correctly computes
f always and outputs FAIL if P (q) 6= f(q). Given a program P that computes f
correctly on most inputs, a self-corrector for f is a program P sc that uses P as an
oracle and computes f correctly on every input with high probability.
1.1. Definitions and Basics. Before we discuss our results, we present the
basic definitions of testers, checkers, etc., and state some desirable properties of these
programs. Let f be a function on a domain D and let P be a program that purports to
compute f . The testers, correctors, and checkers we define are probabilistic programs
that take P as an oracle, and in addition, take one or more of the following parameters
as input: an accuracy parameter ffl that specifies the conditions that P is expected to
This paper unifies the preliminary versions which appeared in the 27th Annual Symposium
on Theory of Computing [10] and in the 15th Annual Foundations of Software Technology and
Theoretical Computer Science [16].
y Department of Computer Science, Cornell University, Ithaca, NY 14853-7501
(ergun@cs.cornell.edu). This work is partially supported by ONR Young Investigator Award
N00014-93-1-0590, the Alfred P. Sloan Research Award, and NSF grant DMI-91157199.
z Department of Computer Science, Cornell University, Ithaca, NY 14853-7501
(ravi@cs.cornell.edu). This work is partially supported by ONR Young Investigator Award
N00014-93-1-0590, the Alfred P. Sloan Research Award, and NSF grant DMI-91157199.
x Department of Computer Science, University of Houston, Houston,
Most of this work performed while the author was at SUNY/Buffalo, supported in part by K. Regan's
NSF grant CCR-9409104.
meet, and a confidence parameter ae that is an upper bound on the probability that
the tester/corrector/checker fails to do its job. The following definitions formalize the
notions of self-tester [5], self-corrector [5, 17], and result-checker [4].
Definition 1.1 (Self-Tester). An ε-self-tester for f is a probabilistic oracle
program T that, given ρ > 0, satisfies the following conditions:
• if Pr_{x∈D}[P(x) ≠ f(x)] = 0, then T^P outputs PASS with probability at least 1 − ρ;
• if Pr_{x∈D}[P(x) ≠ f(x)] > ε, then T^P outputs FAIL with probability at least 1 − ρ.
Definition 1.2 (Self-Corrector). An ε-self-corrector for f is a probabilistic oracle
program P_sc that, given any input y, and ρ > 0, satisfies the following condition:
• if Pr_{x∈D}[P(x) ≠ f(x)] ≤ ε, then P_sc^P(y) = f(y) with probability at least 1 − ρ.
Definition 1.3 (Result-Checker). A checker (or result-checker) for f is a probabilistic
oracle program C that, given an input y and ρ > 0, satisfies the following
conditions:
• if Pr_{x∈D}[P(x) ≠ f(x)] = 0, then C^P(y) outputs PASS with probability at least 1 − ρ, and
• if P(y) ≠ f(y), then C^P(y) outputs FAIL with probability at least 1 − ρ.
We now list three important properties that are required of self-testers, self-
correctors, and result-checkers. For definiteness, we state these for the case of self-
testers. First, the self-tester T should be computationally different from and more
efficient than any program that computes f [4]. This restriction ensures that T does
not implement the obvious algorithm to compute f (and hence could harbor the same
set of bugs, or be computationally inefficient). Furthermore, this ensures that the
running time of T is asymptotically better than the running time of the best known
algorithm for f . The second important property required of T is that it should not
require the knowledge of too many correct values of f . In particular, this rules out
the possibility that T merely keeps a large table of the correct values of f for all
inputs. The third important property required of a self-tester is efficiency : an efficient
self-tester should only make O(1=ffl; lg(1=ae)) calls to P . For constant ffl and ae, an
efficient self-tester makes only O(1) calls to the program. (In the rest of the paper,
we often write O(1) as a shorthand for O(1=ffl; lg(1=ae)), particularly when discussing
the dependence on other parameters of interest.)
The following well-known lemma summarizes some relationships between the notions
of self-testers, self-correctors, and result-checkers. For the reader's convenience,
we sketch the idea of the proof of this lemma, suppressing the details of the accuracy
and confidence parameters.
Lemma 1.4 ([5]). (a) If f has a self-tester and a self-corrector that make O(1)
calls to the program, then f has a result-checker that makes O(1) calls to the program.
(b) If f has a result-checker, then it has a self-tester.
Proof. (Sketch) For part (a), suppose that f has a self-tester and a self-corrector.
Given an input y and oracle access to a program P , first self-test P to ensure that
it doesn't err too often. If the self-tester finds P to be too erroneous, output FAIL.
Otherwise, compute f(y) by using the self-corrector for P sc and the program P , and
output PASS iff P
Clearly a perfect program always passes. Suppose P (y) 6= f(y). Then one of
the following two cases must occur. The program is too erroneous, in which case the
self-tester, and hence the checker, outputs FAIL. The program is not too erroneous,
in which case the self-corrector computes f(y) correctly with high probability, so the
checker detects that P (y) 6= P sc (y) and outputs FAIL.
For part (b), suppose that f has a result-checker. By using the result-checker to
test if P randomly chosen inputs x, the fraction of inputs x for
which P (x) 6= f(x) can be estimated. Output PASS iff this fraction is less than ffl.
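A minimal Python sketch of part (a), assuming we already have procedures self_test and self_correct with the interfaces described above (all names here are illustrative, not from the paper):

def result_check(P, y, self_test, self_correct):
    """Checker assembled from a self-tester and a self-corrector (Lemma 1.4(a))."""
    if not self_test(P):              # P errs on too many inputs
        return "FAIL"
    # P is close to the target function; compare P(y) with the corrected value.
    return "PASS" if P(y) == self_correct(P, y) else "FAIL"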
A useful tool in constructing self-correctors is the notion of random self-reducibility .
The fine details of this notion are beyond the scope of this paper, and we refer the
reader to the papers [3, 17] (see also the survey paper [11]). Informally, a function
f is randomly self-reducible if evaluation of f on an input can be reduced efficiently
to the evaluation of f on one or more random inputs. For a quick example, note
that linear functions are randomly self-reducible: to compute f(x), it suffices to pick
a random r compute f(x + r) and f(r), and finally obtain
All functions that we consider in this paper are efficiently randomly self-reducible;
therefore, whenever required, we will always assume that efficient self-correction is
possible.
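For concreteness, here is a small Python sketch (ours, not the paper's code) of the random self-reduction for a linear function f on Z_p: f(y) = f(y + r) − f(r) for a random r, repeated with a majority vote; P is the possibly faulty program.

import random
from collections import Counter

def self_correct_linear(P, y, p, trials=15):
    """Self-corrector for a linear function f : Z_p -> Z_p using program P.

    Each trial evaluates P at two independent, uniformly random points, so if
    P agrees with f on most of Z_p, each trial is correct with probability
    close to 1; the majority vote boosts the confidence.
    """
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(p)
        votes[(P((y + r) % p) - P(r)) % p] += 1
    return votes.most_common(1)[0][0]

# Toy usage: f(x) = 7x mod 101, with a program that errs on about 5% of inputs.
p = 101
def P(x):
    return (7 * x) % p if x % 20 != 0 else 0
print(self_correct_linear(P, 13, p), (7 * 13) % p)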
1.2. Building Self-Testers using Properties. The process of self-testing
whether a program P computes a function f correctly on most inputs is usually
a two-step strategy. First perform some tests to verify that P agrees on most inputs
with a function g that belongs to a certain class F of functions that contains f . Then
perform some additional tests to verify that the function g is, in fact, the intended
function f .
The standard way to test whether P agrees with some function in a class F of
functions is based on the notion of a robust property . Informally, property I is said to
be a robust characterization of a function family F if the following two conditions hold:
(1) every f 2 F satisfies I, and (2) if P is a function (program) that satisfies I for
most inputs, then P must agree with some g 2 F on most inputs. For example, Blum,
Luby, and Rubinfeld [5] establish that the property of linearity
serves as a robust property for the class of all linear functions, and use this to build
self-testers for linear functions. This generic technique was first formalized in [19].
(Robust Property). A property is a predicate I f
property I f (~x) is (ffl; ffi)-robust for a class of functions F over a domain
D, if it satisfies the following conditions:
\Theta
ffl If a function (program) P satisfies Pr ~x2D k [I P
there is a function g such that
that is, (9g 2 F) such that P agrees with g on all but ffi fraction of inputs.
We now outline the process of building self-testers using robust properties (cf. [5]).
Let D be a (finite) group with generators e some class of
functions from D into some range R. Further assume that the functions in F possess
the property of random self-reducibility, and can hence be self-corrected efficiently.
Suppose P is a program that purports to compute a specific function f 2 F . Let
I f (~x) be a robust property that characterizes F .
As mentioned earlier, the process of building self-testers is a two-step process.
In the first step, we will ensure that that the program P agrees with some function
2 F on most inputs. To do this, we will use the fact that I f is a robust property
that characterizes F . Specifically, the self-tester will estimate the fraction of k-tuples
holds. If this fraction is at least 1 \Gamma ffl, then by the robustness
of I f , it follows that there is some g 2 F that agrees with P on all but ffi fraction of
D. The required estimation can be carried out by random sampling of ~x and testing
the property I f .
UN, S. R. KUMAR, AND D. SIVAKUMAR
The next step is to verify that the function g is the same as the function f that
P purports to compute. This is achieved by testing that g(e i
generator of the group D. If this is true, then by an easy induction it would follow
that g j f . An important point to be mentioned here is that the self-tester has access
only to P and not to g; the function g is only guaranteed to exist. Nevertheless,
the required values of g may be obtained by using a self-corrected version P sc of P .
Another point worth mentioning is that to carry out this step, the self-tester needs
to know the values of f on every generator of D.
1.3. The Generator Bottleneck. An immediate application of the basic
method outlined above to functions whose domains are vector spaces of large dimension
suffers from a major efficiency drawback. For example, if the inputs to the
function f are n-dimensional vectors (or n \Theta n matrices), then the number of generators
of the domain is n (resp. n 2 ). The straightforward approach of exhaustively
testing if P sc agrees with f on each generator by making n (resp.
furthermore, the self-tester built this approach requires the knowledge of the correct
value of f on n (resp. is large, this makes the overhead in the
self-testing process too high. This issue is called the generator bottleneck problem.
In this paper, we address the generator bottleneck problem, and solve it for a
fairly large class of functions that satisfy some nice structural properties. The self-
testers that we build are not only useful in themselves, but are also useful in building
efficient result-checkers, which are important for practical applications.
1.4. Our Results. We present a fairly general method of overcoming the generator
bottleneck and testing multivariate functions by making only O(1) calls to the
program being tested.
First we investigate the problem of multivariate linear functions (i.e., the functions
f satisfying We show a general technique that can be applied
in a natural vector space setting. The main idea is to obtain an easy and uniform
way of "generating" all generators from a single generator. Using this idea, we give a
simple and powerful condition for a linear function f to be efficiently self-testable on
a large vector space. We then apply this scheme to obtain very efficient self-testers
for many functions. This includes polynomial differentiation (of arbitrary order),
polynomial integration, polynomial "mod" function, etc. We also obtain the first
efficient self-tester for Fourier transforms.
We then extend this method to the case of multilinear functions (i.e., functions
f that are linear in each variable when the other variables are fixed). We build an
efficient tester for polynomial multiplication as a consequence. Another application
we give is for large finite fields: we show that multilinear functions over finite field
extensions of dimension n can be efficiently self-tested with O(1) calls, independent of
the dimension n. We also provide a new efficient self-tester for matrix multiplication.
We next extend the result to some nonlinear functions. We give self-testers for
exponentiation functions that avoid the generator bottleneck. For example, consider
the function that computes the square of a polynomial over a finite field:
Here we do not have the linearity property that is crucial in the proof for the linear
functions. Instead, we use the fact that the Lagrange interpolation identity (cf. Fact
4.1) for polynomials gives a robust characterization. We exhibit a self-tester for the
function that makes O(d) calls to the program being tested. Extending the
technique when f is a constant degree exponentiation to the case when f is a constant
degree polynomial (eg., is a polynomial over a finite field)
is much harder. First we show a reduction from multiplication to the computation of
low-degree polynomials. Using this reduction and the notion of a result-checker, we
construct a self-tester for degree d polynomials over finite field extensions of dimension
n that make O(2 d ) calls to the program being tested.
1.5. Related Work. One method that has been used to get around the generator
bottleneck has been to exploit the property of downward self-reducibility [5].
The self-testers that use this property, however, have to
make\Omega\Gamma383 n) calls to the
program depending on the way the problem decomposes into smaller problems. For
instance, a tester for the permanent function of n \Theta n matrices makes O(n) calls to
the program, whereas a tester for polynomial multiplication that uses similar principles
makes O(log n) calls. In [5] a bootstrap tester for polynomial multiplication that
makes O(log n) calls to the program being tested is given. It is already known that
matrix multiplication can be tested (without any calls to the program) using a result-
checker due to Freivalds [13]. The idea of Freivalds' matrix multiplication checker can
also be adapted to build testers for polynomial multiplication that make no calls to
the program being tested. This approach, however, requires the underlying field to be
large (have at least (2 is the degree of the polynomials being
multiplied, and fl is a positive constant). Moreover, this scheme requires the tester
to perform polynomial evaluations, whereas ours does not. For Fourier transforms, a
different result-checker that uses preprocessing has been given independently in [6].
A Useful Fact. The following fact, a variant of the well-known Chernoff-Hoeffding
bounds, is often very useful in obtaining error-bounds in sampling 0/1 random variables
[15]:
Fact 1.6. Let Y independently and identically distributed 0=1 random
variables with means -. Let ' - 2. If N - (1=-)(4
e
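Downstream, this fact is used to estimate error probabilities such as Pr_x[P(x) ≠ f(x)] by independent sampling. A hedged Python sketch of that pattern (the constant 4 and the interfaces are illustrative choices, not the paper's):

import math
import random

def estimate_disagreement(P, reference, domain_sampler, eps, rho):
    """Estimate Pr_x[P(x) != reference(x)] from independent random samples.

    The sample size N = O((1/eps) * log(1/rho)) is chosen so that, by a
    Chernoff-Hoeffding bound of the kind quoted in Fact 1.6, the estimate is
    reliable up to a constant factor with probability at least 1 - rho.
    """
    n_samples = max(1, math.ceil((4.0 / eps) * math.log(2.0 / rho)))
    bad = sum(P(x) != reference(x) for x in (domain_sampler() for _ in range(n_samples)))
    return bad / n_samples

# Toy usage over Z_1000 with a program that is wrong on 10% of the points.
print(estimate_disagreement(lambda x: x if x % 10 else -1,
                            lambda x: x,
                            lambda: random.randrange(1000),
                            eps=0.05, rho=0.01))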
Organization of the Paper. Section 2 discusses the scheme for linear functions
over vector spaces; x3 extends the scheme for multilinear functions; x4 outlines the
approach for non-linear functions.
2. Linear Functions over Vector Spaces. In this section, we address the
problem of self-testing linear functions on a vector space without the generator bot-
tleneck. We demonstrate a general technique to self-test without the generator bottleneck
and provide several interesting applications of our technique.
Definitions. Let V be a vector space of finite dimension n over a field K , and let f
be a function from V into a ring R. We are interested in building a self-tester for the
case where f(\Delta) is a linear function, that is, f(cff
and c 2 K . For the unit vector that has a 1 in the i-th
position and 0's in the other positions. The vectors e 1 a collection of
basis vectors that span V . Viewed as an Abelian group under vector addition, V is
generated by e We assume that the field K is finite, since it is not clear how
to choose a random element from an infinite field.
The property of linearity I f (ff; fi) was shown to
be robust in [5]. Using this and the generic construction of self-testers from robust
properties, one obtains the following self-tester for the function f :
6 F. ERG -
UN, S. R. KUMAR, AND D. SIVAKUMAR
Property Test:
Repeat O( 1
ae ) times
Pick ff; fi 2R V
Verify
Reject if the test fails
Generator Tests:
For
Verify P sc
If P passes the Property Test then we are guaranteed the existence of a linear
function g that is close to P . There are, however, two problems with the Generator
Tests: one is that the self-tester is inefficient-if the inputs are vectors of size n, the
self-tester makes O(n) calls to the program, which is not desirable. Secondly, the
self-tester needs to know the correct value of f on n different points, which is also
undesirable. Our primary interest is to avoid this generator bottleneck and solve both
of the problems mentioned. The key idea is to find an easy and uniform way that
"converts" one generator into the next generator. We illustrate this idea through the
following example.
Example. Let Pn denote the additive group of all degree n polynomials over a
field K . The elements multiplying any generator x k
by x gives the next generator x k+1 . For a polynomial q 2 Pn and a scalar c 2 K , let
denote the function that evaluates q(c). Clearly E c is linear and satisfies the
simple relation E c Suppose P is a program that purports to compute
and assume that P has passed Property Test given above. Then we know by
robustness of linearity that there is a linear function g that agrees with P on most
inputs. Note that g can be computed correctly with high probability via the self-
corrector (which are easy to construct for linear functions [5]). Now, rather than
verify that g(x k of Pn , we may instead verify that g
satisfies the property By an easy induction, this implies
that g agrees with E c at all the generators. By linearity of g, it follows that g agrees
with E c on all inputs.
We are now faced with the task of verifying
is too expensive to be tried explicitly and exhaustively. Instead, we prove that it
suffices to check with O(1) tests that almost everywhere that we look
at. That is, pick many random q 2 Pn , ask the program P sc to compute the values
of g(q) and g(xq) and cross-check that holds. In other words, we prove
that the property J g (q) j robust (in a restricted sense), under the
assumption that g is linear. (In its most general interpretation, robustness guarantees
the existence of h that satisfies and that agrees with
g on a large fraction of inputs. We actually show that h j g, hence the "restricted
sense.") Notice that the number of points on which the self-tester needs to know the
value of f is just one, in contrast to n as in the original approach of [5].
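A small Python sketch of this inductive test for evaluation at c (our illustration; polynomials are coefficient lists over Z_p with the low-order coefficient first, P_sc stands for the self-corrected program, and "multiply by x" is just a coefficient shift):

import random

def shift_by_x(q):
    """Return the coefficient list of x*q(x) (low-order coefficient first)."""
    return [0] + q

def inductive_test_eval(P_sc, c, p, n, trials=20):
    """Check that P_sc(x*q) == c * P_sc(q) (mod p) for random q of degree <= n."""
    for _ in range(trials):
        q = [random.randrange(p) for _ in range(n + 1)]
        if P_sc(shift_by_x(q)) % p != (c * P_sc(q)) % p:
            return False
    return True

# Toy usage: a correct evaluation program for c = 3 over Z_97.
p, c, n = 97, 3, 8
def P_sc(q):
    return sum(coef * pow(c, i, p) for i, coef in enumerate(q)) % p
print(inductive_test_eval(P_sc, c, p, n))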
Generalization via the Basis Rotation Function '. We note that this idea has a
natural generalization to vector spaces. Let ' denote the basis rotation function, i.e.,
the linear operator on a vector space V that "rotates" the coordinate axes that span
. ', which can be viewed as a matrix, defines a one-to-one correspondence from the
set of basis vectors to itself: for every i, '(e i . The computational payoff
is achieved when there is a simple relation between f(ff) and f('(ff)) for all vectors
specifically, we show that the generator bottleneck can be avoided if
there is an easily computable function h ';f such that
for all ff 2 V . (For instance, for polynomial evaluation E c ,
From here on, when obvious, we drop the suffix f and simply denote h ';f as h ' . If the
function f is linear, the linearity of ' implies that h ' is linear in its second argument
in the following sense: h ' (ff What is
more important is that h ' be easy to compute, given just ff and f(ff). Using this
scheme, we show that many natural functions f have a suitable candidate for h ' .
The Generator Tests of [5] can now be replaced by:
Basis Test:
Verify P sc
Inductive Test:
Repeat O( 1
ae ) times
Pick ff 2R V
Verify P sc
Reject if the test fails
The following theorem proves that this replacement is valid:
Theorem 2.1. Suppose f is a linear function from the vector space V into a
ring R, and suppose P is a program for f .
(a) Let ffl ! 1=2, and suppose P satisfies the following condition:
Then the function g defined by is a linear
function on V , and g agrees with P on at least 1 \Gamma 2ffl fraction of the inputs.
(b) Furthermore, suppose h ' (ff; satisfies the following
conditions:
(3) Pr ff2V [g('(ff)) 6= h ' (ff; g(ff))] - ffl, where ff is such that '(ff) is defined.
Remarks. The above theorem merely lists a set of properties. The fact that this
set yields a self-tester is presented in Theorem 2.2. Note that hypotheses (1), (2), and
(3) above are conditions on P and g, not tests performed by a self-tester.
Proof. The proof that the function g is linear and P sc computes g (with high
probability) is due to [5]. For the rest of this proof, we will assume that g is linear
and that it satisfies conditions (2) and (3) above.
We first argue that it suffices to prove that if the conditions hold, then for every
agrees with f on the first basis
vector. For i ? 1, the basis vector e i can be obtained by '(e
would follow that g computes f correctly on all
the basis vectors. Finally, since g is linear, it computes f correctly on all of V , since
the vectors in V are just linear combinations of the basis vectors.
Now we show that condition (3) implies that 8ff 2 V ,
an arbitrary element ff 2 V . We will show that the probability over a random
UN, S. R. KUMAR, AND D. SIVAKUMAR
that positive. Since the equality is independent of fi and
holds with nonzero probability, it must be true with probability 1. Now
Pr
\Theta
The first equality in the above is just rewriting. The second equality follows from
the linearity of '. The third equality follows from the fact that g is linear. If the
random variable fi is distributed uniformly in V , the random variables fi and ff \Gamma fi
are distributed identically and uniformly in V . Therefore, by the assumption that g
satisfies condition (3), the fourth equality fails with probability at most 2ffl. The fifth
equality uses the fact that h ' is linear, and the last equality uses the fact that g is
linear.
The foregoing theorem shows that if P (and g) satisfy certain conditions, then g,
which can be computed using P , is identically equal to the function f . The self-tester
comprises the following tests: Linearity Test, Basis Test, and Inductive Test.
Theorem 2.2. For any ae ! 1 and ffl ! 1=2, the above three tests comprise a
2ffl-self-tester for f . That is, if a program P computes f correctly on all inputs, then
the self-tester outputs PASS with probability 1, and if P computes f incorrectly on
more than 2ffl fraction of the inputs, then the self-tester outputs FAIL with probability
at least 1 \Gamma ae.
Proof. In performing the three tests, the self-tester is essentially estimating the
probabilities listed in conditions (1), (2), and (3) of the hypothesis of Theorem 2.1.
Note that condition (2) does not involve any probability; rather, the self-tester uses
P sc to compute g(e 1 ). By choosing O((1=ffl) log(1=ae)) samples in Linearity Test
and Inductive Test and by using the self-corrector with confidence parameter ae=3
in Basis Test the self-tester ensures that its confidence in checking each condition
is at least 1 \Gamma (ae=3).
correctly, the tester always outputs PASS. Con-
versely, suppose the tester outputs PASS. Then with probability ae, the hypotheses
of Theorem 2.1 are true. By the conclusion of Theorem 2.1, it follows that a function
g that is identical to f exists and that g equals P on at least 1 \Gamma 2ffl fraction of the
inputs.
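Putting the three tests together, here is a hedged Python sketch of the whole self-tester for a linear f, parameterized by the rotation φ and the relation h_φ. P_sc denotes a self-corrected wrapper around P, f_e1 is the single known correct value f(e_1), range addition is written with Python's + for simplicity, and the trial counts are illustrative rather than the ones dictated by the analysis above.

import random

def self_test_linear(P, P_sc, add, random_elem, e1, f_e1, phi, h_phi, trials=50):
    """Generic self-tester: Linearity Test, Basis Test, Inductive Test.

    add(a, b)     : group addition on the domain V.
    random_elem() : uniform random element of V.
    phi(a)        : the basis-rotation operator on V.
    h_phi(a, fa)  : the claimed relation, so that f(phi(a)) = h_phi(a, f(a)).
    """
    # Linearity Test: P(a + b) = P(a) + P(b) on random pairs.
    for _ in range(trials):
        a, b = random_elem(), random_elem()
        if P(add(a, b)) != P(a) + P(b):
            return "FAIL"
    # Basis Test: the self-corrected program is right on the single generator e1.
    if P_sc(e1) != f_e1:
        return "FAIL"
    # Inductive Test: the rotation relation holds on random inputs.
    for _ in range(trials):
        a = random_elem()
        if P_sc(phi(a)) != h_phi(a, P_sc(a)):
            return "FAIL"
    return "PASS"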
2.1. Applications. We present some applications of Theorem 2.2. We remind
the reader that a linear function f on a vector space V is efficiently self-testable without
the generator bottleneck if there is a (linear) function h ' that is easily computable
and that satisfies In each of our applications
f , we show that a suitable function exists that satisfies the above condi-
tions. Recall the example of the polynomial evaluation function E c
the identity E c holds; in the applications below, we will only establish
similar relationships. Also, for the sake of simplicity, we do not give all the technical
parameters required; these can be computed by routine calculations following the
proofs of the theorems in the last section.
Our applications concern linear functions of polynomials. We obtain self-testers
for polynomial evaluation, Fourier transforms, polynomial differentiation, polynomial
integration, and the mod function of polynomials. Moreover, the vector space setting
lets us state some of these results in terms of the matrices that compute linear
transforms of vector spaces.
Let Pn ' K [x] denote the group of polynomials in x of degree - n over a field
K . The group Pn forms a vector space under usual polynomial addition and scalar
multiplication by elements from K . The polynomials
polynomial
has the vector representation (q
basis rotation function ' in this case is just multiplication by x, thus
that multiplying q by x results in a polynomial of degree n+ 1. To handle this minor
detail, we will assume that the program works over the domain Pn+1 and we conclude
correctness over Pn .
Polynomial Evaluation. For any c 2 K , let E c (q) denote, as described before,
the function that returns the value q(c). This function is linear. Moreover, the
relation between E c (xq) and E c (q) is simple and linear: E c To self-test
a program P that claims to compute E c , the Inductive Test is simply to choose
many random q's, and verify that P sc holds.
Operators and the Discrete Fourier Transform. If
are distinct elements of K , then one may wish to evaluate a polynomial q 2 Pn
simultaneously on all points. The ideas for E c extend easily to this case, for
any u 2 K , and these relations hold simultaneously.
Let ! be a principal (n 1)-st root of unity in K . The operation of converting a
polynomial from its coefficient representation to pointwise evaluation at the powers of
! is known as the Discrete Fourier Transform (DFT). DFT has many fundamental applications
that include fast multiplication of integers and polynomials. With our nota-
tion, the DFT of a polynomial q 2 Pn is simply F
The DFT F is linear, and F that here the
function h is really n coordinate functions h n. The self-tester will
simply choose q's randomly, request the program to compute F (q) and F (xq), and
verify for each i that (F(xq))[i] = ω^i (F(q))[i] holds.
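Here is an illustrative Python check of this coordinatewise relation for the DFT over Z_p with a principal (n+1)-st root of unity ω (a sketch under that setup, not the paper's code):

import random

def dft(q, omega, p):
    """Evaluate the polynomial q (coefficient list) at omega^0, ..., omega^(len(q)-1)."""
    return [sum(c * pow(omega, i * j, p) for j, c in enumerate(q)) % p
            for i in range(len(q))]

def dft_inductive_test(P, omega, p, n, trials=10):
    """Check that P(x*q)[i] == omega^i * P(q)[i] mod p for random q of degree <= n.

    x*q has degree <= n+1, so P is assumed to accept the slightly larger
    domain, exactly as the text does for the degree increase.
    """
    for _ in range(trials):
        q = [random.randrange(p) for _ in range(n + 1)]
        Fq, Fxq = P(q), P([0] + q)                        # F(q) and F(x*q)
        if any(Fxq[i] != (pow(omega, i, p) * Fq[i]) % p for i in range(n + 1)):
            return False
    return True

# Toy usage over Z_17, where omega = 2 has order 8 (so n + 1 = 8).
p, omega, n = 17, 2, 7
print(dft_inductive_test(lambda q: dft(q, omega, p), omega, p, n))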
This suggests the following generalization (for the case of arbitrary vector spaces).
Simultaneous evaluation of a polynomial at d corresponds to
multiplying the vector p by a
. The ideas
used to test simultaneous evaluation of polynomials and the DFT extend to give a
self-tester for any linear transform that is represented by a Vandermonde matrix.
The matrix for the DFT can be written as a Vandermonde matrix F , where
. The inverse of the DFT, that is, converting a polynomial from pointwise
representation to coefficient form, also has a Vandermonde matrix whose entries are
given by e
\Gammaij . It follows that the inverse Fourier Transform can be
self-tested efficiently. Another point worth mentioning here is that in carrying out
the Inductive Test the self-tester does not have to compute det F . All it needs to
do is verify that for many randomly chosen q's, the identity ( e
F (q))[i]
holds.
Operators in Elementary Jordan Canonical Form. A linear operator M is said to
be in elementary Jordan canonical form if all the diagonal entries of M are c for some
all the elements to the left of the main diagonal (the first non-principal
diagonal in the lower triangle of M) are 1's. It is easy to verify that
where M 0 is a matrix that has a \Gamma1 in the top left corner and a 1 in the bottom right
corner and zeroes elsewhere. Therefore, for every in the
vector space, This gives an easy way to
implement the Inductive Test in the self-tester.
An attempt to extend this to matrices in Jordan canonical form, or even to
diagonal matrices, seems not to work. If, however, a diagonal or shifted diagonal
matrix has a special structure, then we can obtain self-testers that avoid the generator
problem. For example, the matrix corresponding to the differentiation of polynomials
has a special structure: it contains the entries n; on the diagonal above
the main diagonal.
Differentiation and Integration of Polynomials. Differentiation of polynomials is
a linear function D : Pn → Pn−1. We have the explicit form for h: D(xq) = q + x·D(q).
Integration of polynomials is a linear function I : Pn → Pn+1. The explicit form for h
is I(xq) = x·I(q) − I(I(q)). Even though this does not readily fit into our framework
(since it is not of the form h(α, f(α))), the proof of Theorem 2.1 can be
easily modified to handle this case using the linearity of I . For completeness, we spell
out the details for the robustness of the Inductive Test which is the only change
required.
Lemma 2.3. If g : Pn ! Pn+1 is a linear function that satisfies Pr q2Pn [g(xq) 6=
Proof.
Pr
r2Pn
\Theta
Since the event holds with positive probability and is
independent of r, it holds with probability one.
Thus we can avoid the generator bottleneck for these functions. This can be
considered as a special case of the previous application.
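For differentiation, the inductive test amounts to checking the product rule D(xq) = q + x·D(q) coefficientwise. A small Python sketch of that check (our illustration, with the same coefficient-list representation over Z_p):

import random

def derivative(q):
    """Coefficient list of q'(x), with q given low-order coefficient first."""
    return [i * c for i, c in enumerate(q)][1:] or [0]

def diff_inductive_test(P, p, n, trials=20):
    """Check D(x*q) == q + x*D(q) (coefficientwise, mod p) on random q of degree <= n."""
    for _ in range(trials):
        q = [random.randrange(p) for _ in range(n + 1)]
        lhs = [c % p for c in P([0] + q)]                       # D(x*q)
        rhs = [(a + b) % p for a, b in zip(q, [0] + P(q))]      # q + x*D(q)
        m = max(len(lhs), len(rhs))
        if lhs + [0] * (m - len(lhs)) != rhs + [0] * (m - len(rhs)):
            return False
    return True

# Toy usage with a correct differentiation program.
print(diff_inductive_test(lambda q: derivative(q), p=101, n=6))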
Higher Order Differentiation of Polynomials. Let D k denote the k-th differential
operator. It is easy to write a recurrence-like identity for D k in terms of D
This gives us a self-tester only in the library setting described in [5, 20], where one
assumes that there are programs to compute all these differential operators. If we
wish to self-test a program that only computes D k and have no library of lower-order
differentials, this assumption is not valid. To remedy this, we will use the following
lemma, which is proved in the Appendix.
Lemma 2.4. If q is a polynomial in x of degree - k, then
Using this identity, the self-tester can perform an Inductive Test. The robustness
of the Inductive Test can be established as in the proof of Theorem 2.1. For
completeness, we outline the key step here. Let c i denote the coefficient of the term
in the sum in Lemma 2.4. Thus c
(\Gammax) k\Gammai .
Lemma 2.5. If g is a linear function that satisfies Pr q2Pn [
Proof.
Pr
r2Pn
\Theta X
Here the first equality is rewriting, the second equality holds with probability 1\Gamma2ffl
by the assumption that Pr q [
the event
holds with positive probability and is independent of r, it holds with probability one.
Thus, testing if g satisfies this identity for most q suffices to ensure that g satisfies
this identity everywhere. If g does satisfy this identity, then we know the following:
To conclude that g j D k by induction, we need to modify Basis
Test to test k base cases: if
Mod Function. Let ff 2 K [x] be a monic irreducible polynomial. Let M ff (q)
denote the mod function with respect to ff, that is, M ff ff. This is a linear
function when the addition is interpreted as mod ff addition. Since ff is monic, the
degree of M ff (q) is always less than deg ff. If c 2 K is the coefficient of the highest
degree term in M ff (q), we have
ae xM ff (q) if deg xM ff (q) ! deg ff
As before, in testing if a program P computes the function M ff , step (3) of the
self-tester is to choose many q's at random, compute P sc (q) and P sc (xq), and verify
that one of the identities P sc holds (depending
on the degree of q).
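A hedged Python sketch of this inductive test for the mod function (ours, not the paper's): it compares P_sc(xq) against x·M_α(q) as reported by P_sc itself, reduced once by the monic polynomial α when the degree spills over.

import random

def times_x(q):
    return [0] + q                      # coefficient list of x*q (low order first)

def trim(q):
    q = list(q)
    while len(q) > 1 and q[-1] == 0:
        q.pop()
    return q

def mod_inductive_test(P_sc, alpha, p, n, trials=20):
    """Check the two-case identity for M_alpha(x*q) on random q over Z_p.

    alpha: monic polynomial as a coefficient list (low order first), degree d.
    """
    d = len(alpha) - 1
    for _ in range(trials):
        q = [random.randrange(p) for _ in range(n + 1)]
        r = trim([c % p for c in P_sc(q)])            # claimed q mod alpha
        xr = times_x(r)
        if len(xr) - 1 >= d:                          # deg(x*r) >= deg(alpha)
            c = r[-1]                                 # leading coefficient of r
            xr = [(a - c * b) % p for a, b in zip(xr, alpha)]
        expected = trim([c % p for c in xr])
        got = trim([c % p for c in P_sc(times_x(q))])
        if got != expected:
            return False
    return True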
3. Multilinear Functions. In this section we extend the ideas in x2 to multilinear
functions. A k-variate function f is called k-linear if it is linear in each of its variables
when the other variables are fixed, i.e., f(ff
Our main motivating example for a multilinear function is polynomial multiplication
which is bilinear. Note that the domain of f is
generated by n 2 generators of the form (i.e., pairs of generators
of suppose we wish to test P that purports to compute f . The naive
approach would require doing the Generator Tests at these n 2 generators. This
requires O(n 2 ) calls to P , rendering the self-tester highly inefficient. Blum, Luby,
and Rubinfeld [5] give a more efficient bootstrap self-tester that makes O(log O(k) n)
calls to P . It can be seen that for general k-linear functions, their method can be
extended to yield a tester that makes O(log O(k) n) calls to P . (In our context, it is
allowable to think of k as a constant since changing k results in an entirely different
function f .) We are interested in reducing the number of calls to P with respect to
the problem size n for a specific function f . The complexity of the tester we present
here is independent of n, and the self-tester is required to know the correct value of
f at only one point. As in the previous section, our result applies to many general
multilinear functions over large vector spaces.
As before, we define a set of properties depending on f , that, if satisfied by P ,
would necessarily imply that P must be the same as the particular multilinear function
f . For simplicity, we present the following theorem for f that is bilinear. This is an
analog of Theorem 2.1 for multilinear functions.
Theorem 3.1. Suppose f is a bilinear function from V 2 into a ring R, and
suppose P is a program for f .
(a) Let ffl ! 1=4, and suppose P satisfies the following condition:
Then the function g defined by g(ff
is a bilinear function on V 2 , and g agrees
with P on at least 1 \Gamma 2ffl fraction of the inputs.
(b) Furthermore, suppose g satisfies the following conditions:
Proof. A simple extension of the proof in [5] shows that g is bilinear. (Better
bounds on ffl via a different test can be obtained by appealing to [2].) As in the
proof of Theorem 2.1, it suffices to show that given the three conditions, a stronger
version of condition (3) holds: g('(ff 1
h (2)
. With the addition of this last property, it can
be shown that g j f . Taking condition (2) that g(e 1 as the base case
and inducting by obtaining (e via an application of '
to either generator for all 1 be shown that g(e i
bases elements This, combined with the bilinearity property of g, implies
the correctness of g on every input.
Now we proceed to show the required intermediate result that given conditions
(1) and (2), g satisfies the stronger version of condition (3) that we require above:
Pr
\Theta
h (2)
The first equality is a rewriting of terms. Multilinearity of g implies the second
and third equalities. If the probability Pr[g(ff
the fourth equality fails with probability less than 4ffl. The rest of the equality follows
from the multilinearity of h (2)
' and g. If ffl ! 1=4, this probability is nonzero. Since the
first and last terms are independent of are equal with nonzero probability,
the result follows. A similar approach works for h (1)
' as well.
Multilinearity Test:
Repeat O( 1
ae ) times
Pick
Verify
Reject if the test fails
Basis Test:
Verify P sc
Inductive Test:
Repeat O( 1
ae ) times
Pick
Verify
Verify
Reject if the test fails
Note that in the latter two tests we use a self-corrected version P sc of P . The notion
of self-correctors for multilinear functions over vector spaces is implied by random
self-reducibility.
It is easy to see that Theorem 3.1 extends to an arbitrary k-linear function so
long as ffl ! 1=2 k . Thus, we obtain the following theorem whose proof mirrors that of
Theorem 2.2.
Theorem 3.2. If f is a k-variate linear function, then for any ae ! 1 and
, the above three tests comprise a 2 k ffl-self-tester for f that succeeds with
probability at least 1 \Gamma ae.
3.1. Applications. Let q 1 ; q 2 denote polynomials in x. The function M(q
that multiplies two polynomials is symmetric and linear in each variable. Moreover,
since polynomial multiplication has an efficient
self-tester.
An interesting application of polynomial multiplication, together with the mod
function described in x2.1, is the following. It is well-known that a degree n (finite)
extension K of a finite field F is isomorphic to the field F[x]=(ff), where ff is an
irreducible polynomial of degree n over F. Under this isomorphism, each element of
K is viewed as a polynomial of degree - n over F, addition of two elements
is just their sum as polynomials, and multiplication of q
by q 1 q 2 mod ff. It follows that field arithmetic (addition and multiplication) in finite
extensions of a finite field can be self-tested without the generator bottleneck, that is,
the number of calls made to the program being tested is independent of the degree of
the field extension.
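A Python sketch of the bilinear inductive test for polynomial multiplication, checking M(x·q1, q2) = x·M(q1, q2) and the symmetric relation in the second argument on random inputs (representation and trial counts are illustrative):

import random

def times_x(q):
    return [0] + q

def polymul_inductive_test(P_sc, p, n, trials=20):
    """Check M(x*q1, q2) == x*M(q1, q2) and M(q1, x*q2) == x*M(q1, q2) mod p."""
    for _ in range(trials):
        q1 = [random.randrange(p) for _ in range(n + 1)]
        q2 = [random.randrange(p) for _ in range(n + 1)]
        base = [c % p for c in P_sc(q1, q2)]
        if [c % p for c in P_sc(times_x(q1), q2)] != times_x(base):
            return False
        if [c % p for c in P_sc(q1, times_x(q2))] != times_x(base):
            return False
    return True

# Toy usage with a correct (schoolbook) multiplier over Z_101.
def schoolbook(a, b, p=101):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out
print(polymul_inductive_test(schoolbook, p=101, n=5))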
3.2. Matrix Multiplication. Let Mn denote the algebra of n \Theta n matrices over
Mn !Mn denote matrix multiplication. Matrix multiplication
is a bilinear function; however, since it is a matrix operation rather than a vector oper-
ation, it requires a slightly different treatment from the general multilinear functions.
Mn , viewed as an additive group, has n 2 generators; one possible set of generators is
where each generator E i;j is a matrix that has a 1 in position
We note that any generator E i;j can be converted into any
other generator E k;' via a sequence of horizontal and vertical rotations obtained by
multiplications by the special permutation matrix \Pi:
The rotation operations correspond to the ' operator of the model for multilinear func-
tions. There are, however, two different kinds of rotations-horizontal and vertical-
due to the two-dimensional nature of the input, and the function h defining the behavior
of the function with respect to these rotations is not always easily computable
short of actually performing a matrix multiplication. We therefore exploit some additional
properties of the problem to come up with a set of conditions that are sufficient
for P to be computing matrix multiplication f . Let M 0
n denote the subgroup of Mn
that contains only matrices with columns all-zero.
Theorem 3.3. Let P be a program for f and ffl ! 1=8.
(a) Suppose P satisfies the following:
ffl.
Then the function g defined by g(X; Y
bilinear function on M 2
n , and g agrees with P
on at least 1 \Gamma 2ffl fraction of the inputs.
(b) Furthermore, suppose g satisfies the following conditions:
To be able to prove this theorem, we first need to show that the conditions
recounted have stronger implications than their statements. Then we will show that
these strengthened versions of the conditions imply Theorem 3.3.
First, we show that condition (2) implies a stronger version of itself:
Lemma 3.4. If condition (2) in Theorem 3.3 holds then g(E
Proof. We in fact show something stronger. We show that g(X; Y
all
Pr
\Theta g(X; Y
The second equality holds with probability by condition (2). All the rest
hold from linearity of g and f . The result follows since ffl ! 1=4. The lemma follows
since
n .
An immediate adaptation of the proof of Lemma 3.4 can be used to extend condition
(3) to hold for all inputs.
Next, we show that the linearity of g makes it possible to conclude from hypothesis
(4) that g is associative.
Lemma 3.5. If condition (4) in Theorem 3.3 holds then g is always associative.
Proof.
Pr
\Theta
The first equality holds from the linearity of g after expanding X;Y; Z as
respectively. The second one is true by the condition
8ffl. The last one is a recombination of terms using linearity.
We now have the tools to prove Theorem 3.3.
Proof. The bilinearity of g follows from the proof of Theorem 3.1.
From Lemmas 3.4 and 3.5, we have that condition (2) can be extended such that
conditions (3) and (4) on
hold for all inputs.
To show that these properties are sufficient to identify g as matrix multiplication,
note that from the multilinearity of g we can write,
1-i;j-n
1-k;'-n
1-i;j;k;'-n
If implies that g is the
same as f . Now, using our assumptions, we proceed to show the former holds:
The first equality is just a rewriting of the two generators in terms of other generators.
The second one follows from the strengthening of condition (3) that g computes f
whenever one of its arguments is equal to a power of \Pi. The third one follows from
associativity of g, and the fourth one holds because g is the same as f when the first
input is a power of \Pi and because of rewriting of E k+j \Gamma2;' . The fifth equality is true
because g computes f correctly when its first argument is E i;1 (as the consequence of
condition (2), see Lemma 3.4). The last one is a rewriting of the previous equality,
using the associativity of multiplication. Therefore, g is the same function as f .
We now present the test for associativity:
Associativity Test:
Repeat O( 1
ae ) times
Pick X;Y; Z 2R Mn
Verify P sc (X; P sc (Y;
Reject if the test fails
A self-tester can be built by testing conditions (1), (2), (3), and (4), which correspond
to the Property Test the Basis Test, the Inductive Test and the Associativity
Test respectively. Note that testing conditions (2) and (3) involve knowing the value
of f at random inputs. These inputs, however, come from a restricted subspace which
makes it possible to compute f both easily and efficiently. The following theorem is
immediate.
Theorem 3.6. For any ae ! 1 and ffl ! 1=8, there is an ffl-self-tester for matrix
multiplication that succeeds with probability at least 1 \Gamma ae.
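For concreteness, here is a small Python sketch of the Associativity Test above on random matrices over Z_p (our illustration; P_sc stands for the self-corrected multiplication program, and the trial count is illustrative):

import random

def random_matrix(n, p):
    return [[random.randrange(p) for _ in range(n)] for _ in range(n)]

def associativity_test(P_sc, n, p, trials=10):
    """Check P_sc(X, P_sc(Y, Z)) == P_sc(P_sc(X, Y), Z) on random X, Y, Z over Z_p."""
    for _ in range(trials):
        X, Y, Z = (random_matrix(n, p) for _ in range(3))
        if P_sc(X, P_sc(Y, Z)) != P_sc(P_sc(X, Y), Z):
            return False
    return True

# Toy usage with a correct multiplier over Z_97.
def matmul(A, B, p=97):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]
print(associativity_test(matmul, n=4, p=97))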
4. Nonlinear Functions. In this section, we consider nonlinear functions.
Specifically, we deal with exponentiation and constant degree polynomials in the ring
of polynomials over the finite fields Z p . It is obvious that exponentiation and constant
degree polynomials are clearly defined over this ring.
4.1. Constant Degree Exponentiation. We first consider the function f(q) = q^d for some constant d (that is, raising a polynomial to the d-th power). Suppose a program P claims to perform this exponentiation for all degree-n polynomials q ∈ P_n ⊆ K[x]. Using the low-degree test of Rubinfeld and Sudan [19] (see also [14]) we can first test if the function computed by P is close to some degree d polynomial g. As before, using the self-corrected version P_sc of P, we can also verify that g(e_1) = f(e_1). The induction identity applies, and one can test whether P satisfies this property on most inputs. It remains to show that this implies that g computes f on all inputs. We follow a strategy similar to the case of linear
functions, this time using the Lagrange interpolation formula as the robust property
that identifies a degree d polynomial. We note that this idea is similar to the use of
the interpolation formula by Gemmell, et al. [14], which extends the [5] result from
linear functions to low-degree polynomials. Before proceeding with the proof, we state
the following fact concerning the Lagrange interpolation identity.
Fact 4.1. Let g be a degree-d polynomial. For any q ∈ P_n, if q_1, …, q_{d+1} are distinct elements of P_n, the Lagrange interpolation identity expresses g(q) in terms of the values g(q_1), …, g(q_{d+1}).
The self-tester for f(q) = q^d comprises the following tests:
Degree Test:
Verify P is close to a degree d polynomial g (low-degree test)
Reject if the test fails
Basis Test:
Verify P_sc(e_1) = f(e_1)
Inductive Test:
Repeat O(1/ρ) times
Pick α ∈_R V
Verify P_sc satisfies the induction identity at α
Reject if the test fails
Let β denote the probability that the d random choices from the domain produce distinct elements. We will assume that the domain is large enough so that β is close to 1.
Assume that P passes the Degree Test and P_sc passes the Basis Test; that is, P agrees with some degree-d polynomial g on most inputs. We note that the low-degree test of [19] makes O((1/ε) log(1/ρ)) calls to render a decision with the required confidence, and P_sc makes only O((1/ε) log(1/ρ)) calls to compute g correctly with error probability at most ρ/3. Below we sketch the proof that if ε is small enough and P_sc passes the Inductive Test, then g satisfies g(q) = q^d, so that the time taken by the tester is only Θ(d) (when ρ is a constant).
Pr
a
a
a
Here a
The first equality is Fact 4.1, and applies since g has been verified to be a degree d polynomial. Since the q_i's are uniformly and identically distributed, by the Inductive Test the second equality fails with probability at most (d + 1)ε. The third equality is just rewriting, and the fourth equality is due to Fact 4.1 (the interpolation identity), which can be applied so long as the q_i's are distinct, an event that occurs with probability β. Since the equality holds independently of the q_i's, if ε is small enough it holds with probability 1.
Theorem 4.2. The function f(q) = q^d has an O(1/d)-self-tester that makes O(d) queries.
4.2. Constant Degree Polynomials. Next we consider extending the result of §4.1 to arbitrary degree-d polynomials f : P_n → P_{nd}. Clearly the low-degree test and the basis test work as before. The interpolation identity is valid, too. The missing ingredient is the availability of an identity like the induction identity used for exponentiation, which, as we have shown above, is a robust property that can be efficiently tested. We show how to get
around this difficulty; this idea is based on a suggestion due to R. Rubinfeld [private
communication].
Suppose f : P_n → P_{nd} is a degree-d polynomial (e.g., the exponentiation function of §4.1), and suppose a program P purports to compute f. Our strategy is to design a self-corrector R for P and then to estimate the fraction of inputs q such that P(q) ≠ R(q). The
difficulty in implementing this idea by directly using the random self-reducibility of
f is that the usefulness of the self-corrector (to compute f correctly on every input)
depends critically on our ability to certify that P is correct on most inputs. Since
checking whether P is correct on most inputs is precisely the task of self-testing, we
seem to be going in cycles.
To circumvent this problem, we will design an intermediate multiplication program
Q that uses P as an oracle. To design the program Q, we prove the following
technical lemma that helps us express d-ary multiplication in terms of f; that is, we establish a reduction from the multilinear multiplication function g(q_1, …, q_d) = q_1 ⋯ q_d to the nonlinear function f. This reduction is a generalization to degree d of the elementary polarization identity for products of two variables, and is slightly stronger in that it works for arbitrary polynomials of degree d, not just degree-d exponentiation.
Lemma 4.3. For x ∈ {0,1}^d, let x_i denote the i-th bit of x. For any polynomial G of degree d with leading coefficient c,
Σ_{x ∈ {0,1}^d} (−1)^{d − Σ_i x_i} · G(Σ_{i=1}^d x_i p_i) = c · d! · ∏_{i=1}^d p_i.
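For orientation, the d = 2 instance of this identity (for G(t) = t^2 with leading coefficient 1) is just the elementary polarization identity; the short verification below is ours:

\[
\sum_{x \in \{0,1\}^2} (-1)^{2 - x_1 - x_2}\,(x_1 p_1 + x_2 p_2)^2
  \;=\; (p_1 + p_2)^2 \;-\; p_1^2 \;-\; p_2^2 \;+\; 0
  \;=\; 2!\,p_1 p_2 .
\]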
Using the reduction given by Lemma 4.3, we will show how to construct an ε-self-tester T for f (for ε as in Theorem 4.4 below), following the outline sketched next.
(1) First we build a program Q that performs c-ary multiplication for any c ≤ d (if c < d, we can simply multiply by extra 1's). The program Q is then self-tested efficiently (without the generator bottleneck) by using a (1/2^{d+1})-self-tester for the d-variate multilinear multiplication function from §3. The number of queries made to
P in this process is O(1), where the constant depends on d (the degree of f) but not
on n (the dimensionality of the domain of f ). Thus if Q passes this self-testing step,
then it computes multiplication correctly on all but a 1/2^{d+1} fraction of the inputs. If
Q fails the self-testing process, the self-tester T rejects.
(2) Next we build a reliable program Q_sc that self-corrects Q using the random self-reducibility of the multilinear multiplication function. That is, Q_sc can be used to compute c-ary multiplication (for any c ≤ d) correctly for every input with probability at least 1 − ρ for any constant ρ > 0. In particular, by making O(2^d log d) calls to Q (and hence O(2^{2d} log d) calls to P), Q_sc can be used to compute multiplication correctly for every input with probability at least 1 − (1/10d).
(3) Next, we use Q_sc to build the program R that computes f(q) in the straightforward way by using Q_sc to compute the d required multiplications. If Q_sc computes each multiplication correctly with probability at least 1 − (1/10d), then R computes f(q) correctly for any input with probability at least 0.9.
(4) Finally, T randomly picks O((1/ε) log(1/ρ)) many samples q and checks whether P(q) = R(q); T outputs PASS iff P(q) = R(q) for all the chosen values of q.
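Step (4) is a plain sampling loop; a sketch of it is given below. The Poly placeholder type, the function-style wrappers for P and R, and the sample-count formula are illustrative assumptions.

    import java.util.function.Supplier;
    import java.util.function.UnaryOperator;

    // Sketch of Step (4): compare P against the self-corrected evaluator R on random inputs.
    final class FinalSamplingStep {
        static <Poly> boolean pass(UnaryOperator<Poly> P, UnaryOperator<Poly> R,
                                   Supplier<Poly> randomInput, double eps, double rho) {
            int samples = (int) Math.ceil((1.0 / eps) * Math.log(1.0 / rho));
            for (int i = 0; i < samples; i++) {
                Poly q = randomInput.get();
                if (!P.apply(q).equals(R.apply(q))) return false;   // FAIL: P and R disagree
            }
            return true;                                            // PASS
        }
    }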
It is easy to see that if P computes f correctly on all inputs, the self-tester T will
output PASS with probability one. For the converse, suppose that Pr_q[P(q) ≠ f(q)] > ε and that T outputs PASS. We will upper bound the probability of this event by ρ.
Since Q passes the self-testing step (Step (1)), it computes multiplication correctly on all but a 1/2^{d+1} fraction of the inputs, and therefore the use of the self-corrector Q_sc as described in Step (2) is justified. This, in turn, implies the guarantee made of R in Step (3): for every input q, R(q) = f(q) with probability at least 0.9. In Step (4), the probability that P(q) ≠ R(q) for a single random q is at least (0.9)δ, where δ = Pr_q[P(q) ≠ f(q)] > ε. The probability that P(q) = R(q) for every random input q chosen in Step (4) is therefore at most (1 − (0.9)δ)^{O((1/ε) log(1/ρ))}, which is at most ρ.
Thus the probability that T outputs PASS, given that Pr_q[P(q) ≠ f(q)] > ε, is at most ρ. The following theorem is proven, modulo the proof of Lemma 4.3.
Theorem 4.4. The function f(q), where f is a polynomial in q ∈ P_n of degree d, has an O(1/2^d)-self-tester that makes O(2^d) queries.
Even though our self-tester makes O(2 d ) queries to test degree-d exponentiation, the
number of queries is independent of n, the dimensionality of the domain. Thus, our
self-tester is attractive if n is large and d is small. In particular, in conjunction with
the testers for finite field arithmetic described in §3.1, the self-testers described here
help us to efficiently self-test constant degree polynomials on finite field extensions of
large dimension.
It remains to prove Lemma 4.3. This lemma is a direct corollary of the following
lemma, which illustrates a method to express d-ary multiplication in terms of f . The
proof of the next lemma is given in the Appendix.
Lemma 4.5. Let p_1, …, p_d be distinct variables. For x ∈ {0,1}^d, let x_i denote the i-th bit of x. Then
Σ_{x ∈ {0,1}^d} (−1)^{d − Σ_i x_i} · (Σ_{i=1}^d x_i p_i)^d = d! · ∏_{i=1}^d p_i.
Appendix. Proof of Lemma 2.4 and Lemma 4.5.
Lemma 2.4. If q is a polynomial in x of degree ≤ k, then
Proof. By induction on k. The base case k = 0 is obviously true. For k > 0, we have
and since differentiation is linear, we have
Since the first term (i = 0) in the first sum vanishes, and since i·C(k,i) = k·C(k−1,i−1), the first sum evaluates to an expression which equals k·(k−1)!·q = k!·q by the inductive hypothesis. Hence it suffices to show that the second sum evaluates to 0. Using the product rule for differentiation, this sum can be split into three terms. The second term can be seen to be (−x)·k!·D(q) using the induction hypothesis. The first and third terms can be combined to obtain k!·(xD(q)), again using the induction hypothesis. Thus, the entire second sum evaluates to 0.
Lemma 4.5. Let p_1, …, p_d be distinct variables. For x ∈ {0,1}^d, let x_i denote the i-th bit of x. Then
Σ_{x ∈ {0,1}^d} (−1)^{d − Σ_i x_i} · (Σ_{i=1}^d x_i p_i)^d = d! · ∏_{i=1}^d p_i.
Proof. The proof uses the Fourier transform on the boolean cube {0,1}^d (using the standard isomorphism between {0,1}^d and Z_2^d). Let F denote the space of functions from Z_2^d into C. F is a (finite) vector space of functions of dimension 2^d. Define the inner product between functions f, g ∈ F by ⟨f, g⟩ = 2^{−d} Σ_x f(x)g(x).
For α ∈ Z_2^d, define the function χ_α : Z_2^d → C by χ_α(x) = (−1)^{Σ_i α_i x_i}, where x_i and α_i denote the i-th bits, respectively, of x and α. It is easy to check that χ_α(x + y) = χ_α(x)·χ_α(y), whence every χ_α is a character of Z_2^d. Furthermore, it is easy to check that ⟨χ_α, χ_β⟩ equals 1 if α = β and 0 otherwise. Therefore, the characters χ_α form an orthonormal basis of F, and every function f : Z_2^d → C has a unique expansion in this basis as f = Σ_α \hat{f}_α χ_α. This is called the Fourier transform of f, and the coefficients \hat{f}_α are called the Fourier coefficients of f; by the orthonormality of the basis, \hat{f}_α = ⟨χ_α, f⟩. An easy property of the Fourier transform is that for α ≠ 0^d, Σ_x χ_α(x) = 0 (in fact, this is true for any non-trivial character of any group).
For the proof of the lemma, we note that it suffices to prove the lemma for all complex numbers p_i. Fix a list of complex numbers p_1, …, p_d and define the function f(x) = (Σ_{i=1}^d x_i p_i)^d. Then the left hand side of the statement of the lemma is just (−1)^d · 2^d \hat{f}_{1^d}. Thus:
2^d \hat{f}_{1^d} = Σ_{x ∈ Z_2^d} (−1)^{Σ_i x_i} (Σ_{i=1}^d x_i p_i)^d
= Σ_{n_1 + ⋯ + n_d = d} (d choose n_1, …, n_d) (∏_{i=1}^d p_i^{n_i}) Σ_x (−1)^{Σ_i x_i} ∏_{i=1}^d x_i^{n_i}.
The innermost sum is zero if n_i = 0 for some i, since the factor Σ_{x_i ∈ {0,1}} (−1)^{x_i} = 0 can then be split off; so every n_i must be at least 1. Since Σ_i n_i = d, the only way this can happen is if n_i = 1 for every i; otherwise, for some i, n_i > 1, and then n_j = 0 for some j. Keeping only the term n_1 = ⋯ = n_d = 1, it is easy to see that we have 2^d \hat{f}_{1^d} = (−1)^d · d! · ∏_i p_i, which proves the lemma.
Acknowledgments
. We are very grateful to Ronitt Rubinfeld for her valuable
suggestions and guidance. We thank Manuel Blum and Mandar Mitra for useful
discussions. We thank Dexter Kozen for his comments. We are grateful to the two
anonymous referees for valuable comments that resulted in many improvements to our
exposition. The idea of describing the proof of Lemma 4.5 using Fourier transforms
is also due to one of the referees.
--R
Checking approximate computations over the reals
Hiding instances in multioracle queries
Designing programs that check their work
A theory of testing meets a test of theory
Reflections on the Pentium division bug
Functional Equations and Modeling in Science and Engineering
A note on self-testing/correcting methods for trigonometric functions
Locally random reductions in interactive complexity theory
Approximating clique is almost NP-complete
Fast probabilistic algorithms
On self-testing without the generator bottleneck
New directions in testing
Testing polynomial functions efficiently and over rational domains
Robust characterizations of polynomials with applications to program testing
A Mathematical Theory of Self-Checking
Robust functional equations with applications to self-testing/correcting
On the role of algebra in the efficient verification of proofs
--TR
--CTR
M. Kiwi, Algebraic testing and weight distributions of codes, Theoretical Computer Science, v.299 n.1-3, p.81-106,
Marcos Kiwi , Frdric Magniez , Miklos Santha, Exact and approximate testing/correcting of algebraic functions: a survey, Theoretical aspects of computer science: advanced lectures, Springer-Verlag New York, Inc., New York, NY, 2002 | program correctness;generator bottleneck;self-testing |
353196 | An approach to safe object sharing. | It is essential for security to be able to isolate mistrusting programs from one another, and to protect the host platform from programs. Isolation is difficult in object-oriented systems because objects can easily become aliased. Aliases that cross program boundaries can allow programs to exchange information without using a system provided interface that could control information exchange. In Java, mistrusting programs are placed in distinct loader spaces but uncontrolled sharing of system classes can still lead to aliases between programs. This paper presents the object spaces protection model for an object-oriented system. The model decomposes an application into a set of spaces, and each object is assigned to one space. All method calls between objects in different spaces are mediated by a security policy. An implementation of the model in Java is presented. | Introduction
In the age of Internet programming, the importance of sound
security mechanisms for systems was never greater. A host
can execute programs from unknown network sources, so it
needs to be able to run each program in a distinct protection
domain. A program running in a protection domain is
prevented from accessing code and data in another domain,
or can only do so under the control of a security policy. Domains
protect mistrusting programs from each other, and
also protect the host environment. There are several aspects
to protection domains: access control, resource allocation
and control, and safe termination. This paper concentrates
on access control.
There are several ways to implement protection domains
for programs. Operating systems traditionally implement
domains using hardware-enforced address spaces. The trend
towards portable programs and mobile code has led nonetheless
to virtual machines that enforce protection in software.
One example in the object-oriented context is a guarded object
[10]. In this approach, a guard object maintains a reference
to a guarded object; a request to gain access to the
guarded object is mediated upon by the guard. Another example
of software enforced protection is found in Java [2],
where each protection domain possesses its own name space
(set of classes and objects) [17]; domains only share basic
system classes. Isolation between Java protection domains
is enforced by run-time controls: each assignment of an object
reference to a variable in another domain is signaled as
a type error (ClassCastException).
The difficulty in implementing protection domains in an
object-oriented context is the ease with which object aliases
can be created [12]. An object is aliased if there is at least
two other objects that hold a reference to it. Aliasing is difficult
to detect, and unexpected aliasing across domains can
constitute a storage channel since information that was not
intended for external access can be leaked or modified [15].
Aliasing between domains can be avoided by making their
object sets disjoint. Any data that needs to be shared between
domains is exchanged by value instead of by reference.
Partitioning protection domains into disjoint object
graphs is cumbersome when an object needs to be accessible
to several domains simultaneously. This is especially
true if the object is mutable e.g., application environment
objects, as this necessitates continuous copies of the object
being made and transmitted. A more serious problem is that
some system objects must be directly shared by domains,
e.g., system-provided communication objects, and even this
limited sharing can be enough to create aliases that lead to
storage channels. For instance, there is nothing to prevent
an error in the code of a guard object from leaking a reference
to the guarded object to the outside world.
Techniques that control object aliasing are often cited as a
means to enforce security by controlling the spread of object
references in programs [12]. These techniques are fundamentally
software engineering techniques; their goal is to enforce
stronger object encapsulation. Security requires more than
this for two main reasons. First, aliasing control techniques
are often class-based: they aim to prevent all objects of a
class from being referenced, though cannot protect selected
instances of that class. Second, security constraints are dynamic
in nature and aliasing constraints are not. One example
is server containment [7], where the goal is to allow a
server to process a client request, but after this request, the
server must "forget" the references that it holds for objects
transmitted by the client. The goal of server containment is
to reduce the server's ability to act as a covert channel [15].
The crux of the problem is that once a reference is ob-
tained, it can be used to name an object and to invoke methods
of that object. We believe that naming and invocation
must be separated, thus introducing access control into the
language. Least privilege [23] is one example of a system
security property that requires access control for its imple-
mentation. Least privilege means that a program should be
assigned only the minimum rights needed to accomplish its
task. Using an example of file system security, least privilege
insists that a directory object not be able to gain access
to the files it stores [7] in order to minimize the effects of
erroneous directory objects. Thus, a directory can name file
objects, but can neither modify nor extract their contents.
This paper introduces the Object Spaces model for an
object-oriented system; Java [2] is chosen as the implementation
platform. A space is a lightweight protection domain
that houses a set of objects. All method calls between objects
of different spaces are mediated by a security policy,
though no attempt is made to control the propagation of
object references between spaces. The model allows for safe
and efficient object sharing. Its efficiency stems from the fact
that copy-by-value of object parameters between domains is
no longer needed. The model is safe in the sense that if
ever an object reference is leaked to a program in another
space, invocation of a method using that reference is always
mediated by a security policy. In addition, access between
objects of different spaces is prohibited by default; a space
must be explicitly granted an access right for a space to invoke
an object in it, and the owner of this space may at any
time revoke that right.
The implementation of the model is made over Java and
thus no modifications to the Java VM or language are
needed. Each application object is implicitly assigned a
space object. An object can create an object in another
space, though receives an indirect reference to this object:
A bridge object is returned that references the new object.
Our implementation assures the basic property that access
to an object in another space always goes through a bridge
object. Bridges contain a security policy that mediate each
cross-space method call.
A space is a lightweight protection domain as it only models
the access control element of a domain; thread management
and resource control issues are not treated since these
require modifications to the Java virtual machine.
The remainder of this paper is organized as follows. Section
2 outlines the object space model and explains its design
choices. Section 3 presents a Java API for the model and examples
of its use. Section 4 describes the implementation of
the model over Java 2 and gives performance results. Section
5 reviews related work and Section 6 concludes.
2 The Object Space Model
The basis of the object space model is to separate the ability
to name an object from the ability to invoke methods of that
object. This is done by partitioning an application's set of
objects into several object spaces. A space contains a set of
objects, and possibly some children spaces. Every object of
an application is inside of exactly one space. An object may
invoke any method of any object that resides in the same
space. Method invocation between objects of different spaces
is mediated by an application-provided security policy.
We start in Section 2.1 with an overview of the object
space model. A formal definition of the model is given in
Section 2.2, and we present some examples of how the model addresses well-known protection problems in Section 2.3.
2.1 Model Overview
The set of spaces of an application is created dynamically. On application startup, the initial objects occupy a
RootSpace. Objects of this space may then create further
spaces; these new spaces are owned by the RootSpace. These
children spaces may in turn create further spaces. For each
space created by an object, the enclosing space of the creator
object becomes the owner space of the new space. The
space graph is thus a tree under the ownership relation.
Figure 1 Ownership and authorization relations on spaces, shown in three stages (a), (b), (c).
An object in a space may invoke methods of an object
in another space if the second space is owned by the calling
object's space. When the calling object is not in the owner
space of the called object, then the calling space must have
been explicitly granted the right to invoke methods of objects
in the second space by the owner of the second space.
The set of spaces is organized hierarchically because this
models well the control structure of many applications [22].
Typically, a system separates programs into a set of protection
domains since they must be protected from each other.
A program's components may also need to be isolated from
one another, since for example it may use code from different
libraries. In the object space model, a program in a space
can map its components to distinct spaces.
We decided not to include space destruction within the
space model, because of the difficultly of implementing this
safely and efficiently. A space, like an object, can be removed
by the system when other spaces no longer possess a
reference to it (or to the objects inside of it).
An object may create other objects in its space without
any prohibition. A space may also create objects in its children
spaces - this is how a space's initial objects are created.
A space s1 may not create objects in another space s2 if s1 is
not the parent of space s2 . The goal of this restriction is to
prevent s1 from inserting a Trojan Horse object into space
s2 that tricks s2 into granting s1 a right for s2 .
Figure
1 gives an example of how the space graph of an
application develops, and how access rights are introduced.
Ownership is represented by arrows and access rights are
represented by dashed line arrows. When the system starts,
RootSpace (represented by space s0 in the figure) is created.
In this example, space s0 creates two children spaces, s1 and
s2 , and permits objects of space s1 to invoke objects of space
s2 . Objects of space s0 can invoke objects of spaces s1 and
s2 by default since space s0 is the owning space of s1 and s2 .
In Figure 1b, space s1 creates a child space s3 , and grants
it a copy of its access right for s2 . Only space s1 possesses
a right on S3 though. In Figure 1c, s3 has created a child
space s4 , and granted space s2 an access right for s4 . This
means that objects of spaces s2 and s3 can call objects in
space s4 .
The access control model could be seen as introducing
programming complexity because an object that possesses a
reference is no longer assured that a method call on the referenced
object will succeed. This is also the case for applets
in Java where calls issued by applets to system objects are
mediated by SecurityManager objects that can reject the
call.
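Using the API presented later in Section 3, the first stage of Figure 1 could be driven by code along the following lines; apart from the Space, RemoteSpace and IOSObject API, the class and method names are placeholders.

    // Sketch: reproducing stage (a) of Figure 1 from inside the root space s0.
    public class Scenario extends IOSObject {
        public void build() {
            RemoteSpace s1 = mySpace.createChildSpace();   // s0 creates children s1 and s2
            RemoteSpace s2 = mySpace.createChildSpace();
            mySpace.grant(s1, s2);                         // objects of s1 may now invoke objects of s2
            // Stages (b) and (c) are driven from within s1 and s3: each space uses its own
            // Space handle to create children and to pass on rights that it already holds.
        }
    }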
2.2 Formal Definition
The state of the object space protection system is defined
by the triple
[S, O, R]
where S is the set of spaces, O is the ownership relation
and R represents the space access rights. Let N denote the
set from which space names are generated. S is a subset of
names of newly created spaces are taken from
NnS. O is a relation on spaces (N \Theta (N [ fnullg)); we
use s1 O s2 to mean that space s1 "is owned by" space s2 .
The expression s1 O s2 evaluates to true if (s1 , s2
The null value in the definition of O denotes the owner of
RootSpace. Finally, R is a relation of type (N \Theta N ); s1 R s2
means that space s1 possesses the right to invoke methods
in space s2 .
A space always has the right to invoke (objects in) owned
spaces: s1 O s2 ⇒ s2 R s1. Further, an object can always invoke methods on objects in the same space as itself: ∀s ∈ S. s R s.
We now define the semantics of the object space opera-
tions. The grant(s0 , s1 , s2) operation allows (an object of)
space s0 to give (objects of) space s1 the right to invoke objects
of space s2 . For this operation to succeed, s0 must be
the owning space of s2 or, s0 must already have an access
right for s2 and be the owner of s1 . The logic behind this is
that a parent space decides who can have access to its chil-
dren, and a space may always copy a right that it possesses
to its children spaces.
grant(s0, s1, s2)[S, O, R] ≡
if (s2 O s0) ∨ (s0 R s2 ∧ s1 O s0)
then [S, O, (R ∪ {(s1, s2)})]
else [S, O, R]
Access rights between spaces can also be revoked. A space
s can revoke the right from any space that possesses a right
for a space owned by s. When a space loses a right, then all
of its descendant spaces in the space hierarchy implicitly lose
the right also. This is because a space might have acquired
a right, granted a copy of that right to a child space, and
have the child execute code on its behalf that exploits that
access right. The operation revoke(s0 , s1 , s2) is used by
(an object of) space s0 to remove the right of (objects of)
space s1 to access objects of space s2 . The operation is the
reverse of grant. D(s) denotes the set of descendant spaces
of s in the space tree; as said, these spaces also have their
right revoked; that is, the pairs removed from R are those (s', s2) with s' ∈ {s1} ∪ D(s1). This operation does not allow an owner to lose its right to access a child space.
revoke(s0, s1, s2)[S, O, R] ≡
if (s2 O s0) ∨ (s1 O s0 ∧ s1 R s2)
then [S, O, R ∖ {(s', s2) : s' ∈ {s1} ∪ D(s1)}]
else [S, O, R]
A space s may create a new space for which it becomes
the owner. The new space is given a fresh name s' (s' ∉ S).
create(s)[S, O, R] ≡ [S ∪ {s'}, O ∪ {(s', s)}, R ∪ {(s, s'), (s', s')}]
On system startup, the RootSpace s0 is created. The initial system state is thus [{s0}, {(s0, null)}, {(s0, s0)}].
Finally, each time a method call is effected in the system,
an access control decision is made using the checkAccess
operation to determine if space s0 may invoke methods on
objects in space s1:
checkAccess(s0, s1)[S, O, R] ≡ (s0, s1) ∈ R
In Figure 1a, S is {s0, s1, s2}, and O is {(s1, s0), (s2, s0), (s0, null)}. R contains {(s0, s1), (s0, s2), (s1, s2)} as well as the pairs (s_i, s_i) for each s_i in S.
In Figure 1b, the sets {s3}, {(s3, s1)} and {(s3, s2), (s1, s3)} are included in the three elements of the system state, respectively.
In Figure 1c, s3 has created a child space s4, and granted space s2 an access right for s4. Thus, s3 R s4, s2 R s4, and s4 O s3.
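Read operationally, the definitions above amount to a small state machine. The sketch below models the triple [S, O, R] and the create/grant/revoke/checkAccess rules directly; it is an illustrative rendering of the semantics (with our own class and method names), not the Java implementation described in Section 4.

    import java.util.*;

    // Illustrative model of the protection state [S, O, R].
    final class ProtectionState {
        final Set<Integer> spaces = new HashSet<>();              // S
        final Map<Integer, Integer> owner = new HashMap<>();      // O: space -> owner (null for root)
        final Set<List<Integer>> rights = new HashSet<>();        // R: pairs (s1, s2)
        private int next = 0;

        int createRoot() {                                        // initial state [{s0}, {(s0,null)}, {(s0,s0)}]
            int s0 = next++;
            spaces.add(s0); owner.put(s0, null); rights.add(List.of(s0, s0));
            return s0;
        }
        int create(int s) {                                       // create(s): fresh child s' of s
            int sp = next++;
            spaces.add(sp); owner.put(sp, s);
            rights.add(List.of(s, sp)); rights.add(List.of(sp, sp));
            return sp;
        }
        boolean ownedBy(int s1, int s2) { return Objects.equals(owner.get(s1), s2); }   // s1 O s2
        boolean hasRight(int s1, int s2) {                        // s1 R s2 (plus the two built-in rules)
            return s1 == s2 || ownedBy(s2, s1) || rights.contains(List.of(s1, s2));
        }
        void grant(int s0, int s1, int s2) {                      // grant(s0, s1, s2)
            if (ownedBy(s2, s0) || (hasRight(s0, s2) && ownedBy(s1, s0)))
                rights.add(List.of(s1, s2));
        }
        void revoke(int s0, int s1, int s2) {                     // revoke(s0, s1, s2): s1 and descendants lose s2
            if (ownedBy(s2, s0) || (ownedBy(s1, s0) && hasRight(s1, s2)))
                for (int s : spaces)
                    if ((s == s1 || descends(s, s1)) && !ownedBy(s2, s))   // an owner never loses a child
                        rights.remove(List.of(s, s2));
        }
        private boolean descends(int s, int anc) {
            for (Integer p = owner.get(s); p != null; p = owner.get(p)) if (p == anc) return true;
            return false;
        }
        boolean checkAccess(int caller, int target) { return hasRight(caller, target); }
    }

As a quick usage check, calling createRoot() followed by two create calls and one grant reproduces the state listed above for Figure 1a.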
2.3 Examples
We give some brief examples of how the model can be ex-
ploited. More detail is added in Section 3 after a Java API
for the model has been presented.
2.3.1 Program Isolation
Today's computer users cannot realistically trust that the
programs they run are bug or virus free. It is crucial then
that the host be able to run a non-trusted program in isolation
from its services. This means that client programs not
be able to communicate with the services, or that they can
only do so under the control of a security policy that decides
whether each method call from a program to the servers is
permitted.
The basis to achieving isolation using the object space
model is shown in Figure 2a. The Root space creates a space
(Server) for the host service objects, and a client space
for each of the user programs. The host's security policy is
placed in the Root space, and controls whether the user programs
may access the services using the grant and revoke
operations. The code of this example is given in Section 3.2.
In comparison, the ability to isolate programs in this fashion
is awkward in Java using loader spaces. In Java, each
program is allocated its own class loader [17], which is responsible
for loading versions of the classes for the program. An object instantiated from a class loaded by one
loader is considered as possessing a distinct type to objects
of the same class loaded by another loader. This
means that the assignment of an object reference in one
domain to a variable in another domain constitutes a type
error (ClassCastException). This model is inconvenient for
client-server communication, since parameter objects must
be serialized (transferred by value).
Figure 2 Examples with spaces: (a) program isolation, (b) guarded objects, (c) server containment.
2.3.2 Guarded Objects
A common example of a mechanism for controlled sharing
is guarded objects. We consider two different versions of
the guarded object notion here: a Java 2 version [10] and a
second more traditional version [23].
In Java, a guard object contains a guarded object. On
application startup, only the guard object possesses a reference
to the guarded object. This object contains a method
which executes a checkAccess method that encapsulates
a security policy, and returns a reference to the
guarded object if checkAccess permits. This mechanism is
useful in contexts where a client must authenticate itself to
a server before gaining access to server objects, e.g., a file
server that authenticates an access (using checkAccess) before
returning a reference to a file.
An implementation of the more traditional guarded object
notion would never return a reference to the guarded object.
Rather, each method call to the guarded object would be
mediated by the guard, which would then transfer the call
if checkAccess permits, and then transfer back the result
object.
Both of these approaches have weaknesses however. In
the Java version, there is no way to revoke a reference to the
guarded object once this reference has been copied outside of
the guard object. Revocation is needed in practice to confine
the spread of access rights in a system. The problem with
the traditional notion of guarded object is that a method of
the guarded object may return an object that itself contains
a reference to the guarded object. This clearly undermines
the role of the guard.
Figure 2b illustrates how guarded objects are implemented
in the object space model. A guard object is placed in its own
space (Guard), and the guard creates a child space (G-Obj)
in which it instantiates the guarded object. Space Guard
controls what other spaces can access G-Obj. To implement
the traditional version of the guarded object paradigm, the
guard would never grant access to G-Obj to other spaces.
To implement the Java version, getObject() of the guard
grants the Client space access to G-Obj in the event of
checkAccess succeeding. The guard can at any time revoke
access, which is something that cannot be done in traditional
implementations, so even if a reference is leaked, a grant
operation must also be effected for access to the guarded
object to be possible. We give the code of this example in
Section 3.2.
Guards are required in all systems for stronger encapsulation. The goal of encapsulation is to be able to make an object public (accessible to other programs) without making its
component objects directly accessible. This is often a requirement
for kernel interface objects, since a serious error
could occur if a user program gained hold of a reference to
an internal object. An example of this is the security bug
that allowed an applet to gain a reference to its list of code
signers in JDK1.1.1, which the applet could then modify [4].
By adding signer objects to this list, the applet
could inherit the privileges associated with that signer.
private Vector /* of Object */ signers;
public Vector getSigners() {
    return signers;
}
The JDK actually used an array to represent the signers
[4]; arrays require special treatment in the object space
model, as will be seen in Section 4. This example also shows
that declaring a variable as private is not enough to control
access to the object bound to that variable. In the object
space model, stronger encapsulation of internal objects (e.g.,
signers) is achieved by having these objects instantiated in
a space (G-Obj) owned by the kernel interface objects' space
(Guard).
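A sketch of how such an internal list could be shielded with a child space is given below; the CodeSource and SignerList class names and the package string are illustrative, not part of the JDK or of the object space API beyond the methods already introduced.

    // Sketch: keep the signer list in a space owned by the kernel interface object's space.
    public class CodeSource extends IOSObject {
        private RemoteSpace internals;   // child space housing internal state
        private SignerList signers;      // this reference crosses the boundary via a bridge

        public void init() {
            internals = mySpace.createChildSpace();
            signers = (SignerList) mySpace.newInstance("kernel.SignerList", internals);
        }

        public SignerList getSigners() {
            // Even if this reference leaks, every call on it is mediated: no other space
            // has been granted access to 'internals', so callers outside this space
            // cannot invoke methods on the signer list.
            return signers;
        }
    }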
2.3.3 Server Containment
Servers are shared by several client programs. In an environment
where mistrusting programs execute, a server should
not be allowed to act as a covert channel by holding onto references
to objects passed as parameters in a service request
and then subvertly passing these references to a third party.
Security requires that a server be contained [7] - the server
can no longer gain access to any object after the request has
been serviced. A schema for this using spaces is shown in
Figure 2c. The packet space is for objects that are being
passed as parameter. The server is granted access to these
objects for the duration of the service call. This access is
revoked following the call. Server containment requires the
ability to isolate programs from one another, and the ability
to revoke rights on spaces. As such, it uses features also
present in the preceding two examples.
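A minimal sketch of this containment protocol, written against the API of Section 3, is shown below; the Server, Request, RequestData and Result types, as well as the class name, are assumptions made for illustration.

    // Sketch: confine the server's access to request objects to the duration of the call.
    public class Caller extends IOSObject {
        Server server;               // service object living in the server's space
        RemoteSpace serverSpace;     // handle on that space

        public Result invoke(RequestData data) {
            RemoteSpace packetSpace = mySpace.createChildSpace();            // space for parameter objects
            Request req = (Request) mySpace.newInstance("app.Request", packetSpace);
            req.fill(data);
            mySpace.grant(serverSpace, packetSpace);                         // server may touch the packet ...
            try {
                return server.handle(req);
            } finally {
                mySpace.revoke(serverSpace, packetSpace);                    // ... but only during the call
            }
        }
    }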
3 The Object Space API
This section describes the classes of the object space system
API, and then presents an example of its use.
3.1 Basic Java Classes
There are only three classes that an application requires
to use the object space model: IOSObject, Space and
RemoteSpace. We briefly describe the role of each, before
presenting an implementation over Java 2 in Section 4.
The class IOSObject (1) describes an object that possesses an attribute Space.
an attribute Space. This attribute denotes the space that
houses the object. Not all objects of an application need
inherit from IOSObject; the only requirement is that the
first object instantiated within each space be a subclass of
IOSObject since in this way, there is at least one object from
which others may obtain a pointer to their enclosing Space
object. The API of IOSObject is the following:
public class IOSObject {
    protected Space mySpace;
    public IOSObject();
    public final Space getSpace();
}
The getSpace method enables an object to get a handle
on its enclosing space from an IOSObject.
The Space class represents an object's handle on its enclosing
space. Handles on other spaces are instances of the
RemoteSpace class. SpaceObject is an empty interface implemented
by both Space and RemoteSpace.
public final class Space implements SpaceObject {
    public static RemoteSpace createRootSpace(IOSObject iosObj);
    protected Space();
    public RemoteSpace createChildSpace();
    public void grant( SpaceObject sourceSpace, SpaceObject targetSpace );
    public void revoke( SpaceObject sourceSpace, SpaceObject targetSpace );
    public Object newInstance( String className, RemoteSpace target );
    public RemoteSpace getParent();
    protected void setParent(Space parent);
    static boolean checkAccess( Space protectedObjSpace, Space callerSpace );
}
(1) "IOS" comes from "Internet Operating System", which is the name of the project in which the space model was developed.
Recall that spaces are organized in a hierarchy: the
root of the hierarchy is created with the static method
createRootSpace. This method returns a RemoteSpace ob-
ject, and the system ensures that this method is called only
once. The createChildSpace method creates a child space
of the invoking object's space. The grant and revoke methods
implement the access control commands of the model
(see Section 2.2). The space of the object that invokes either
operation is the grantor or revoker space of the operation.
The newInstance method creates an object within the
specified space. This is how objects are initially created
inside of a space. The implementation verifies that the class
specified extends IOSObject, so that subsequently created
objects have a means to obtain a reference to their Space.
Further, only a parent space may execute this method. The
goal is to prevent spaces injecting malicious code into a space
in the aim of forcing that space to execute a grant that would
allow the malicious object space gain an access right to the
attacked space. The setParent method is executed by the
system when initiating a space; the checkAccess method
that consults the security policy. These two methods are
only used by the object space model implementation.
The third of the classes in the object space API is
RemoteSpace:
public final class RemoteSpace implements SpaceObject {
    public RemoteSpace(Space sp);
}
This represents a handle on another space. The only user-visible
(public) operation is the constructor that allows an
object to generate a remote space pointer from the pointer
to its enclosing space. This enables a space to transfer a
pointer to itself to other spaces and thus allow other spaces
to grant it access rights.
It is important to note that an object can only possess
a Space reference to its enclosing space, and never to other
spaces. In this way, the system assures that an object in
one space does not force another space to grant it an access
right since the grant and revoke operations are only defined
in Space, meaning that the system can always identify the
space of the invoking object and thus authorize the call.
Note also that the Space and RemoteSpace classes are final,
meaning that a malicious program cannot introduce Trojan
Horse versions of these classes into the system.
3.2 Example code extracts
The first example continues the program isolation discussion
of Section 2.3.1, and is taken from a newspaper system
for the production and distribution of articles [19]. Here we
concentrate on a program that compiles an article. For security
reasons, we wish to isolate this program from the rest
of the system, in particular from the Storage and graphical Editor objects. This requires being able to mediate all
method calls between the client program and the services.
These security requirements are summarized in Figure 3.
This is a typical example of the need to isolate user programs
from the rest of the system. Section 4 gives a performance
comparison of an implementation of this example
using Java loader spaces with copy-by-value semantics, and
the object spaces implementation.
Figure 3 The article packager example: a Client Space holding the client program, and a Service Space holding the Editor, Storage and Article objects, both owned by Root.
In the code below, Root is the application start-up program
that creates two object spaces, and instantiates the objects
in each domain. This is the only class of the application
that uses the object space model API methods. The Editor
class uses several Swing components to offer a front-end user interface; this exchanges request messages and events with the client program.
public class Root extends IOSObject {
    public void start() {
        // Create the client and server Spaces
        RemoteSpace child1 = mySpace.createChildSpace();
        RemoteSpace child2 = mySpace.createChildSpace();
        // Allocate access rights
        mySpace.grant(child1, child2);
        mySpace.grant(child2, child1);
        // Create the services ...
        Editor E = (Editor)mySpace.newInstance("GUI.Editor", child1);
        Storage A = (Storage)mySpace.newInstance("Kernel.Storage", child1);
        // ... and create the client
        client c = (client)mySpace.newInstance("Kernel.client", child2);
        // Start things running
        Editor ed = E.init(c);
        A.init();
        c.init(ed, A);
    }
    public static void main( String[] args ) {
        RemoteSpace root = Space.createRootSpace(new Root());
    }
}
In this example, the application main starts an instance
of Root. This creates two child spaces, child1 and child2,
grants a right to each space to invoke object methods in the
other. An Editor object and a Storage object are created in
space child1 and the client program is installed in child2.
The editor is given a reference to the client object (so that
it can forward events from the GUI interface) and the client
is given a reference to the two service objects.
The Root class here is almost identical to that used in an
implementation of the guarded object model of Figure 2b.
A Guard object and client are created in distinct spaces. In
the extract of this example below, the guard has a string (of
class IOSString) as guarded object. The class IOSString is
our own version of String; the motivation for this class is
given in Section 4.
import InternetOS.*;
import InternetOS.lang.*;

public class Guard extends IOSObject {
    IOSString guardedObject;
    RemoteSpace guardedSpace;

    public void init() {
        guardedSpace = mySpace.createChildSpace();
        guardedObject = (IOSString)mySpace.
            newInstance("InternetOS.lang.IOSString", guardedSpace);
        guardedObject.set(new IOSString("The secret text."));
    }

    public Object getObject(IOSString password, RemoteSpace caller) {
        // if checkAccess(password) ...
        mySpace.grant(caller, guardedSpace);
        return guardedObject;
    }
}
The Guard object has an init method that is called by
the Root. This method creates a child space (guardedSpace),
instantiates the guarded object in this space, and initializes
it using its set method (defined in IOSString). The role
of the guard is to mediate access requests on getObject. A
client must furnish a password string and the guard verifies
the password using the guard object's checkAccess method.
If the check succeeds, the guard grants access to the client
space and returns the object reference.
public class client extends IOSObject {
    Guard G;

    public void init(Guard G) {
        this.G = G;
        IOSString password = new IOSString("This is my password");
        IOSString secret =
            (IOSString)G.getObject(password, new RemoteSpace(mySpace));
        System.out.println("String is " + secret);
    }
}
The client is a program that requests access from the
guard by supplying the password and a pointer to its space
to getObject.
4 The Object Space Implementation
In this section we describe the implementation of our model.
We first describe the notion of bridge, which is the mechanism
that separates spaces at the implementation level.
For portability and prototyping reasons, the current implementation
is made over the Java 2 platform, so no modifications
to the virtual machine or language were made. A
future implementation could integrate the model into the
JVM; in this way other aspects of protection domains such
as resource control and safe termination can also be treated.
We begin in Section 4.1 by describing the basic role of
bridge objects. Section 4.2 describes how they are interposed
on method calls between spaces, and Section 4.3 explains
how bridge classes are generated. Section 4.4 describes in
more detail how the object space model interacts and in some
cases conflicts with features of the Java language. Section 4.5
presents a performance evaluation of the implementation.
4.1 Bridge Objects
So far, we have seen that objects belong to spaces and that
they interact either locally inside the same space or issue
method calls across space boundaries. Interactions between
objects of different spaces are allowed only if the security
policy permits.
To implement the object space model, a bridge object is
interposed between a caller and a callee object when these
are located in different spaces. If the caller has the authorization
to issue the call, then the bridge forwards the call to
the callee, otherwise a security violation exception is raised.
We call the callee the protected object, since it is protected
from external accesses by the bridge, and we use the term
possessor to refer to the caller. This is illustrated in Figure 4,
where real references are denoted by arrows; the dashed line
arrow denotes a protected reference whose use is mediated
by a security policy. The security policy is represented by
an access matrix accessible to all bridges; this encodes the
authorization relation R defined in Section 2.2.
Figure 4 A bridge object interposed between object spaces: the possessor in one space reaches the protected object in another space only through the bridge, which consults the access matrix.
Bridges are hidden from application programmers. They
are purely an implementation technique and do not appear
in the API. Assuming security permits, a program behaves
as if it has a direct reference to objects in remote spaces. The
main exception to this rule are array references which always
refer to local copies of arrays, even if the entries in an array
can refer to remote objects. We return to the question of
arrays in Section 4.4. Consider class Root in Section 3.2; its
start method contains the call c.init(ed, A) to transfer
references for the editor and storage objects to the client
program. The three objects are all in a different space to
the Root object; the references used are in fact references
to bridge objects even though the programmer of the Root
class does not see this.
Bridges are implemented using instances of Java Bridge
classes, where Bridge is an interface that we provide. Each
class C has a bridged class BC constructed for it. The interface
of BC is the same as that of C. Further, BC is defined
as a subclass of C, which makes it possible to use instances
of BC (i.e., the bridges) anywhere that instance of C are
expected.
The role of a bridge is three-fold:
1. It verifies that the caller can issue a call to the protected
object. To be more precise, this results in verifying that
the space of the caller can access the space of the callee
according to the security policy, consulted using the
checkAccess method of class Space. This method is
shown in Figure 7.
2. It forwards the request from the possessor to the protected
object, if the possessor has the right to access
the protected object.
3. It ensures that objects exchanged as parameters between
the possessor and the protected object do not
become directly accessible from outside their spaces.
The protection model is broken if an object obtains a direct
reference to an object in another space (a reference
is direct if no bridge is interposed between the objects).
This can happen during a call if the arguments are directly
passed to the callee. Therefore, a bridge can be
interposed between the callee and the arguments it receives. Similarly, this wrapping can occur on the object
returned from the method call on the protected object.
To avoid reference leaks exploiting the Java exception
mechanism, bridges are also responsible for catching
exceptions raised during the execution of the protected
object's method, and for throwing bridged versions of
the exceptions to the caller.
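Schematically, every bridge method follows the pattern just listed. The sketch below shows this shape for a generic doWork method, using the helper classes introduced later in Section 4.3; Service, Arg, Result and doWork are placeholders, and the real bridge classes are generated automatically rather than written by hand.

    // Sketch: the general shape of a generated bridge method (placeholder names).
    public class ServiceBridge extends Service implements Bridge {
        BridgeInternal bi = new BridgeInternal();

        BridgeInternal getBridgeInternal() { return bi; }

        public Result doWork(Arg a) {
            // (1) consult the security policy
            if (!Space.checkAccess(bi.protectedObjSpace, bi.callerSpace))
                throw new AccessException("Unauthorized Call");
            try {
                // (3) wrap the argument so the callee never holds a direct foreign reference
                Arg wrapped = (Arg) BridgeFactory.getBridgeForArg(a, bi);
                // (2) forward the request to the protected object
                Result r = ((Service) bi.protectedObj).doWork(wrapped);
                // (3) wrap the result before handing it back to the possessor
                return (Result) BridgeFactory.getBridgeForReturn(r, bi);
            } catch (RuntimeException e) {
                // (3) exceptions are wrapped too, to avoid leaking internal objects
                throw (RuntimeException) BridgeFactory.getBridgeForReturn(e, bi);
            }
        }
    }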
4.2 Interposition of Bridges
Bridges are introduced into the system when an application
object creates an object in a child space using the
newInstance method. This method is furnished by the system
(in Space) and cannot be redefined by users since it is
defined in a final class. In addition to creating the required
object and assigning it to the space, newInstance creates a
bridge for the new object. A reference to this bridge is returned
to the object that initiates the object creation, making
the new object accessible to its creator only through the
bridge. For instance, in the example 3.2, references E, A, c
point to a bridge instead of pointing directly to an Editor,
a Storage or a client object since these objects are created
using newInstance.
The other way that bridge objects appear in the system is
during cross-domain calls where the need for protection for
arguments and returned objects arises. By default, when a
reference to an object is passed through a bridge, a bridge
object for the referenced object is generated in the destination
space. Nevertheless, if a bridge object for the protected
object already exists in the destination space, then a reference
to this bridge is returned instead of having a new bridge
object generated. This is implemented using a map that
maps protected object and space pairs to the bridge used by
objects in that space to refer to the protected object. An
advantage of this solution is that the same bridge is shared
among objects of the same space referring to the same protected
object. However, if objects reside in different spaces,
they cannot share the same bridge. A second advantage of
this is that the time needed to consult the bridge cache is
inferior to the time needed to generate a new bridge object.
A final advantage concerns equality semantics: the == operator
applied to two bridge references to the same protected
object always evaluates to true.
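One plausible shape for this per-(object, space) cache is sketched below; the class name and the choice of map types are ours, not necessarily those used by the actual implementation.

    import java.util.HashMap;
    import java.util.IdentityHashMap;
    import java.util.Map;

    // Sketch: at most one bridge per (protected object, possessor space) pair.
    final class BridgeCache {
        // Outer map keyed by object identity, inner map keyed by the possessor's space.
        private final Map<Object, Map<Space, Bridge>> cache = new IdentityHashMap<>();

        synchronized Bridge get(Object protectedObj, Space callerSpace) {
            Map<Space, Bridge> perSpace = cache.get(protectedObj);
            return (perSpace == null) ? null : perSpace.get(callerSpace);
        }

        synchronized void put(Object protectedObj, Space callerSpace, Bridge bridge) {
            cache.computeIfAbsent(protectedObj, k -> new HashMap<>()).put(callerSpace, bridge);
        }
    }

Sharing one bridge per space also gives the == behaviour described above, since all objects of a space see the same bridge object for a given protected object.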
However, there are cases where bridge interposition is not
necessary. For instance, if an object creates another object
which resides in the same space as its creator, then a direct
reference to the new object is allowed. This is the case
when the Java new operator is used, i.e., an object created
with new belongs to the same space as its creator and direct
invocations are allowed. Further, if a bridge receives a
bridge reference as an argument to a call and observes that
the protected object of that bridge is actually in the destination
space, then the protected object reference is returned
in place of the bridge.
Figure 5 The creation of bridges between spaces (spaces 1-3 with objects O1-O4 and bridges B1-B4).
An example of the interposition of bridge objects is shown
in
Figure
5. Space 1 possesses an object O1 that creates an
object O2 in Space 2 using newInstance; a bridge B1 is
created for this reference. O1 then passes a reference for
itself to O2; B1 detects that this reference is remote and
creates a bridge B2. O2 creates an object O3 locally in its
space using new, and obtains a direct reference to it. O3 then
creates an object O4 in Space 3, and a bridge B3 is created.
Finally, O2 passes a reference for O1 to O4 (via O3); a new
bridge B4 is created for this that notes that the reference is
from Space 3 to Space 1.
The only exception to the above is the exchange of
RemoteSpace objects. Objects of this type can be freely
passed to foreign spaces without bridge intervention. Consequently, direct access to these objects is allowed. These
objects are used by other spaces for granting or revoking
accesses to their children. Allowing direct sharing of
RemoteSpace objects does not lead to reference leaks, since
no methods or instance fields of RemoteSpace are accessible
to user programs, as can be seen from its API.
A Space object transmitted through a bridge is converted
to a RemoteSpace. This is needed to ensure the invariant
that a Space object can only be referenced by an object
enclosed by that space.
4.3 Generating Bridge Classes
This subsection describes the generation by our system of
the bridge class BC which mediates accesses to instances of
class C.
Bridge generation always starts from a call to the
getBridge method of the BridgeFactory class. This method
expects three arguments, the object the bridge has to guard
(the protected object), the space of the protected object and
the space of the possessor. The method getBridge returns
a bridge whose class is a subclass of the protected object's
class. If the class file of the bridge does not exist at the time
the method is called, construction of the class file is started
and instantiation of a new object follows. This method is
also responsible for the management of the map that caches
bridges interposed between a given space and a protected
object and returns a cached bridge if another object in the
given space refers to the protected object. An outline of the
code of class BridgeFactory is shown in Figure 10.
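The control flow of getBridge, as just described, could plausibly be arranged as follows; the bridge-class naming convention, the generateWithSoot hook and the cache field are assumptions made for illustration (the cache corresponds to the map sketched in Section 4.2).

    import java.lang.reflect.Constructor;

    // Sketch only: cache lookup, lazy class generation, then reflective instantiation.
    final class BridgeFactorySketch {
        private static final BridgeCache cache = new BridgeCache();

        static Bridge getBridge(Object obj, Space objSpace, Space callerSpace) throws Exception {
            Bridge cached = cache.get(obj, callerSpace);               // reuse one bridge per (object, space)
            if (cached != null) return cached;

            String bridgeName = obj.getClass().getName() + "Bridge";   // assumed naming convention
            Class<?> bridgeClass;
            try {
                bridgeClass = Class.forName(bridgeName);               // bridge class file already generated?
            } catch (ClassNotFoundException missing) {
                bridgeClass = generateWithSoot(obj.getClass());        // otherwise ask Soot to build it
            }
            Constructor<?> ctor = bridgeClass.getDeclaredConstructor(
                    Object.class, Space.class, Space.class);
            Bridge bridge = (Bridge) ctor.newInstance(obj, objSpace, callerSpace);
            cache.put(obj, callerSpace, bridge);
            return bridge;
        }

        private static Class<?> generateWithSoot(Class<?> original) {
            throw new UnsupportedOperationException("bytecode generation not shown in this sketch");
        }
    }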
Bridge classes are placed in the same package as the object
space implementation. Their constructors are protected,
which prevents user code from directly creating instances.
The main task behind the generation of a bridge is to produce, for each method m defined in class C as well as in its
superclasses, a corresponding method mB in BC that implements
the expected functionality of the bridge as described
in Section 4.1.
The structure of each mB method generated for m is uniform. First, a piece of code is inserted at the beginning of
the method to consult the security policy. If the access is
granted, a bridge, instead of the argument itself, is passed to
the protected object when forwarding the call. This has to
be done for each argument (except if the argument is primitive
or of class RemoteSpace, in which case the value of the
argument is copied) and this ensures that the protected object
cannot possess a direct access to arguments. Once the
arguments are converted, the method forwards the call to the
protected object. If the call returns a non-primitive value,
then as for the arguments, a bridge instead of the returned
value is returned to the caller object. Figure 9 presents the
bridge generated from the user class FileUpdater shown in
Figure
8.
To avoid exceptions leaking out internal objects, bridges
catch exceptions thrown in the protected object and generate
a bridge that encapsulates the exception, before throwing
this exception back to the caller.
The code that checks whether access to the protected
object is allowed is performed in the static method
checkAccess defined in class Space. This method takes the
space of a protected object as well as the space of its caller
as input and consults the access matrix stored in the two dimensional
array called authorizations for deciding whether
access can be granted or not.
The code that interposes a bridge between the arguments
of the call and the protected object is present in method
getBridgeForArg, whereas the code that interposes a bridge
between the returned value and the possessor of the bridge
is located inside method getBridgeForReturn. Both methods
are defined inside class BridgeFactory as shown in Figure
10. Notice that these methods handle several cases; either
the argument (respectively the returned object) is a
bridge, a RemoteSpace, a Space or an instance of a user class.
If it is a bridge, then the object protected by the bridge is
extracted and an appropriate bridge is interposed between
the protected object and the callee (respectively the caller).
public class BridgeInternal {
    Object protectedObj;
    Space protectedObjSpace;
    Space callerSpace;

    // initialize fields
    void initialize( Object go, Space goSpace, Space pSpace ) { ... }
}
Figure 6 Class BridgeInternal
class Space {
    // the access matrix
    static boolean[][] authorizations;

    static boolean checkAccess( Space protectedObjSpace, Space callerSpace ) {
        // index() is assumed here to map a space to its row/column in the matrix
        return authorizations[callerSpace.index()][protectedObjSpace.index()];
    }
}
Figure 7 Method checkAccess controls access.
Each bridge contains a BridgeInternal object. The role
of this object is to store the information related to the state
of the bridge, i.e, its protected object, the latter's space,
and the space of the possessor. The class BridgeInternal is
shown in Figure 6. It is not possible to reserve a field for this
information inside a user bridge class because the generic
methods getBridgeForArg and getBridgeForReturn need
to access this information when they receive any bridge as
argument or returned object.
Soot is the framework for manipulating Java bytecode [24]
that we used for generating the bridge classes.
public class FileUpdater {
    public File concatFiles(File file1 , File file2)
            throws FileNotFoundException {
        if (!file1.exists() || !file2.exists())   // assumed check; the original test is elided
            throw new FileNotFoundException("File Not Found!");
        // ... append the contents of file2 to file1 ...
        return file1;
    }
}
Figure 8 A user class example
4.4 Caveats for Java
This section looks in more detail at the implications of the
object space implementation for Java programs. In particular, several features of the language, such as final classes and methods, are incompatible with the implementation approach. Dealing with these issues means imposing restrictions
on the classes of objects that can be referenced across
space boundaries.
Final and private clauses are important software engineering
notions for controlling the visibility of classes in an
application. For the object space implementation to work,
each class C of which an object is transfered through a
bridge has a class BC generated that subclasses C. Final
classes thus cannot have bridges generated. In the current
implementation, the bridge generator complains if an object
is passed whose class contains final clauses, though
public class FileUpdaterBridge extends FileUpdater implements Bridge {
    BridgeInternal bi = new BridgeInternal();

    FileUpdaterBridge() {}
    FileUpdaterBridge( Object obj, Space protectedObjSpace, Space callerSpace ) {
        bi.initialize( obj , protectedObjSpace , callerSpace );
    }
    BridgeInternal getBridgeInternal() { return bi; }

    public File concatFiles(File arg1 , File arg2)
            throws FileNotFoundException {
        if( Space.checkAccess( bi.protectedObjSpace , bi.callerSpace ) ) {
            try {
                File arg1Bridge = (File)BridgeFactory.getBridgeForArg( arg1 , bi );
                File arg2Bridge = (File)BridgeFactory.getBridgeForArg( arg2 , bi );
                File returnedObj = ((FileUpdater)bi.protectedObj).
                    concatFiles( arg1Bridge , arg2Bridge );
                return (File)BridgeFactory.getBridgeForReturn( returnedObj , bi );
            } catch (FileNotFoundException e) {
                throw (FileNotFoundException)BridgeFactory.
                    getBridgeForReturn(e , bi);
            } catch (Throwable e) {
                throw (RuntimeException)BridgeFactory.
                    getBridgeForReturn(e , bi);
            }
        } else
            throw new AccessException("Unauthorized Call");
    }
}
Figure 9 Example bridge class generated.
this restriction does not apply to java.lang.Object (see below). In order to handle final methods and classes, the object
space system loader could remove final modifiers from
class BridgeFactory {
    // maps a pair (objectToProtect , callerSpace)
    // to the bridge interposed between them
    static Map objAndCallerSpaceToBridge;

    static Bridge getBridge( Object object ,
                             Space protectedObjSpace,
                             Space callerSpace ) {
        // This method first checks if the map already contains a bridge
        // interposed between the object and callerSpace.
        // If so, it returns the bridge.
        // If not, it checks whether the bridge's class file exists.
        // If the class file does not exist, this method asks Soot to build one.
        // Finally, it creates and returns a new instance of the bridge.
    }

    static Object getBridgeForArg(Object arg, BridgeInternal currentBI) {
        if( arg instanceof Bridge ) {
            BridgeInternal argBI = ((Bridge)arg).getBridgeInternal();
            // If the call argument is a bridge on an object located in the
            // same space as the callee, return a direct reference to the object.
            if( argBI.protectedObjSpace == currentBI.protectedObjSpace )
                return argBI.protectedObj;
            // The call argument is located in another space. Get a handle on it.
            return BridgeFactory.getBridge( argBI.protectedObj ,
                                            argBI.protectedObjSpace ,
                                            currentBI.protectedObjSpace );
        } else if( arg instanceof RemoteSpace ) {
            // No bridge required around RemoteSpace.
            return arg;
        } else if( arg instanceof Space ) {
            // Do not allow transfer of space objects.
            return new RemoteSpace((Space)arg);
        } else {
            // The call argument lives in the caller's space. Get a handle on it.
            return BridgeFactory.getBridge( arg ,
                                            currentBI.callerSpace ,
                                            currentBI.protectedObjSpace );
        }
    }

    // Same idea as getBridgeForArg but this time
    // the returned object is protected from the caller
    static Bridge getBridgeForReturn( Object returnedObj,
                                      BridgeInternal currentBI ) { ... }
}
Figure 10 Class BridgeFactory
class files before linking. To prevent illegal subclassing, the
loader must record the final modifiers in each class already
loaded, and verify that further classes loaded do not violate
final constraints. The loader must also remove private
modifiers from classes BC . This rewriting approach was
used by loaders in the JavaSeal [6] system to remove catches
of ThreadDeath exceptions, since catching these exceptions
would allow an applet to ignore terminate signals from its
parent. The re-writing approach does not work for system
classes, as these are loaded and linked by the basic system
loader.
System classes These classes include the java.lang,
java.util and java.io classes. The problem with these
classes is that they can be final, e.g., java.lang.String, or
they contain final methods that cannot be overridden.
The class java.lang.Object must be permitted since every
class sub-classes it. The only final methods of this class
are notifyAll, notify, wait, and getClass. These methods
cannot be overridden, and so invocation of these methods on
objects cannot be controlled. The former three methods are
used for thread synchronization. However, locking is out
of the scope of our access control model; it is an issue for
a fully-fledged protection domain model but this requires
modification to the virtual machine in any case. Concerning
the method getClass, a program that calls getClass on a
bridge class gets a class object for the bridged object. How-
ever, the constructor of a bridge class is protected, so the
program can do nothing with the object.
Special treatment is also given to system classes like
String and Integer which are final classes in Java 2. Our
implementation provides tailored versions of these classes to
represent strings and integers exchanged across boundaries.
The class IOSString for instance is simply a wrapper around
a String object, and can be exchanged between spaces. The
reader may have noticed the class IOSString in the paper's
examples. The object space implementation also provides
a bridged class for IOSString. This class contains a copy
of the wrapped String object in IOSString, and is used
to transfer the value of the string across spaces in the set
method. The API of IOSString is given below. The second
constructor takes a String object; this allows a space to create
an IOSString from a String locally and to transmit that
string value to another domain.
public class IOSString implements Serializable {
  protected String myString;
  public IOSString();
  public IOSString(String s);
  public IOSString getString();
  public void set(IOSString s);
  public void print();
}
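As a minimal illustration of how such a wrapper might be used, the following sketch is given; the Editor interface and its setTitle method are hypothetical stand-ins for a bridged service object in another space, and only the IOSString API above is taken from the paper.
// Minimal sketch; Editor and setTitle are illustrative, not part of the system's API.
interface Editor { void setTitle(IOSString title); }

class IOSStringDemo {
  void sendTitle(Editor editor) {
    IOSString title = new IOSString("Front page"); // wrap a local String value
    editor.setTitle(title);  // the call is mediated by a bridge; the string
                             // value crosses the boundary inside the IOSString
    title.print();           // the caller keeps its own local copy
  }
}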
Lastly, since String is final and cannot have a bridge defined
for it, bridge classes define the toString method to
return null in order to avoid direct references to Strings
in remote domains. In cases where strings need to be exchanged
for convenience, like in exceptions for instance, the
user class should define a getMessage method that returns
an IOSString.
Field accesses Access to fields is also a form of inter-object
communication and must be controlled for security.
The current implementation however does not yet cater for
this. A solution would be for the loader to instrument the
bytecode with instructions that consult the access matrix
before each field access, or for field accesses to be converted
into method calls. The former approach is applied to Java
in [21]. In the current implementation of bridges, accesses to
fields in remote objects become accesses to fields in bridges.
These fields do not reflect the corresponding fields in the
protected object.
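A hedged sketch of the second option follows: a field that must be visible across spaces is placed behind accessor methods, so that the generated bridge subclass can mediate reads and writes like any other call. The field and class layout below are illustrative only.
// Illustrative only: exposing a field through accessors so that a generated
// bridge can interpose the usual access-control check on reads and writes.
public class Article {
  private IOSString status;                  // not reachable directly across spaces
  public IOSString getStatus() { return status; }
  public void setStatus(IOSString s) { status = s; }
}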
Static fields and methods Static methods pose two
problems. First, they cannot be redefined in subclasses. Sec-
ond, objects referenced by static variables could be shared
between protection domains without an access control check
taking place. In a fully fledged implementation of protection
domains, classes should not be shared between domains [6]
to avoid undetected sharing between domains. In the object
space implementation, the bridge generator signals an error
when it receives an object of a user class that contains
static methods.
The problem of static variables is looked at in [8]. This
proposal strengthens isolation between loader spaces by
keeping a different copy of objects referenced by static variables
for each copy of the class used by a loader. Unfor-
tunately, we cannot use this solution for the object space
implementation since classes are shared across domains.
Figure 11 Treatment of arrays in object space implementation.
Arrays Arrays in Java are objects in the sense that methods
defined in Object can be executed on array objects.
Unfortunately, an array is not an object in the sense that
element selection using "[ ]" is not a method call, and this
requires special treatment. Our approach is outlined in Figure
11. Whenever a reference to an array object is copied
across a space boundary (i.e., through a bridge), the array
is copied locally. The copy is even made if the array contains
primitive types like int or char. The implication of
this approach is that method calls on array objects do not
traverse space boundaries and that array selection is done
locally. In effect, copy by value is being used for array ob-
jects; an array entry can be modified in one space without a
corresponding change in another domain. Entries in copied
arrays for objects become bridges if not already so. This
means that non-array objects are always named using the
same bridge object within a space, even though array objects
may be copied. If sharing of arrays is required, then
the programmer must furnish an array class that has entry
selectors as methods.
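A sketch of such a programmer-supplied array class is given below; because element access goes through methods, a bridge can be generated for it and every read and write is access-controlled rather than copied. The class name and element type are illustrative, not part of the implementation described here.
// Illustrative sketch: an array-like class whose entry selectors are methods,
// so it can be shared across spaces through a generated bridge.
public class SharedObjectArray {
  private final Object[] elements;
  public SharedObjectArray(int size) { elements = new Object[size]; }
  public Object get(int i) { return elements[i]; }
  public void set(int i, Object value) { elements[i] = value; }
  public int length() { return elements.length; }
}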
This solution has an interesting repercussion regarding
the example of the bug in Java cited in Section 2.3.2.
The signers object was in fact represented by an array
"Identity signers[]". If the object space model were used
to implement this, then a copy of the signer array would
be returned to the caller, whose modifications to the array
would remain innocuous.
Synchronization on objects is intricately influenced by
the interposition of bridges between objects. Two objects
located in different spaces and willing to synchronize on the
same protected object would experience undesired behavior
since they are implicitly performing their operations on two
different bridges protecting the protected object instead of
acting on the protected object itself. This problem arises if
synchronized statements are used instead of relying on solutions
that exploit synchronized methods. The latter is perfectly
valid since bridges forward calls to protected objects
and consequently, locking occurs on the protected objects
themselves.
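The difference can be made concrete with a small sketch (the class names are illustrative): a synchronized statement locks whatever bridge object the caller happens to hold, whereas a synchronized method is forwarded by the bridge and therefore takes the monitor of the protected object itself.
// Illustrative sketch of the two styles of locking discussed above.
class Counter {
  private int value;
  // Preferred: the bridge forwards this call, so the protected
  // Counter's own monitor is taken.
  public synchronized void increment() { value++; }
}

class Client {
  void unsafe(Counter viaBridge) {
    synchronized (viaBridge) {   // locks the caller's bridge, not the protected
      // ... operations ...      // object; two spaces would lock two different bridges
    }
  }
}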
Native methods can also lead to security flaws since
they could be used to leak object references between spaces
and there is no way to control this code. Our current implementation
for Java does not allow bridges for classes that
possess native methods, except for Object's methods, e.g.,
hashCode.
4.5 Performance evaluation
Efficiency is one of the goals of the object space model. In
particular, the cost of mediation of inter-space calls by bridge
should generally be lower than the cost of copy-by-value (of
which "serialization" is an example) and the exchange of the
byte array over a communication channel.
We conducted performance measures for the implementation
running over SunOS 5.6 on a 333 MHz Ultra-Sparc-IIi
processor using Sun's VM for JDK1.2.1. All measures were
obtained after averaging over a large number of iterations.
One of the basic measures taken was to compare the cost
of a method call between protection domains using the space
(bridge) model and the loader (Java serialization) model. We
also made comparisons with J-Kernel [25]. The latter allows
domains to exchange parameters either by using the Java
serialization mechanism, or by using a faster serialization
tool developed for J-Kernel or by passing capabilities. A
J-Kernel capability is an object that denotes an object in
a remote domain; this is J-Kernel's equivalent of a bridged
object.
The table below shows comparisons for: A) a method call
with no parameters, B) for a method call with a string as
parameter, and C) for a method call with an article object
as a parameter. The Article class is used in the application
of Section 3.2, and consists of a hash-table of files representing
the article contents, as well as strings for the article
attributes. Times are shown in micro-seconds. For A, we
estimated that a basic method call without arguments (and
serialization) was around 5 nano-seconds.
A cross-domain call without arguments is faster in our approach
than with J-Kernel. For such a basic call, J-Kernel's
overhead can mainly be explained by the thread context
switch that has to be performed when crossing domains. In
our implementation, the only overhead resides in the security
policy check required during cross-domain calls. This
cost is quite low since this check reduces to a lookup in an
access matrix implemented as a static two dimensional array
stored in class Space. Even though accessing the matrix
is fast, the trade-off is that space required is O(N 2 ) in the
number of Spaces present in the system.
Mechanisms that use copy for passing parameters are, as
expected, slower than their counterparts that do not (J-Kernel
with capability and our object space model). Further, they
do not scale well with the size of arguments.
The cost of parameter passing with the object space model
is approximately two times slower than passing parameters
with capabilities in J-Kernel. The overhead comes from the
dynamic creation and lookup of bridges in our model. How-
ever, this cost comes with a benefit. Our model has stricter
controls on access to objects. In J-Kernel, once a capability
is released into the environment, it is not possible to control
its spread among domains whereas in our model, we can
selectively grant or revoke access to certain domains.
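For instance, granting and revoking could amount to toggling entries of the authorizations matrix held in class Space; the method names below are hypothetical, since the exact grant/revoke API is not shown in this section.
// Hypothetical sketch only: selective grant/revoke expressed as updates to
// the static access matrix of Figure 7 (these method names are not the actual API).
class SpacePolicy {
  static void grant(Space protectedSpace, Space callerSpace) {
    Space.authorizations[protectedSpace.id][callerSpace.id] = true;
  }
  static void revoke(Space protectedSpace, Space callerSpace) {
    Space.authorizations[protectedSpace.id][callerSpace.id] = false;
  }
}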
            Space    Serial.    J-Kernel
                                Serial.     fast copy    capability
    C       2.5      363.2      1004.2      587.2        1.4
The figures give an estimate of the basic mechanism. To
get a more general overview, we implemented the article
packager example of Section 3.2 using both the object space
model and the Java loader model. The space implementation
was described earlier. In the loader implementation, a
class loader object is created to load the client and article
classes. The service classes (Storage & Editor) are loaded
with the system loader. A communication channel object
is installed between the client and service objects for the
exchange of serialized messages.
The application is highly interactive, so a direct comparison
is not obvious; we therefore compared two types of com-
munication: the cost of saving an article stored within a
program on disk, and the cost of sending an event message
from the GUI Editor to the client.
In the loader version, the time to save a small article is approximately
5617 micro-seconds; this cost includes the time
to serialize the article. In the space version, the figure was
slightly less (5520 micro-seconds). This also involves an article
serialization, since the article must be serialized to be
saved on disk. The figure includes the creation of a bridge
object for the article being passed to the Storage object.
The time to send a message from the GUI to the program
is about 143 micro-seconds in the space implementation. In
the loader implementation, this figure is around 1511 micro-seconds
due to serialization. The cost of serialization can
be reduced by making the classes of the objects exchanged
sharable (have them loaded by the system loader). How-
ever, the result of this is to weaken isolation because there
is greater scope for aliasing between domains.
Regarding space usage, a bridge requires 4 words: a reference
to a BridgeInternal object which contains 3 words
(reference to guarded object, and references to guarded and
possessor spaces). A Space object requires 3 words (an internal
Integer identifier, a reference for the parent space, a reference
to a hash-map object containing the children spaces);
the pointer to the access matrix is static, so is shared by
all Space objects. If there are N spaces active in a system,
then the overhead of a space is N 2 access matrix entries and
NM entries in the hash-map maintained by BridgeFactory
that maps object and space pairs to bridges. M denotes the
number of objects in the space referenced by objects in other
spaces. If all spaces contain M objects, then the maximum
number of bridges in the system is MN(N-1); this represents the
case where all objects in all spaces are referenced by objects
in all other spaces.
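As a small worked example, with N = 3 active spaces each holding M = 100 objects that are referenced from the two other spaces, the access matrix has 3 x 3 = 9 entries and at most 100 x 3 x 2 = 600 bridges (with as many hash-map entries) can exist.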
The measures were taken for installed bridge classes. In
our implementation, a bridge class is generated on the fly if
the class cannot be found on disk. This is a costly process.
For instance, a bridge for the Editor class takes around 3.67
seconds to generate (due to parsing of the class file). On
startup of the article packager application, the root, service
and client spaces are created; this necessitates the creation
of 10 bridges, which takes around 6.24 seconds.
5 Related Work
This section compares our object space model with related
work; it is divided into Java related work, and more general
work on program security.
5.1 Java Security
Java has an advanced security model that includes protection
domains, whose design goal was to isolate applets from
each other. The basic mechanism used is the class loader.
Each applet in Java is assigned its own class loader which
loads a distinct and private version of a class for its protection
domain [17]. Java possesses the property that a class
of one loader has a distinct type to the same class loaded
by another loader. Typing is therefore the basis for isolation
since creating a reference from one loader space to another
is signaled as a type error.
A problem with this model is that dynamic typing can
violate the property that spaces do not reference each
other. This is because all classes loaded by the basic system
loader are shared by all other loader spaces - they are
never reloaded. The system loader loads all basic classes
(e.g., java.lang.*) so sharing between loader spaces is en-
demic. This sharing is enough to lead to aliasing between
loader spaces. Consider a class Password which is loaded
by two loader spaces i and j; the resulting class versions
are Password i and Password j . This class implements the interface
PasswordID with methods init and value. Suppose
that the interface PasswordID is loaded by the system loader.
In this case, the following program allows one password to
read the value of the other, that is, the password object of
loader space (UID 2) can directly invoke the password object
in the other space.
public final class Password implements PasswordID {
  private int UID;
  private PasswordID sister;
  private String password;

  public static void main(String[] args) throws Exception {
    // Create two loader spaces
    MyLoader loader1 = new MyLoader();
    MyLoader loader2 = new MyLoader();
    // Instantiate a Password in each loader space
    PasswordID child1 = (PasswordID)loader1.loadClass("Password").newInstance();
    PasswordID child2 = (PasswordID)loader2.loadClass("Password").newInstance();
    // Root leaks references to each space
    child1.init(1, "hth3tgh3", child2);
    child2.init(2, "tr54ybb", child1);
  }

  public void init(int i, String s, PasswordID R) {
    UID = i; password = s; sister = R;
    // The second password reads the first one's value across loader spaces
    if (UID == 2) System.out.println(sister.value());
  }

  public String value() {
    return password;
  }
}
This program starts by creating two loaders of class
MyLoader. This loader reads files from a fixed directory.
It delegates loading of all basic Java classes and of the
PasswordID interface to the parent (system) loader. The
program then creates an instance of Password in each loader
space (by asking each loader to load and instantiate an instance
of the class). The program grants each password a
reference for each other. Despite the fact that each domain
has a distinct loader, the call on value by the second password
on the first succeeds.
Loader spaces are used to implement protection domains
in several Java-based systems, e.g., [3, 14, 6, 25]. Isolation is
obtained only if the shared classes do not make leaks such as
that in the above example. In the object space approach, the
model at least guarantees that if ever a reference to an object
escapes or is leaked to another space, use of that reference
is nevertheless mediated by a security policy. The security
policy prohibits calls between spaces by default: an access
right for a space must be explicitly granted, and this grant
can be undone by a subsequent revocation.
One advantage of the loader space model over the object
space model is that the former allows a program to control
the classes that are loaded into its protection domain. This
is important for preventing code injection attacks, where malicious
code is inserted into a domain in an attempt to steal
or corrupt data. In the object space model, only a parent
can force a child space to execute code not foreseen by the
program through the newInstance method. However, there
is no way to control the classes used by a particular space.
Section 4 compared the implementation of the object
space model with J-Kernel [25]. Recall that protection domains
in J-Kernel are made up of selected shared system
classes, user classes loaded by a domain loader, as well as
instances of these classes. A capability object is used to
reference an object in a remote domain. A call on a capability
object transfers control to the called domain; parameters
in the call are copied by value unless they are capability
objects, which are copied directly.
In comparison to the object space model, J-Kernel uses
copy-by-value by default, whereas the object space model
uses copy-by-reference. J-Kernel must explicitly create a
capability for an object to transfer it by reference; the
object space model must explicitly serialize an object to copy
it by value. The latter approach is a more natural object-oriented
choice. Access control is based on capabilities in J-
Kernel. A problem with capabilities is that their propagation
cannot be controlled: once a domain exports a capability
for one of its objects, it can no longer control what other
domain receives a copy of the capability. Revocation exists
but this entails revoking all copies of a capability, meaning
that the distribution of access rights for an object must start
again from scratch. In the object space model, an owner can
grant and revoke rights for spaces selectively to other spaces.
Another difference between the two systems is the presence
of the hierarchy in the object space model and the absence
of multiple class loaders and class instances.
The goal of the JavaSeal kernel [6] is to isolate mobile
agents from each other and from the host platform. A protection
domain in JavaSeal is known as a seal, and is also
implemented using the Java loader mechanism. The set of
seals is organized into a hierarchy. A message exchanged
between two seals is routed via their common parent seal,
which can suppress the message for security reasons. It was
in fact our experience with programming a newspaper application
[19] over JavaSeal that first motivated the object
space model. Many objects such as environment variables
and article objects needed to be distributed to several seals.
This meant copy-by-value semantics, which we found to be
cumbersome for mutable objects like key certificates and article
files. We wanted a safe form of object sharing to simplify
programming.
Interesting similarities exist between the object space
model and memory management in real-time Java [5]. The
latter has ScopedMemory objects that act as memory heaps
for temporary objects. A newly created real-time thread can
be assigned a ScopedMemory; alternatively, threads can enter
the context of a ScopedMemory by executing its enter()
method. The memory object contains a reference counter
that is incremented each time that a thread enters it. An
object created by a thread in a ScopedMemory is allocated
in that memory object. An object (in a ScopedMemory) may
create other ScopedMemory objects, thus introducing a hi-
erarchy. The goal of the scoped memory model is to avoid
use of a (slow) garbage collector to remove objects. When
the reference counter of a ScopedMemory object becomes 0,
the objects it contains can be removed. To prevent dangling
references, an object cannot hold a reference to an object
in a sibling ScopedMemory; the JVM dynamically checks all
reference assignments to verify this constraint.
Compared with the object space model, both approaches
use a hierarchy with accesses between spaces being dynamically
checked. However, the access restrictions in the object
space model can be dynamically modified, and accesses between
non-related spaces are possible. On the other hand,
the object space model does not deal with resource control or domain termination.
5.2 Program Security Mechanisms
There has been much work on integrating access control into
programs. Some approaches annotate programs with calls to
a security policy checker [21, 9]. In Java for instance [9], a
system class contains a method call to a SecurityManager
object that checks whether the calling thread has the right
to pursue the call. Another approach to program security
uses programming language support. For instance, the languages
[13, 18] include the notion of access rights; programs
can possess rights for objects and access by a program to
an object can only progress if it possesses the access right.
Language designers today tend to equate security to type
correctness. In this way, security is just another "good be-
havior" property of a program, that can be verified using
static analysis or dynamic checking [20, 16].
Leroy and Rouaix use typing to enforce security in environments
running applets [16]. Security in this context
means that an applet cannot gain access to certain objects
(such as those private to an environment function), and that
objects which are accessible can only be assigned a specified
set of permitted values. Each system type τ has special
versions τ_t that each define a set of permitted values. For
instance, τ may be String, and τ_t be CLASSPATH, with possible
values being /applet/public and /bin/java. Each
conversion from τ to τ_t on an object entails verifying that the
object respects the permitted values. Environment functions
available to applets are bridge-like in the sense that each incoming
reference of type τ is cast to τ_t. This is similar to the
object space model implementation in Java since, for each
class C, a bridge class BC is constructed that contains code to
verify the system's access control policy. Access permissions
are specified by an access matrix in the object space model,
rather than by permitted object values.
The goal of JFlow [20] is information flow security ; this
deals with controlling an attacker's ability to infer information
from an object rather than with controlling access to
the object's methods. JFlow extends the Java language by
associating security labels with variables. A security label
denotes the sensitivity of an object's information. JFlow
has a static analyzer that ensures that an object does not
transfer a reference or data to an object with an inferior
security label, as this would constitute a leak. The complexity
of the mechanism comes from ensuring that information
about objects used in a conditional expression evaluation
is not implicitly leaked to objects modified in the scope of
the conditional expression.
In comparison to these works, the object space implementation
relies on typing to ensure that each object access is
made using a secure version of the class (i.e., one that includes
access control checks). Annotation of classes with
checks could be included to check field accesses between objects
[21], as we mentioned in Section 4.
A related topic to access control is aliasing control,
e.g., Confined types [4], Balloons [1] or Islands [11]. Confined
types is a recent effort to control the visibility of kernel
objects by controlling the visibility of class names. A
confined type is a class whose objects are invisible to specific
user programs. The advantage of this approach is that
the confinement of a type is verified by the compiler. On
the other hand, classes cannot be confined and non-confined
at the same time. It is important that one can designate
some objects of a class as protected, and other objects as
public. For instance, the visibility of Strings that represent
passwords must be confined, though the class String
is a general class that should be accessible to all programs.
Another problem with aliasing control is that object visibility
can vary during the object's lifetime. For instance,
an object given to a server must remain accessible to that
server during the server's work-time. Once the server has
completed its task, the object should be removed from the
server's visibility; this is the server containment property [7].
The object space model controls access on an object basis,
and the visibility constraints can be dynamically altered.
6 Conclusions
This paper has presented an access control model for an
object-oriented environment. The model API is strongly influenced
by Java and its loader spaces programming model,
though aims to overcome weaknesses in Java access control
caused by aliasing. We evaluated our proposition for an implementation
over Java 2. Though the implementation has
the advantage of portability, it means that we cannot address
resource control and domain termination issues. These
issues must be treated if object spaces are to become fully-fledged
protection domains. Virtual machine support could
also be useful to overcome the limitations of the model in
Java, e.g., the prohibition of field accesses and the work
around of final modifiers.
Our results show that interposition of access control programs
between objects of different domains can be more efficient
than a simple copy-by-value of data between loader
spaces. We believe that the object space model is a more
natural object programming style than copy-by-value, especially
for objects that need to be accessed by many programs
and whose value can change often, e.g., environment vari-
ables. The model also has the advantage that any leak of a
reference between spaces is innocuous if the receiving space
has not been explicitly granted the right to use the space of
the referenced object. And even if access has been granted,
this right may always be removed.
Acknowledgments
The authors are grateful to the
anonymous referees and to Jan Vitek for very valuable comments
on the content and presentation of this paper. This
work was supported by the Swiss National Science Foundation,
under grant FNRS 20-53399.98.
--R
Balloon Types: Controlling Sharing of State in Data Types.
The Java Programming Language.
Confined Types.
The Real-Time Specification for Java
The JavaSeal Mobile Agent Kernel
Protection in the Hydra Operating System.
Application Isolation in the Java Virtual Machine
Inside Java 2 platform security: architecture
Islands: Aliasing Protection in Object-Oriented Languages
The Geneva Convention on the Treatment of Object Aliasing
A Language Extension for Controlling Access to Shared Data.
Security in the Ajanta Mobile Agent System.
A Note on the Confinement Problem.
Security properties of typed applets
Dynamic Class Loading in the Java Virtual Machine.
Access Control in Parallel Programs.
Commercialization of Electronic Information.
JFlow: practical mostly-static information flow control
Providing Fine-Grained Access Control for Java Programs
The Protection of Information in Computer Systems.
Optimizing Java Bytecode using the Soot framework: Is it feasible?
--TR
Islands
The Geneva convention on the treatment of object aliasing
The use of name spaces in Plan 9
Security properties of typed applets
Dynamic class loading in the Java virtual machine
The Java programming language (2nd ed.)
JFlow
Inside Java 2 platform security architecture, API design, and implementation
Confined types
Commercialization of electronic information
Application isolation in the Java Virtual Machine
A note on the confinement problem
J-Kernel
The Real-Time Specification for Java
Providing Fine-grained Access Control for Java Programs
Optimizing Java Bytecode Using the Soot Framework
Signing, Sealing, and Guarding Java Objects
The JavaSeal Mobile Agent Kernel
Protection in the Hydra Operating System
--CTR
Chris Hawblitzel , Thorsten von Eicken, Luna: a flexible Java protection system, Proceedings of the 5th symposium on Operating systems design and implementation, December 09-11, 2002, Boston, Massachusetts
Chris Hawblitzel , Thorsten von Eicken, Luna: a flexible Java protection system, ACM SIGOPS Operating Systems Review, v.36 n.SI, Winter 2002
Laurent Dayns , Grzegorz Czajkowski, Lightweight flexible isolation for language-based extensible systems, Proceedings of the 28th international conference on Very Large Data Bases, p.718-729, August 20-23, 2002, Hong Kong, China
Grzegorz Czajkowski , Laurent Dayns, Multitasking without comprimise: a virtual machine evolution, ACM SIGPLAN Notices, v.36 n.11, p.125-138, 11/01/2001
Krzysztof Palacz , Jan Vitek , Grzegorz Czajkowski , Laurent Daynas, Incommunicado: efficient communication for isolates, ACM SIGPLAN Notices, v.37 n.11, November 2002 | access control;protection domains;sharing;aliasing |
353280 | Merging and Splitting Eigenspace Models | AbstractWe present new deterministic methods that, given two eigenspace models, each representing a set of $n$-dimensional observations, will: 1) merge the models to yield a representation of the union of the sets and 2) split one model from another to represent the difference between the sets. As this is done, we accurately keep track of the mean. Here, we give a theoretical derivation of the methods, empirical results relating to the efficiency and accuracy of the techniques, and three general applications, including the construction of Gaussian mixture models that are dynamically updateable. | Introduction
The contributions of this paper are: (1) a method for
merging eigenspace models; (2) a method for splitting
eigenspace models. These represent an advance in
that previous methods for incremental computation
of eigenspace models considered only the addition (or
subtraction) of a single observation to (or from) an
eigenspace model [1, 2, 3, 4, 5, 6]. Our method also
allows the origin to be updated, unlike most other
methods. Thus, our methods allow large eigenspace
All authors are with the Department of Computer Science,
University of Wales, Cardiff, PO Box 916, Cardiff CF2 3XF,
Wales UK: peter@cs.cf.ac.uk
models to be updated more quickly and more accurately
than when using previous methods. A second
advantage is that very large eigenspace models
may now be reliably constructed using a divide-and-
conquer approach. Limitations of existing techniques
are mentioned in [7].
Eigenspace models have a wide variety of applica-
tions, for example: classification for recognition systems
[8], characterising normal modes of vibration
for dynamic models, such as the heart [9], motion
sequence analysis [10], and the temporal tracking of
signals [4]. Clearly, in at least some of these appli-
cations, merging and splitting of eigenspace models
will be useful, as the data is likely to be presented
or processed in batches. For example, a common
paradigm for machine learning systems is to identify
clusters [11], our methods allow clusters to be
split and merged, and dynamically updated as new
data arrive. As another example, an image database
of university students may require as much as one
quarter of its records to be replaced each year. Our
methods permit this without the need to recompute
the eigenspace model ab initio.
This paper is solely concerned with deriving a
theoretical framework for merging and splitting
eigenspaces, and an empirical evaluation of these new
techniques, rather than their particular application in
any area.
An eigenspace model is a statistical description of a
set of N observations in n-dimensional space; such a
model may be regarded as a multi-dimensional Gaussian
distribution. From a geometric point of view,
an eigenspace model can be thought of as a hyperellipsoid
that characterises a set of observations: its
centre is the mean of the observations; its axes point
in directions along which the spread of observations
is maximised, subject to them being orthogonal; the
surface of the hyperellipsoid is a contour that lies at
one standard deviation from the mean. Often, the
hyperellipsoid is almost flat along certain directions,
and thus can be modelled as having lower dimension
than the space in which it is embedded.
Eigenspace models are computed using either
eigenvalue decomposition (also called principal component
analysis) or singular-value decomposition.
We wish to distinguish between batch and incremental
computation. In batch computation all observations
are used simultaneously to compute the
eigenspace model. In an incremental computation,
an existing eigenspace model is updated using new
observations.
Previous research in incremental computation of
eigenspace models has only considered adding exactly
one new observation at a time to an eigenspace
model [1, 2, 3, 4, 5, 6]. A common theme of these
methods is that none require the original observations
to be retained. Rather, a description of the
hyperellipsoid is sufficient information for incremental
computation of the new eigenspace model. Each
of previous these approaches allows for a change in
dimensionality of the hyperellipsoid, so that a single
additional axis is added if necessary. Only our
previous work allows for a shift of the centre of the
hyperellipsoid [6]: other methods keep it fixed at the
origin. This proves crucial if the eigenspace model is
to be used for classification, as explained in [6]: a set
of observations whose mean is far from the origin is
clearly not well modelled by a hyperellipsoid centred
at the origin.
When using incremental methods previous observations
need not be kept - thus reducing storage requirements
and making large problems computationally
feasible. Incremental methods must be used if
not all observations are available simultaneously. For
example, a computer may lack the memory resources
required to store all observations. This is true even
if low-dimensional methods are used to compute the
eigenspace [5, 12]. (We will mention low-dimensional
methods later, but they give an advantage when the
number of observations of less than the dimensionality
of the space, N ! n, which is often true when
observations are images.) Even if all observations
are available, it is usually faster to compute a new
eigenspace model by incrementally updating an existing
one rather than using batch computation [3].
This is because the incremental methods typically
compute p eigenvectors, with p - min(n; N ). The
disadvantage of incremental methods is their accuracy
compared to batch methods. When only few incremental
updates are made the inaccuracy is small,
and is probably acceptable for the great majority of
applications [6]. When many thousands of updates
are made, as when eigenspace models are incremented
with a single observation at a time, the inaccuracies
build up, although methods exist to circumvent this
problem [4]. In contrast, our methods allow a whole
new set of observations to be added in a single step,
thus reducing the total number of updates to an existing
model.
Section 2 defines eigenspace models in detail, standard
methods for computing them, and how they
are used for representing and classifying observations.
Section 3 discusses merging of eigenspace models,
while Section 4 treats splitting. Section 5 presents
empirical results, and Section 6 gives our conclusions.
2 Eigenspace models
In this section, we describe what we mean by
eigenspace models, briefly discuss standard methods
for their batch computation, and how observations
can be represented using them. Firstly, we establish
our notation for the rest of the paper.
Vectors are columns, and denoted by a single un-
derline. Matrices are denoted by a double under-
line. The size of a vector, or matrix, is often im-
portant, and where we wish to emphasise this size,
it is denoted by subscripts. Particular column vectors
within a matrix are denoted by a superscript,
and a superscript on a vector denotes a particular
observation from a set of observations, so we treat
observations as column vectors of a matrix. As an
example, $A^i_{mn}$ is the $i$th column vector in an $(m \times n)$
matrix. We denote matrices formed by concatenation
using square brackets. Thus $[A_{mn}\ b]$ is an $(m \times (n+1))$
matrix, with vector $b$ appended to $A_{mn}$ as a last column.
2.1 Theoretical background
Consider $N$ observations, each a column vector $x^i \in R^n$.
We compute an eigenspace model as follows.
The mean of the observations is
$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x^i$   (1)
and their covariance is
$C_{nn} = \frac{1}{N} \sum_{i=1}^{N} (x^i - \bar{x})(x^i - \bar{x})^T$   (2)
Note that $C_{nn}$ is real and symmetric.
The axes of the hyperellipsoid, and the spread of observations over each axis, are the eigenvectors and eigenvalues of the eigenproblem
$C_{nn} U_{nn} = U_{nn} \Lambda_{nn}$   (3)
or, equivalently, the eigenvalue decomposition (EVD) of $C_{nn}$ is
$C_{nn} = U_{nn} \Lambda_{nn} U_{nn}^T$   (4)
where the columns of $U_{nn}$ are eigenvectors, and $\Lambda_{nn}$ is a diagonal matrix of eigenvalues. The eigenvectors are orthonormal, so that $U_{nn}^T U_{nn} = I_{nn}$, the $(n \times n)$ identity matrix.
The $i$th eigenvector $U^i_{nn}$ and $i$th eigenvalue $\Lambda^{ii}_{nn}$ are associated; the eigenvalue is the length of the eigenvector, which is the $i$th axis of the hyperellipsoid.
Typically, only $p \le \min(n, N)$ of the eigenvectors have significant eigenvalues, and hence only $p$ of the eigenvectors need be retained. This is because the observations are correlated, so that the covariance matrix is, to a good approximation, rank-degenerate: small eigenvalues are presumed to be negligible. Thus an eigenspace model often spans a $p$-dimensional subspace of the $n$-dimensional space in which it is embedded.
Different criteria for discarding eigenvectors and
eigenvalues exist, and these suit different applications
and different methods of computation. Three common
methods are: (1) stipulate p as a fixed inte-
ger, and so keep the p largest eigenvectors [5]; (2)
those p eigenvectors whose size is larger than
an absolute threshold [3]; (3) keep the p eigenvectors
such that a specified fraction of energy in the
eigenspectrum (computed as the sum of eigenvalues)
is retained.
Having chosen to discard certain eigenvectors and eigenvalues, we can recast Equation 4 using block-form matrices and vectors. Without loss of generality, we can permute the eigenvectors and eigenvalues such that $U_{np}$ are those eigenvectors that are kept, and $\Lambda_{pp}$ their eigenvalues, while $U_{nd}$ and $\Lambda_{dd}$ are those discarded. We may rewrite Equation 4 as:
$C_{nn} = [U_{np}\ U_{nd}] \begin{bmatrix} \Lambda_{pp} & 0_{pd} \\ 0_{dp} & \Lambda_{dd} \end{bmatrix} [U_{np}\ U_{nd}]^T$   (5)
Hence
$C_{nn} \approx U_{np} \Lambda_{pp} U_{np}^T$   (6)
with error $U_{nd} \Lambda_{dd} U_{nd}^T$, which is small if $\Lambda_{dd} \approx 0_{dd}$.
Thus, we define an eigenspace model, $\Omega$, as the mean, a (reduced) set of eigenvectors, their eigenvalues, and the number of observations:
$\Omega = (\bar{x}, U_{np}, \Lambda_{pp}, N)$   (7)
2.2 Low-dimensional computation of
eigenspace models
Low-dimensional batch methods are often used to
compute eigenspace models, and are especially important
when the dimensionality of the observations
is very large compared to their number. Thus, they
may be used to compute eigenspace models that
would otherwise be infeasible. Incremental methods
also use a low dimensional approach.
In principle, computing an eigenspace model requires
that we construct an (n \Theta n) matrix, where n
is the dimension of each observation. In practice, the
model can be computed by using an (N \Theta N) ma-
trix, where N is the number of observations. This
is an advantage in applications like image processing
where, typically, N - n.
We show how this can be done by first considering
the relationship between eigenvalue decomposition
(EVD) and singular value decomposition (SVD). This
leads to a simple derivation for a low-dimensional
batch method for computing the eigenspace model.
The same results were obtained, at greater length,
by [5], see also [12].
Let $Y_{nN}$ be the set of observations shifted to the mean, so that $Y^i = x^i - \bar{x}$. Then an SVD of $Y_{nN}$ is:
$Y_{nN} = U_{nn} \Sigma_{nN} V_{NN}^T$   (8)
where $U_{nn}$ are the left singular vectors, which are identical to the eigenvectors previously given; $\Sigma_{nN}$ is a matrix with singular values on its leading diagonal, with $\Lambda_{nn} = \Sigma_{nN}\Sigma_{nN}^T / N$; and $V_{NN}$ are right singular vectors. Both $U_{nn}$ and $V_{NN}$ are orthonormal matrices.
This can now be used to compute eigenspace models in a low-dimensional way, as follows:
$\frac{1}{N} Y_{nN}^T Y_{nN} V_{NN} = V_{NN} S_{NN}$   (9)
is an $(N \times N)$ eigenproblem. $S_{NN}$ is the same as $\Lambda_{nn}$, except for the presence of extra trailing zeros on the main diagonal of $\Lambda_{nn}$. If we discard the small singular values and their singular vectors, following the above, then the remaining eigenvectors are
$U_{np} = Y_{nN} V_{Np} (N S_{pp})^{-1/2}$   (10)
with eigenvalues $\Lambda_{pp} = S_{pp}$.
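A one-line check of why the small eigenproblem suffices (a standard argument, stated here for completeness): if $v$ is an eigenvector of the $(N \times N)$ problem, then $Y_{nN} v$ is an eigenvector of the original $(n \times n)$ problem with the same eigenvalue, since
$\frac{1}{N} Y_{nN}^T Y_{nN} v = \lambda v \;\Longrightarrow\; \frac{1}{N} Y_{nN} Y_{nN}^T (Y_{nN} v) = \lambda\, (Y_{nN} v)$
so the non-zero eigenvalues of $C_{nn} = \frac{1}{N} Y_{nN} Y_{nN}^T$ coincide with those of the $(N \times N)$ matrix, and the corresponding eigenvectors are obtained by normalising $Y_{nN} v$.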
This result formed the basis of the incremental
technique developed by [5] but they did not allow
for a change in origin, nor does their approach readily
generalise to merging and splitting. Others [3]
observe that a solution based on the matrix product
, as above, is likely to lead to inaccurate
results because of conditioning problems, and they
develop a method for incrementally updating SVD
solutions with a single observation. Although their
SVD method has proven more accurate (see [6]), it
is, again, very difficult to generalise, especially if a
change of origin is to be taken into account.
SVD methods were actually proposed quite early in
the development of incremental eigenproblem analysis
[2]. This early work included a proposal to delete
single observations, but did not extend to merging
and splitting. SVD also formed the basis of a proposal
to incrementally update an eigenspace with several
observations at one step [10]. However, contrary
to our method, a possible change in the dimension of
the solution eigenspace was not considered. Further-
more, none of these methods considered a change in
origin.
Our incremental method is based on the matrix
product C
nN , and specifically its approximation
as in Equation 6. It is a generalisation of our
earlier work [6], which now appears naturally as the
special case of adding a single observation.
2.3 Representing and classifying observation
High-dimensional observations may be approximated
by a low-dimensional vector using an eigenspace
model. Eigenspace models may also be used for clas-
sification. We briefly discuss both ideas, prior to using
them in our results section.
An $n$-dimensional observation $x_n$ is represented using an eigenspace model $\Omega = (\bar{x}, U_{np}, \Lambda_{pp}, N)$ as a $p$-dimensional vector
$g_p = U_{np}^T (x_n - \bar{x})$   (11)
This shifts the observation to the mean, and then represents it by components along each eigenvector. This is also called the Karhunen-Loeve transform [13].
The $n$-dimensional residue vector is defined by:
$h_n = (U_{np} g_p + \bar{x}) - x_n$   (12)
and $h_n$ is orthogonal to every vector in $U_{np}$. Thus, $h_n$ is the error in the representation of $x_n$ with respect to $\Omega$.
The likelihood associated with the same observation is given by:
$P(x_n \mid \Omega) = \frac{\exp\bigl(-\frac{1}{2}(x_n-\bar{x})^T C_{nn}^{-1}(x_n-\bar{x})\bigr)}{(2\pi)^{n/2}\,|C_{nn}|^{1/2}}$   (13)
Clearly, the above definition cannot be used directly in cases where $N \le n$, as $C_{nn}$ is then rank degenerate. In such cases an alternative definition due to Moghaddam and Pentland [8] is appropriate (a full explanation of which is beyond the scope of this paper).
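To see the claimed orthogonality of the residue (a short derivation, added for completeness), note that
$U_{np}^T h_n = U_{np}^T\bigl(U_{np} g_p + \bar{x} - x_n\bigr) = g_p - U_{np}^T(x_n - \bar{x}) = g_p - g_p = 0_p$
using $U_{np}^T U_{np} = I_{pp}$; hence $h_n$ has no component along any retained eigenvector, and $\|h_n\|$ measures the part of $x_n$ that the model cannot represent.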
3 Merging eigenspace models
We now turn our attention to one of the two main contributions of this paper, merging eigenspace models.
We derive a solution to the following problem. Let $X_{nN}$ and $Y_{nM}$ be two sets of observations. Let $\Omega = (\bar{x}, U_{np}, \Lambda_{pp}, N)$ and $\Psi = (\bar{y}, V_{nq}, \Delta_{qq}, M)$ be their eigenspace models, respectively. The problem is to compute the eigenspace model $\Phi = (\bar{z}, W_{nr}, \Pi_{rr}, P)$ for $Z_{n(N+M)} = [X_{nN}\ Y_{nM}]$ using only $\Omega$ and $\Psi$.
Clearly, the total number of new observations is $P = N + M$. The combined mean is:
$\bar{z} = \frac{1}{N+M}\bigl(N\bar{x} + M\bar{y}\bigr)$   (14)
The combined covariance matrix, $E_{nn}$ say, is:
$E_{nn} = \frac{N}{N+M}C_{nn} + \frac{M}{N+M}D_{nn} + \frac{NM}{(N+M)^2}(\bar{x}-\bar{y})(\bar{x}-\bar{y})^T$   (15)
where $C_{nn}$ and $D_{nn}$ are the covariance matrices for $X_{nN}$ and $Y_{nM}$, respectively.
We wish to compute the $s$ eigenvectors and eigenvalues that satisfy:
$E_{nn} = W_{ns}\,\Pi_{ss}\,W_{ns}^T$   (16)
where some eigenvalues are subsequently discarded to give $r$ non-negligible eigenvectors and eigenvalues. The problem to be solved is of size $s$, and this is necessarily bounded by
$\max(p, q) \le s \le p + q + 1$   (17)
We explain the perhaps surprising additional 1 in the upper limit later (Section 3.1.1), but briefly, it is needed to allow for the vector difference between the means, $\bar{x} - \bar{y}$.
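The third term of the combined covariance can be verified directly (a short derivation added for completeness): writing $x^i - \bar{z} = (x^i - \bar{x}) + (\bar{x} - \bar{z})$ and using $\bar{x} - \bar{z} = \frac{M}{N+M}(\bar{x} - \bar{y})$,
$\sum_{i=1}^{N}(x^i-\bar{z})(x^i-\bar{z})^T = N C_{nn} + \frac{NM^2}{(N+M)^2}(\bar{x}-\bar{y})(\bar{x}-\bar{y})^T$
since the cross terms vanish; a similar expression holds for the $M$ observations in $Y_{nM}$, and dividing the total by $N+M$ gives Equation 15.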
3.1 Method of solution
This problem may be solved in three steps:
1. Construct an orthonormal basis set, $\Upsilon_{ns}$, that spans both eigenspace models and $\bar{x} - \bar{y}$. This basis differs from the required eigenvectors, $W_{ns}$, by a rotation, $R_{ss}$, so that:
$W_{ns} = \Upsilon_{ns} R_{ss}$   (18)
2. Use $\Upsilon_{ns}$ to derive an intermediate eigenproblem. The solution of this problem provides the eigenvalues, $\Pi_{ss}$, needed for the merged eigenmodel. The eigenvectors, $R_{ss}$, comprise the linear transform that rotates the basis set $\Upsilon_{ns}$.
3. Compute the eigenvectors $W_{ns}$, as above, and discard any eigenvectors and eigenvalues using the chosen criteria (as discussed above) to yield $W_{nr}$ and $\Pi_{rr}$.
We now give details of each step.
3.1.1 Construct an orthonormal basis set
To construct an orthonormal basis for the combined eigenmodels we must choose a set of orthonormal vectors that span three subspaces: (1) the subspace spanned by the eigenvectors $U_{np}$; (2) the subspace spanned by the eigenvectors $V_{nq}$; (3) the subspace spanned by $(\bar{x} - \bar{y})$. The last of these is a single vector. It is necessary because the vector joining the centres of the two eigenspace models need not belong to either eigenspace. This accounts for the additional 1 in the upper limit of the bounds of $s$ (Equation 17). For example, consider the case in which each of the eigenspaces is a 2D ellipse in a 3D space, and the ellipses are parallel but separated by a vector perpendicular to each of them. Clearly, a merged model should be a 3D ellipse because of the vector between their origins.
A sufficient spanning set is:
$\Upsilon_{ns} = [U_{np}\ \nu_{nt}]$   (19)
where $\nu_{nt}$ is an orthonormal basis set for that component of the eigenspace of $\Psi$ which is orthogonal to the eigenspace of $\Omega$, and in addition accounts for that component of $(\bar{x} - \bar{y})$ orthogonal to both eigenspaces; thus $s = p + t$.
To construct $\nu_{nt}$ we start by computing the residues of each of the eigenvectors in $V_{nq}$ with respect to the eigenspace of $\Omega$:
$G_{pq} = U_{np}^T V_{nq}$   (20)
$H_{nq} = V_{nq} - U_{np} G_{pq}$   (21)
The $H_{nq}$ are all orthogonal to $U_{np}$, in the sense that $U_{np}^T H_{nq} = 0_{pq}$. In general, however, some of the $H_{nq}$ are zero vectors, because such vectors represent the intersection of the two eigenspaces. We also compute the residue $h$ of $\bar{y}$ with respect to the eigenspace of $\Omega$, using Equation 12.
$\nu_{nt}$ can now be computed by finding an orthonormal basis for $[H_{nq}\ h]$, which is sufficient to ensure that $\Upsilon_{ns}$ is orthonormal. Gramm-Schmidt orthonormalisation may be used to do this:
$\nu_{nt} = \mathrm{GS}\bigl([H_{nq}\ h]\bigr)$   (22)
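A minimal sketch of this orthonormalisation step (the standard Gramm-Schmidt recurrence, written out here for completeness over the columns $w^1, \ldots, w^{q+1}$ of $[H_{nq}\ h]$, with negligible columns dropped):
$\tilde{w}^j = w^j - \sum_{k<j} (\nu^{k\,T} w^j)\,\nu^k, \qquad \nu^j = \tilde{w}^j / \|\tilde{w}^j\|$, kept only if $\|\tilde{w}^j\|$ exceeds a small threshold.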
3.1.2 Forming a intermediate eigenproblem
We now form a new eigenproblem by substituting Equation 19 into Equation 18, and the result, together with Equation 15, into Equation 16 to obtain:
$\frac{N}{N+M}C_{nn} + \frac{M}{N+M}D_{nn} + \frac{NM}{(N+M)^2}(\bar{x}-\bar{y})(\bar{x}-\bar{y})^T = [U_{np}\ \nu_{nt}]\, R_{ss} \Pi_{ss} R_{ss}^T\, [U_{np}\ \nu_{nt}]^T$   (23)
Multiplying both sides on the left by $[U_{np}\ \nu_{nt}]^T$ and on the right by $[U_{np}\ \nu_{nt}]$, and using the fact that $[U_{np}\ \nu_{nt}]^T$ is a left inverse of $[U_{np}\ \nu_{nt}]$, we obtain:
$[U_{np}\ \nu_{nt}]^T \Bigl(\frac{N}{N+M}C_{nn} + \frac{M}{N+M}D_{nn} + \frac{NM}{(N+M)^2}(\bar{x}-\bar{y})(\bar{x}-\bar{y})^T\Bigr) [U_{np}\ \nu_{nt}] = R_{ss} \Pi_{ss} R_{ss}^T$   (24)
which is a new eigenproblem whose solution eigenvectors constitute the $R_{ss}$ we seek, and whose eigenvalues provide eigenvalues for the combined eigenspace model. We do not know the covariance matrices $C_{nn}$ or $D_{nn}$, but these can be eliminated as follows.
The first term in Equation 24 is proportional to:
$[U_{np}\ \nu_{nt}]^T C_{nn} [U_{np}\ \nu_{nt}] = \begin{bmatrix} U_{np}^T C_{nn} U_{np} & U_{np}^T C_{nn} \nu_{nt} \\ \nu_{nt}^T C_{nn} U_{np} & \nu_{nt}^T C_{nn} \nu_{nt} \end{bmatrix}$   (25)
By Equation 6, $U_{np}^T C_{nn} U_{np} \approx \Lambda_{pp}$. Also, $U_{np}^T \nu_{nt} = 0_{pt}$ by construction, and again using Equation 6 we conclude:
$[U_{np}\ \nu_{nt}]^T C_{nn} [U_{np}\ \nu_{nt}] \approx \begin{bmatrix} \Lambda_{pp} & 0_{pt} \\ 0_{tp} & 0_{tt} \end{bmatrix}$   (26)
The second term in Equation 24 is proportional to:
$[U_{np}\ \nu_{nt}]^T D_{nn} [U_{np}\ \nu_{nt}] = \begin{bmatrix} U_{np}^T D_{nn} U_{np} & U_{np}^T D_{nn} \nu_{nt} \\ \nu_{nt}^T D_{nn} U_{np} & \nu_{nt}^T D_{nn} \nu_{nt} \end{bmatrix}$   (27)
We have $D_{nn} \approx V_{nq} \Delta_{qq} V_{nq}^T$, which on substitution gives:
$\begin{bmatrix} U_{np}^T V_{nq} \Delta_{qq} V_{nq}^T U_{np} & U_{np}^T V_{nq} \Delta_{qq} V_{nq}^T \nu_{nt} \\ \nu_{nt}^T V_{nq} \Delta_{qq} V_{nq}^T U_{np} & \nu_{nt}^T V_{nq} \Delta_{qq} V_{nq}^T \nu_{nt} \end{bmatrix}$   (28)
From Equation 20 we have $G_{pq} = U_{np}^T V_{nq}$. Writing $\Gamma_{tq} = \nu_{nt}^T V_{nq}$, we obtain:
$[U_{np}\ \nu_{nt}]^T D_{nn} [U_{np}\ \nu_{nt}] \approx \begin{bmatrix} G_{pq}\Delta_{qq}G_{pq}^T & G_{pq}\Delta_{qq}\Gamma_{tq}^T \\ \Gamma_{tq}\Delta_{qq}G_{pq}^T & \Gamma_{tq}\Delta_{qq}\Gamma_{tq}^T \end{bmatrix}$   (29)
Now consider the final term in Equation 24:
$[U_{np}\ \nu_{nt}]^T (\bar{x}-\bar{y})(\bar{x}-\bar{y})^T [U_{np}\ \nu_{nt}]$
Setting $g_p = U_{np}^T(\bar{x}-\bar{y})$ and $\gamma_t = \nu_{nt}^T(\bar{x}-\bar{y})$, this becomes:
$\begin{bmatrix} g_p g_p^T & g_p \gamma_t^T \\ \gamma_t g_p^T & \gamma_t \gamma_t^T \end{bmatrix}$   (30)
So, the new eigenproblem to be solved may be approximated by
$\frac{N}{N+M}\begin{bmatrix} \Lambda_{pp} & 0_{pt} \\ 0_{tp} & 0_{tt} \end{bmatrix} + \frac{M}{N+M}\begin{bmatrix} G_{pq}\Delta_{qq}G_{pq}^T & G_{pq}\Delta_{qq}\Gamma_{tq}^T \\ \Gamma_{tq}\Delta_{qq}G_{pq}^T & \Gamma_{tq}\Delta_{qq}\Gamma_{tq}^T \end{bmatrix} + \frac{NM}{(N+M)^2}\begin{bmatrix} g_p g_p^T & g_p \gamma_t^T \\ \gamma_t g_p^T & \gamma_t \gamma_t^T \end{bmatrix} = R_{ss} \Pi_{ss} R_{ss}^T$   (31)
Each matrix is of size $(s \times s)$, with $s = p + t$. Thus we have eliminated the need for the original covariance matrices. Note this also reduces the size of the central matrix on the left hand side. This is of crucial computational importance because it makes the eigenproblem tractable.
3.1.3 Computing the eigenvectors
The matrix $\Pi_{ss}$ is the eigenvalue matrix we set out to compute. The eigenvectors $R_{ss}$ comprise a rotation for $\Upsilon_{ns}$. Hence, we use Equation 18 to compute the eigenvectors $W_{ns} = [U_{np}\ \nu_{nt}] R_{ss}$ for $\Pi_{ss}$. However, not all eigenvectors and eigenvalues need be kept, and some of them may be discarded using a criterion as previously discussed in Section 2. This discarding of eigenvectors and eigenvalues should usually be carried out each time a pair of eigenspace models is merged.
3.2 Discussion on the form of the solution
We now briefly justify that the solution obtained is of the correct form by considering several special cases.
First, suppose that both eigenspace models are null, that is, each is specified by $(0, 0, 0, 0)$. Then the system is clearly degenerate, and null eigenvectors and zero eigenvalues are computed.
If exactly one eigenspace model is null, then the non-null eigenspace model is computed and returned by this process. To see this, suppose that $\Psi$ is null. Then the second and third matrices on the left-hand side of Equation 31 both disappear. The first matrix reduces to $\Lambda_{pp}$ exactly (since $M = 0$), hence the eigenvalues remain unchanged. In this case the rotation $R_{ss}$ is the identity matrix, and the eigenvectors are also unchanged. If instead $\Omega$ is a null model, then only the second matrix will remain (as $N = 0$), and $\nu_{nt}$ and $V_{nq}$ will be related by a rotation (or else be identical). The solution of the eigenproblem then computes the inverse of any such rotation, and the eigenspace model remains unchanged.
Suppose $\Psi$ has exactly one observation; then it is specified by $(y, 0, 0, 1)$. Hence the middle term on the left of Equation 31 disappears, and $\nu_{nt}$ is the unit vector in the direction of the residue of $\bar{y}$ with respect to $\Omega$. Hence $\gamma_t$ is a scalar, $\gamma$, and the eigenproblem becomes
$\frac{N}{N+1}\begin{bmatrix} \Lambda_{pp} & 0_p \\ 0_p^T & 0 \end{bmatrix} + \frac{N}{(N+1)^2}\begin{bmatrix} g_p g_p^T & \gamma g_p \\ \gamma g_p^T & \gamma^2 \end{bmatrix} = R_{ss}\Pi_{ss}R_{ss}^T$   (32)
which is exactly the form obtained when one observation is explicitly added, as we have proven elsewhere [6]. This special case has interesting properties too: if the new observation lies within the subspace spanned by $U_{np}$, then any change in the eigenvectors and eigenvalues can be explained by rotation and scaling caused by $g_p g_p^T$. Furthermore, in the unlikely event that $\bar{x} = \bar{y}$, then the right matrix disappears altogether, in which case the eigenvalues are scaled by $N/(N+1)$ but the eigenvectors are unchanged. Finally, as $N \to \infty$, $N/(N+1) \to 1$, indicating a stable model in the limit.
If $\Omega$ has exactly one observation, then it is specified by $(x, 0, 0, 1)$. Thus the first matrix on the left of Equation 31 disappears, $G_{pq}$ is a zero matrix, and $\nu_{nt} = [V_{nq}\ h]$, where $h$ is the component of $\bar{x} - \bar{y}$ which is orthogonal to the eigenspace of $\Psi$. Hence the eigenproblem is:
$\frac{M}{M+1}\,\Gamma_{tq}\Delta_{qq}\Gamma_{tq}^T + \frac{M}{(M+1)^2}\,\gamma_t\gamma_t^T = R_{ss}\Pi_{ss}R_{ss}^T$   (33)
Given that in this case $\Gamma_{tq} = \nu_{nt}^T V_{nq}$ is the identity matrix with a row of zeros appended, $\Gamma_{tq}\Delta_{qq}\Gamma_{tq}^T$ has the form of $\Delta_{qq}$ but with a row and column of zeros appended. Also, $\gamma_t = \nu_{nt}^T(\bar{x}-\bar{y})$. Substitution of these terms shows that in this case too the solution reduces to the special case of adding a single new observation: Equation 33 is of the same form as Equation 32, as can readily be shown.
If the $\Omega$ and $\Psi$ models are identical, then $\bar{x} = \bar{y}$, $U_{np} = V_{nq}$, $\Lambda_{pp} = \Delta_{qq}$, and $N = M$. In this case the third term on the left of Equation 31 disappears. Furthermore, $\Gamma_{tq}$ is a zero matrix, and $G_{pq} = U_{np}^T V_{nq}$ is the identity matrix, with $p = q$. Hence the first and second matrices on the left of Equation 31 are identical, and both reduce to the matrix of eigenvalues scaled by one half, so that their sum is $\Lambda_{pp}$. Hence, adding two identical eigenmodels yields an identical third.
Finally, notice that for fixed $M$, as $N \to \infty$ the solution tends to the $\Omega$ model; for fixed $N$ the reverse is true; and if $M$ and $N$ tend to infinity simultaneously, then the final term loses its significance.
3.3 Algorithm
Here, for completeness, we now express the mathematical
results obtained above, for merging models,
in the form of an algorithm for direct computer im-
plementation; see Figure 1
3.4 Complexity
Computing an eigenspace model of size $N$ as a single batch incurs a computational cost of $O(N^3)$. Examination of our merging algorithm shows that it also requires an intermediate eigenvalue problem to be solved, as well as other steps; again, overall, giving cubic computational requirements. Nevertheless, let us suppose that $\Omega$, with $N$ observations, can be represented by $p$ eigenvectors, and that $\Psi$, with $M$
Function Merge( x, U, Lambda, N; y, V, Delta, M ) returns ( z, W, Pi, P )
BEGIN
  P = N + M
  z = (N x + M y) / P
  G = U^T V
  H = V - U G
  g = U^T (x - y)
  h = (x - y) - U g
  for each column vector of H
    discard this column,
    if it is of small magnitude.
  endfor
  nu = GrammSchmidt( [H, h] )
  p = number of eigenvalues in Lambda
  q = number of eigenvalues in Delta
  t = number of basis vectors in nu
  A = construct LHS of Equation 31
  Pi = eigenvalues of A
  R = eigenvectors of A
  W = [U, nu] R
  discard small eigensolutions, as appropriate
END
Figure 1: Algorithm for merging two eigenspace models
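In practice, the Merge function can be applied repeatedly: a large observation set can be partitioned into manageable batches, a model built for each batch, and the models merged pairwise in a tree. This is the divide-and-conquer construction of very large eigenspace models referred to in the introduction.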
observations, can be represented by q eigenvectors.
Typically p and q are (much) less than N and M ,
respectively.
To compute an overall model with the batch method requires $O((N+M)^3)$ operations. Assuming that both models to be merged are already known, our merging method requires at most $O((p+q+1)^3)$ operations; the problem to be solved becomes smaller the greater the amount of overlap between the eigenspaces of $\Omega$ and $\Psi$. (In fact, the number of operations required is $O(s^3)$; see the end of Section 3.1.1.)
If one, or both, of the models to be merged is unknown initially, then we incur an extra cost of $O(N^3)$ and/or $O(M^3)$ to compute it, which reduces any advantage. Nevertheless, in one typical scenario, we might expect $\Omega$ to be known (an existing large database of $N$ observations), while $\Psi$ is a relatively small batch of $M$ observations to be added to it. In this case, the extra penalty, of $O(M^3)$, is of little significance compared to $O((N+M)^3)$.
Thus, while an exact analysis is complicated and indeed data dependent, we expect efficiency gains in both time and memory resources in practice.
Furthermore, if computer memory is limited, sub-division
of the initial set may be unavoidable in order
to manage eigenspace model computation. We have
now provided a tractable solution to this problem.
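As a rough worked example using the settings of the experiments below: with $N = M = 150$ images and at most $p = q = 100$ retained eigenvectors in each model, a low-dimensional batch computation works on a problem of size $\min(n, N+M) = 300$, whereas merging solves an intermediate eigenproblem of size at most $s = p + q + 1 = 201$; when one batch is a single new observation ($q = 0$), the intermediate problem has size at most $p + 1 = 101$.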
4 Splitting eigenspace models
Here we show how to split one eigenspace model from another. Given an eigenspace model $\Phi = (\bar{z}, W_{nr}, \Pi_{rr}, N+M)$, we remove $\Psi = (\bar{y}, V_{nq}, \Delta_{qq}, M)$ from it to give a third, $\Omega = (\bar{x}, U_{np}, \Lambda_{pp}, N)$. We must use $\Pi_{rr}$, because $\Pi_{ss}$ is not available in general. Although splitting is essentially the opposite of merging, this is not completely true, as it is impossible to regenerate information which has been discarded when the overall model is created (whether by batch methods or otherwise). Thus, if we split one eigenspace model from a larger one, the eigenvectors of the remnant must still form some subspace of the larger.
We state the results for splitting without proof. Clearly, $N = (N+M) - M$. The new mean is:
$\bar{x} = \frac{1}{N}\bigl((N+M)\bar{z} - M\bar{y}\bigr)$
As in the case of merging, new eigenvalues and eigenvectors are computed via an intermediate eigenproblem. In this case it is:
$\frac{N+M}{N}\,\Pi_{rr} - \frac{M}{N}\,G_{rq}\Delta_{qq}G_{rq}^T - \frac{M}{N+M}\,g_r g_r^T = R_{rr}\Lambda_{rr}R_{rr}^T$
where $G_{rq} = W_{nr}^T V_{nq}$ and $g_r = W_{nr}^T(\bar{y} - \bar{x})$.
The eigenvalues we seek are the $p$ non-zero elements on the diagonal of $\Lambda_{rr}$. Thus we can permute $R_{rr}$ and $\Lambda_{rr}$, and write, without loss of generality:
$R_{rr} = [R_{rp}\ R_{rt}], \qquad \Lambda_{rr} = \begin{bmatrix} \Lambda_{pp} & 0_{pt} \\ 0_{tp} & 0_{tt} \end{bmatrix}$
Hence we need only identify the eigenvectors in $R_{rr}$ with non-zero eigenvalues, and compute the $U_{np}$ as:
$U_{np} = W_{nr} R_{rp}$
In terms of complexity, splitting must always involve the solution of an eigenproblem of size $r$. An algorithm for splitting may readily be derived using a similar approach to that for merging.
5 Results
This section describes various experiments that we
carried out to compare the computational efficiency
of a batch method and our new methods for merging
and splitting, and the eigenspace models produced.
We compared models in terms of the Euclidean distance between the means, the mean angular deviation of corresponding eigenvectors, and the mean relative absolute difference between corresponding eigenvalues. In doing so, we took care that both models had the same number of dimensions.
As well as the simple measures above, other
performance measures may be more relevant when
eigenspace models are used for particular ap-
plications, and thus other tests were also per-
formed. Eigenspace models may be used for approximating
high-dimensional observations with a low-dimensional
vector; the error is the size of the residue
vector. The sizes of such residue vectors can readily
be compared for both batch and incremental meth-
ods. Eigenspace models may also be used for classifying
observations, giving the likelihood that an observation
belongs to a cluster. Different eigenspace
models may be compared by relative differences in
likelihoods. We average these differences over all corresponding
observations.
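The three basic measures can be written down directly; the helper below is our own sketch (model tuples as in the earlier sketches: mean, eigenvectors, eigenvalues, count), with eigenvector sign ambiguity handled by taking absolute cosines.

import numpy as np

def compare_models(model_a, model_b, k):
    # Euclidean distance between means, mean angular deviation (degrees) of
    # the first k corresponding eigenvectors, and mean relative absolute
    # difference of the first k eigenvalues.
    xa, Ua, la, _ = model_a
    xb, Ub, lb, _ = model_b
    mean_dist = np.linalg.norm(xa - xb)
    cos = np.abs(np.sum(Ua[:, :k] * Ub[:, :k], axis=0))   # |u_i . v_i|
    angles = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
    rel = np.abs(la[:k] - lb[:k]) / np.maximum(np.abs(la[:k]), 1e-12)
    return mean_dist, angles.mean(), rel.mean()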
We used a database of 400 face images (each of
10304 pixels) available on-line 1 in the tests reported
here; similar results were obtained in tests with randomly
generated data. The gray levels in the images
were scaled into the range [0, 1] by division only, but
no other preprocessing was done. We implemented
all functions using commercially available software
(Matlab) on a computer with standard configuration
(Sun Sparc Ultra 10, 300 MHz, 64 MB RAM).
The results we present used up to 300 images, as
the physical resources of our computer meant that
heavy paging started to occur beyond this limit for
the batch method, although such paging did not affect
the incremental method.
For all tests, the experimental procedure used
was to compute eigenspace models using a batch
method [12], and compare these to models produced
by merging or splitting other models also produced
by the batch method. In each case, the largest of
the three data sets contained 300 images. These were
partitioned into two data sets, each containing a multiple
of 50 images. We included the degenerate cases
when one model contained zero images. Note that we
tested both smaller models merged with larger ones,
and vice-versa.
The number of eigenvectors retained in any model,
including a merged model, was set to be 100 as a
maximum, for ease of comparing results. (Initial
tests using other strategies indicate that the resulting
eigenspace model is little affected.)
5.1 Timing
When measuring CPU time we ran the same code
several times and chose the smallest value, to minimise
the effect of other concurrently running processes.
Initially we measured time taken to compute a
model using the batch methods, for data sets of different
sizes. Results are presented in Figure 2 and
show cubic complexity, as predicted.
1 The Olivetti database of faces:
http://www.cam-orl.co.uk/facedatabase.html
Figure 2: Time to compute an eigenspace model with a batch method versus the number of images, N. The time is approximated by the cubic 5.3 \times 10^{-4} N ...
5.1.1 Merging
We then measured the time taken to merge two previously
constructed models. The results are shown
in Figure 3. This shows that time complexity is approximately
symmetric about the point at which each model is built from half the number of input images. This result may
be surprising because the algorithm given for merging
is not symmetric with respect to its inputs, despite the fact that the mathematical solution is independent
of order. The approximate symmetry in
time-complexity can be explained by assuming independent
eigenspaces with a fixed upper-bound on
the number of eigenvectors: suppose the numbers of
eigenvectors in the models are N and M . Then complexities
of the main steps are approximately as follows: computing a new spanning set is O(M^3); solving an eigenproblem is O(N^3 + M^3); rotating the new eigenvectors is O(N^3 + M^3). Thus the time complexity, under the stated conditions, is approximately O(N^3 + M^3), which is symmetric.
Next, the times taken to compute an eigenspace
model from 300 images in total, using the batch
method and our merging method, are compared in
Figure 3: Time to merge two eigenspace models versus the number of images, N, in \Omega. The number of images in \Psi is 300 - N; hence the total number of different images used to compute the merged model is constant at 300.
Figure 4. The incremental time is the time needed
to compute the eigenspace model to be merged, and
merge it with a pre-computed existing one. The joint
time is the time to compute both smaller eigenmodels
and then merge them. As might be expected, incremental
time falls as the additional number of images
required falls. The joint time is approximately con-
stant, and very similar to the total batch time.
While the incremental method offers no time saving
in the cases above, it does use much less memory.
This could clearly be seen when a model was computed
using 400 images: paging effects set in when
a batch method was used and the time taken rose
to over 800 seconds. The time to produce an equivalent
model by merging two sub-models of size 200,
however, took less than half that.
5.1.2 Splitting
Time complexity for splitting eigenspaces should depend principally on the size of the large eigenspace from which the smaller space is being removed; the size of the smaller eigenspace should have
Figure 4: Time to make a complete eigenspace model
for a database of 300 images. The incremental time
is the addition of the time to construct only the
eigenspace to be added. The joint time is the time to
compute both eigenspace models and merge them.
little effect. This is because the size of the intermediate
eigenproblem to be solved depends on the
size of the larger space, and therefore dominates the
complexity. These expectations are borne out experi-
mentally. We computed a large eigenmodel using 300
images, as before. We then removed smaller models
of sizes between 50 and 250 images inclusive, in
steps of 50 images. At most, 100 eigenvectors were
kept in any model. The average time taken was approximately
constant, and ranged between 9 and 12
seconds, with a mean time of about 11.4 seconds.
These figures are much smaller than those observed
for merging because the large eigenspace contains
only 100 eigenvectors. Thus the matrices involved in
the computation were of size (100 \times 100), whereas in merging the size was at least (150 \times 150), and other
computations were involved (such as computing an
orthonormal basis).
5.2 Similarity and performance
The measures used for assessing similarity and performance
of batch and incremental methods were described
above.
5.2.1 Merging
We first compared the means of the models produced
by each method using Euclidean distance. This distance
is greatest when the models to be merged have
the same number of input images (150 in this case),
and falls smoothly to zero when either of the models to be merged is empty. The value at maximum is typically very small, and we measured it to be 3.5 \times 10^{-14} units of gray level. This compares favourably with the working precision of Matlab, which is 2.2 \times 10^{-16}.
We next compared the directions of the eigenvectors
produced by each method. The error in eigenvector
direction was measured by the mean angular
deviation, as shown in Figure 5. Ignoring the degenerate
cases, when one of the models is empty, we see
that angular deviation has a single minimum when
the eigenspace models were built with about the same
number of images. This may be because when a small
model is added to a large model its information tends
to be swamped.
These results show angular deviation to be very
small on average.
Figure 5: Angular deviation between eigenvectors
produced by batch and incremental methods versus
the number of images in the first eigenspace model.
The sizes of eigenvalues from both methods were
compared next. In general we observed that the
smaller eigenvalues had larger errors, as might be expected
as they contain relatively little information
and so are more susceptible to noise. In Figure 6 we give the mean absolute difference in eigenvalue.
This rises to a single peak when the number of input
images in both models is the same. Even so, the maximal value is small, 7 \times 10^{-3} units of gray level.
The largest eigenvalue is typically about 100.
Figure 6: Difference between eigenvalues produced by
batch and incremental methods versus the number of
images in the first eigenspace model.
We now turn to performance measures. The
merged eigenspaces represent the image data with
little loss in accuracy, as measured by the mean difference
in residue error, Figure 7. This performance
measure is typically small, about 10^{-6} units of gray
level per pixel, clearly below any noticeable effect.
Finally we compared the differences in likelihood values produced by the two methods. This difference is again small, as Figure 8 shows, and should be compared with the mean likelihood over all observations. Again, the differences in classifications that would be made by these models would be very small.
Figure 7: Difference in reconstruction errors per pixel
produced by batch and incremental methods versus
the number of images in the first eigenspace model.
Figure 8: Difference in likelihoods produced by batch
and incremental methods versus the number of images
in the first eigenspace model.
5.2.2 Splitting
Similar measures for splitting were computed using
exactly those conditions described for testing the timing
of splitting, and for exactly those characteristics
described for merging. In each case a model to be
subtracted was computed by a batch method, and
removed from the overall model by our splitting pro-
cedure. Also, a batch model was made for purposes
of comparison from the residual data set. In all that
follows the phrase "size of the removed eigenspace"
means the number of images used to construct the
eigenspace removed from the eigenspace built from
300 images.
The Euclidean distance between the means of the
models produced by each method grows monotonically
as the size of the removed eigenspace falls,
and never exceeds about 1.5 \times 10^{-13} gray-level units.
Splitting is slightly less accurate in this respect than
merging.
The mean angular deviation between corresponding
eigenvector directions rises in similar fashion,
from about 0.6 degrees when the size of the removed eigenspace is 250, to about 1.1 degrees when the removed eigenmodel is of size 100. This represented a maximum in the deviation error, because an error of about a degree was obtained when the removed model is of size 50. Again, these angular deviations are somewhat
larger than those for merging.
The mean difference in eigenvalues shows the same
general trend. Its maximum is about 0.5 units of
gray level, when the size of the removed eigenspace
is 50. This is a much larger error than in the case
of merging, but is still relatively small compared to
a maximum eigenvalue of about 100. As in the case
of merging, the deviation in eigenvalue grows larger
as the size (importance) of the eigenvalue falls.
Difference in reconstruction error rises as the size of the removed eigenspace falls. Its size, in units of gray level per pixel, is again negligible.
The difference in likelihoods is significant, the relative difference in some cases being factors of 10 or more.
After conducting further experiments, we found that
this relative difference is sensitive to the errors introduced
when eigenvectors and eigenvalues are dis-
carded. This is not a surprise, given that likelihood
differences are magnified exponentially. We found that changing the criteria for discarding eigenvectors very much reduced this effect: relative differences in likelihood of the order of 10^{-14} were achieved in some cases. We
conclude that should an application require not only
splitting, but also require classification, then eigen-vectors
and eigenvalues must be discarded with care.
We suggest that eigenvectors which exceed some significance threshold be kept.
Overall the trend is clear; accuracy and performance
grew worse, against any measure we used, as
the size of the eigenmodel being removed falls.
6 Conclusion
We have shown that merging and splitting eigenspace
models is possible, allowing a batch of new observations
to be processed as a whole. This theoretical
result is novel. Our experimental results show that
the methods are wholly practical: computation times
are feasible, the eigenspaces are very similar, and the
performance characteristics differ little enough to not
matter to applications.
A time advantage is obtained over batch methods
whenever one of the eigenspace models exists already.
A typical scenario is the addition of a set of observations
(a new year's intake of student faces, say) to
an existing, large, database. Our merging method is
even more advantageous when both eigenspace models
exist already. A typical scenario is dynamic clustering
for classification, in which two eigenspace models
can be merged, perhaps to create a hierarchy of
eigenspace models.
--R
An eigenspace update algorithm for image analysis.
Kumar.
Natural basis functions and topographic memory for face recognition.
Probabilistic visual learning for object representa- tion
3rd Edition.
On the generalised karhunen-loeve expansion
--TR
--CTR
Luis Carlos Altamirano , Leopoldo Altamirano , Matas Alvarado, Non-uniform sampling for improved appearance-based models, Pattern Recognition Letters, v.24 n.1-3, p.521-535, January
Ko Nishino , Shree K. Nayar , Tony Jebara, Clustered Blockwise PCA for Representing Visual Data, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.10, p.1675-1679, October 2005
Jieping Ye , Qi Li , Hui Xiong , Haesun Park , Ravi Janardan , Vipin Kumar, IDR/QR: an incremental dimension reduction algorithm via QR decomposition, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Hoi Chan , Thomas Kwok, An Autonomic Problem Determination and Remediation Agent for Ambiguous Situations Based on Singular Value Decomposition Technique, Proceedings of the IEEE/WIC/ACM international conference on Intelligent Agent Technology, p.270-275, December 18-22, 2006
Tae-Kyun Kim , Ognjen Arandjelovi , Roberto Cipolla, Boosted manifold principal angles for image set-based recognition, Pattern Recognition, v.40 n.9, p.2475-2484, September, 2007
Jieping Ye , Qi Li , Hui Xiong , Haesun Park , Ravi Janardan , Vipin Kumar, IDR/QR: An Incremental Dimension Reduction Algorithm via QR Decomposition, IEEE Transactions on Knowledge and Data Engineering, v.17 n.9, p.1208-1222, September 2005
Xiang Sean Zhou , Dorin Comaniciu , Alok Gupta, An Information Fusion Framework for Robust Shape Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.1, p.115-129, January 2005
John P. Collomosse , Peter M. Hall, Cubist Style Rendering from Photographs, IEEE Transactions on Visualization and Computer Graphics, v.9 n.4, p.443-453, October | principal component analysis;eigenspace models;model merging;model splitting;gaussian mixture models |
353382 | Enforceable security policies. | A precise characterization is given for the class of security policies enforceable with mechanisms that work by monitoring system execution, and automata are introduced for specifying exactly that class of security policies. Techniques to enforce security policies specified by such automata are also discussed. | Introduction
A security policy defines execution that, for one reason or another, has been
deemed unacceptable. For example, a security policy might concern
- access control, and restrict what operations principals can perform on objects,
- information flow, and restrict what things principals can infer about objects from observing other aspects of system behavior, or
- availability, and prohibit a principal from being denied use of a resource as a result of execution by other principals.
Supported in part by ARPA/RADC grant F30602-96-1-0317 and AFOSR grant
F49620-94-1-0198. The views and conclusions contained herein are those of the author and
should not be interpreted as necessarily representing the official policies or endorsements,
either expressed or implied, of these organizations or the U.S. Government.
To date, general-purpose security policies, like those above, have attracted
most of the attention. But, application-dependent and special-purpose
security policies are increasingly important. A system to support
mobile code, like Java [5], might prevent information leakage by enforcing a
security policy that bars messages from being sent after files have been read.
To support electronic commerce, a security policy might prohibit executions
in which a customer pays for a service but the seller does not provide that
service. And finally, electronic storage and retrieval of intellectual property
is governed by complicated rights-management schemes that restrict not
only the use of stored materials but also the use of any derivatives [17].
The practicality of any security policy depends on whether that policy
is enforceable and at what cost. In this paper, we address those questions
for the class of enforcement mechanisms, which we call EM, that work by
monitoring a target system and terminating any execution that is about
to violate the security policy being enforced. Class EM includes security
kernels, reference monitors, and all other operating system and hardware-based
enforcement mechanisms that have appeared in the literature. Thus,
understanding what can and cannot be accomplished using mechanisms in
EM has value.
Excluded from EM are mechanisms that use more information than is
available from monitoring the execution of a target system. Therefore, compilers
and theorem-provers, which analyze a static representation of the
target system to deduce information about all of its possible executions, are
excluded from EM. The availability of information about all possible target
system executions gives power to an enforcement mechanism-just how
much power is an open question. Also excluded from EM are mechanisms
that modify a target system before executing it. Presumably the modified
target system would be "equivalent" to the original except that it satisfies
the security policy of interest; a definition for "equivalent" is needed in order
to analyze this class of mechanisms.
We proceed as follows. In Section 2, a precise characterization is given for security policies that can be enforced using mechanisms in EM. An automata-based formalism for specifying those security policies is the subject of Section 3. Mechanisms in EM for enforcing security policies specified by automata are described in Section 4. Next, Section 5 discusses some pragmatic issues related to specifying and enforcing security policies as well as the application of our enforcement mechanisms to safety-critical systems.
2 Characteristics of EM Enforcement Mechanisms
Formally, we represent executions by finite and infinite sequences, where \Psi
is the set of all possible such sequences. 1 Thus, a target system S defines
a subset \Sigma S of \Psi, and a security policy is a predicate on sets of executions.
Target system S satisfies security policy P if and only if P (\Sigma S ) equals true.
Notice that, for sets \Sigma and \Pi of executions, we do not require that if \Sigma
satisfies P and \Pi ae \Sigma holds, then \Pi satisfies P. Imposing such a requirement
on security policies would disqualify too many useful candidates. For
instance, the requirement would preclude information flow (as defined informally
in x1) from being considered a security policy-set \Psi of all executions
satisfies information flow, but a subset \Pi containing only those executions
in which the value of a variable x in each execution is correlated with the
value of y (say) does not. In particular, when an execution is known to be
in \Pi then the value of variable x reveals information about the value of y.
By definition, enforcement mechanisms in EM work by monitoring execution
of the target system. Thus, any security policy P that can be enforced
using a mechanism from EM must be equivalent to a predicate of the form

P(\Pi) : (\forall \sigma \in \Pi : \hat{P}(\sigma))     (1)

where \hat{P} is a predicate on executions. \hat{P} formalizes the criteria used by the
enforcement mechanism for deciding to terminate an execution that would
otherwise violate the policy being enforced. In [1], a set of executions that
can be defined by checking each execution individually is called a property.
Using that terminology, we conclude from (1) that a security policy must
characterize sets that are properties in order for the policy to have an enforcement
mechanism in EM.
Not every security policy characterizes sets that are properties. Some
predicates on sets of executions cannot be put in form (1) because they
cannot be defined in terms of criteria that individual executions must each
satisfy in isolation. For example, the information flow policy discussed above
characterizes sets that are not properties (as is proved in [11]). Whether
information flows from x to y in a given execution depends, in part, on
what values y takes in other possible executions (and whether those values
are correlated with the value of x). A predicate to characterize such sets
of executions cannot be constructed only using predicates defined on single
executions.
1 The manner in which executions are represented is irrelevant here. Finite and infinite
sequences of atomic actions, of higher-level system steps, of program states, or of
state/action pairs are all plausible alternatives.
Enforcement mechanisms in EM cannot base decisions on possible future
execution, since that information is, by definition, not available to mechanisms
in EM. This further restricts what security policies can be enforced
by mechanisms from EM. In particular, consider security policy P of (1), and suppose \tau is the prefix of some execution \tau' where \hat{P}(\tau) does not hold but \hat{P}(\tau') does. An enforcement mechanism for P must prohibit \tau even though extension \tau' satisfies \hat{P}, because otherwise execution of the target system might terminate before \tau is extended into \tau', and the enforcement mechanism would then have failed to enforce P.
We can formalize this requirement as follows. For \sigma a finite or infinite execution having i or more steps, and \tau a finite execution, let:

\sigma[..i] denote the prefix of \sigma involving its first i steps
\tau \sigma denote execution \tau followed by execution \sigma

and define \Pi^{-} to be the set of all finite prefixes of elements in set \Pi. Then, the above requirement concerning execution prefixes violating \hat{P}, for security policy P defined by (1), is:

(\forall \tau \in \Psi^{-} : \neg\hat{P}(\tau) \Rightarrow (\forall \sigma \in \Psi : \neg\hat{P}(\tau \sigma)))     (2)

Finally, note that any execution rejected by an enforcement mechanism must be rejected after a finite period. This is formalized by:

(\forall \sigma \in \Psi : \neg\hat{P}(\sigma) \Rightarrow (\exists i : \neg\hat{P}(\sigma[..i])))     (3)

Security policies satisfying (2) and (3) are satisfied by sets that are safety properties [7], the class of properties that stipulate no "bad thing" happens during an execution. Formally, a property S is defined [8] to be a safety property if and only if for any finite or infinite execution \sigma,

\sigma \notin S \Rightarrow (\exists i : (\forall \tau \in \Psi : \sigma[..i] \tau \notin S))     (4)

holds. This means that S is a safety property if and only if S is characterized by a set of finite executions that are excluded and, therefore, the prefix of no execution in S. Clearly, a security policy P satisfying (2) and (3) has such a set of finite prefixes (the set of prefixes \tau \in \Psi^{-} such that \neg\hat{P}(\tau) holds), so P is satisfied by sets that are safety properties according to (4).
Our analysis of enforcement mechanisms in EM has established:
Unenforceable Security Policy: If the sets of executions characterized
by a security policy P are not safety properties, then an enforcement
mechanism from EM does not exist for P.
Obviously, the contrapositive holds as well: all EM enforcement mechanisms
enforce safety properties. But, as discussed in Section 4, the converse (that all safety properties have EM enforcement mechanisms) does not hold.
Revisiting the three application-independent security policies described
in Section 1, we find:
- Access control defines safety properties. The set of proscribed partial executions contains those partial executions ending with an unacceptable operation being invoked.
- Information flow does not define sets that are properties (as argued above), so it does not define sets that are safety properties. Not being safety properties, there are no enforcement mechanisms in EM for exactly this policy. 2
- Availability defines sets that are properties but not safety properties. In particular, any partial execution can be extended in a way that allows a principal to access a resource, so availability lacks a defining set of proscribed partial executions that every safety property must have. Thus, there are no enforcement mechanisms in EM for availability, at least as that policy is defined in Section 1. 3
3 Security Automata
Enforcement mechanisms in EM work by terminating any target-system
execution after seeing a finite prefix \sigma such that \neg\hat{P}(\sigma) holds, for some predicate \hat{P} defined by the policy being enforced. We established in Section 2 that the set of executions satisfying \hat{P} also must be a safety property. Those being the only constraints on \hat{P}, we conclude that recognizers for sets of executions that are safety properties can serve as the basis for enforcement mechanisms in EM.
2 Mechanisms from EM purporting to prevent information flow do so by enforcing a
security policy that implies, but is not equivalent to, the absence of information flow.
Given security policies P and Q for which P \Rightarrow Q holds, a mechanism that enforces P does suffice for enforcing Q. And, there do exist security policies that both imply
restrictions on information flow and define sets that are safety properties. However, a
policy P that implies Q might rule out executions that do not violate Q, so using the
stronger policy is not without adverse consequences.
3 There are alternative formulations of availability that do characterize sets that are
safety properties. An example is "one principal cannot be denied use of a resource for
more than D steps as a result of execution by other principals". Here, the defining set of
partial executions contains intervals that exceed D steps and during which a principal is
denied use of a resource.
Figure 1: No Send after FileRead (a two-state automaton with states q_nfr and q_fr; edge labels are not FileRead, FileRead, and not Send).
A class of automata for recognizing safety properties is defined (but not
named) in [2]. We shall refer to these recognizers as security automata; they
are similar to ordinary non-deterministic finite-state automata [6]. Formally,
a security automaton is defined by:
- a (finite) set Q of automaton states,
- a set Q_0 \subseteq Q of initial automaton states,
- a (countable) set I of input symbols, and
- a transition function \delta : Q \times I \to 2^Q.
The set I is dictated by the security policy being enforced; the symbols in I might
correspond to system states, atomic actions, higher-level actions of the sys-
tem, or state/action pairs. In addition, the symbols of I might also have to
encode information about the past-for some safety properties, the transition
function will require that information.
To process a sequence s_1 s_2 ... of input symbols, the automaton starts with its current state set equal to Q_0 and reads the sequence, one symbol at a time. As each symbol s_i is read, the automaton changes its current state set Q' to the set Q'' of automaton states, where

Q'' = \bigcup_{q \in Q'} \delta(q, s_i).

If ever Q'' is empty, the input is rejected; otherwise the input is accepted.
Notice that this acceptance criterion means that a security automaton can
accept sequences that have infinite length as well as those having finite
length.
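A minimal executable reading of this definition, in Python and with names of our own choosing, keeps the current state set and applies the transition relation one symbol at a time; transitions are given as predicates on input symbols, matching the edge-label convention used in the figures below.

class SecurityAutomaton:
    # Non-deterministic security automaton: a set of states, a set of initial
    # states, and a transition relation given as {(qi, qj): predicate} where
    # predicate(symbol) says whether the edge qi -> qj can be taken.

    def __init__(self, states, initial, predicates):
        self.states = set(states)
        self.current = set(initial)      # the current state set Q'
        self.predicates = predicates

    def step(self, symbol):
        # Read one input symbol; return False if the input is rejected
        # (i.e., the new current state set is empty).
        self.current = {qj for (qi, qj), pred in self.predicates.items()
                        if qi in self.current and pred(symbol)}
        return bool(self.current)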
Figure 2: Access Control (a one-state automaton whose single transition predicate is A(prin, obj, oper)).
Figure 1 depicts a security automaton for a security policy that prohibits execution of Send operations after a FileRead has been executed. The automaton's states are represented by the two nodes labeled q_nfr (for "no file read") and q_fr (for "file read"). Initial states of the automaton are represented in the figure by unlabeled incoming edges, so automaton state q_nfr is the only initial automaton state. In the figure, transition function \delta is specified in terms of edges labeled by transition predicates, which are Boolean-valued effectively computable total functions with domain I. Let p_ij denote the predicate that labels the edge from node q_i to node q_j. Then, the security automaton, upon reading an input symbol s when Q' is the current state set, changes its current state set to

Q'' = { q_j | q_i \in Q' and p_ij(s) }.
In Figure 1, transition predicate not FileRead is assumed to be satisfied by system execution steps that are not file read operations, and transition predicate not Send is assumed to be satisfied by system execution steps that are not message-send operations. Since no transition is defined from q_fr for input symbols corresponding to message-send execution steps, the security automaton of Figure 1 rejects inputs in which a message is sent after a file is read.
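With the SecurityAutomaton sketch above, the automaton of Figure 1 can be written down directly; encoding input symbols as operation-name strings is our own simplification.

no_send_after_read = SecurityAutomaton(
    states={"q_nfr", "q_fr"},
    initial={"q_nfr"},
    predicates={
        ("q_nfr", "q_nfr"): lambda op: op != "FileRead",
        ("q_nfr", "q_fr"):  lambda op: op == "FileRead",
        ("q_fr",  "q_fr"):  lambda op: op != "Send",
    },
)

assert no_send_after_read.step("Send")        # sending before any read is fine
assert no_send_after_read.step("FileRead")
assert not no_send_after_read.step("Send")    # Send after FileRead is rejected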
Another example of a security automaton is given in Figure 2. Different
instantiations for transition predicate A(prin, obj, oper) allow this automaton
to specify either discretionary access control [9] or mandatory access
control [3].
Discretionary Access Control. This policy prohibits operations according
to an access control matrix. Specifically, given access control matrix
M, a principal Prin is permitted by the policy to execute an operation Oper involving object Obj only if Oper \in M[Prin, Obj] holds.
To specify this policy using the automaton of Figure 2, transition predicate A(prin, obj, oper) would be instantiated by:

oper \in M[prin, obj]
Mandatory Access Control. This policy prohibits execution of operations
according to a partially ordered set of security labels that are
associated with system objects. Information in objects assigned higher
labels is not permitted to be read and then stored into objects assigned
lower labels.
For example, a system's objects might be assigned labels from the set

{topsecret, secret, sensitive, unclassified}

ordered according to:

topsecret \succeq secret \succeq sensitive \succeq unclassified

Suppose two system operations are supported: read and write. A mandatory access control policy might restrict execution of these operations according to:
(i) a principal p with label \lambda(p) is permitted to execute read(F), which reads a file F with label \lambda(F), only if \lambda(p) \succeq \lambda(F) holds.
(ii) a principal p with label \lambda(p) is permitted to execute write(F), which writes a file F with label \lambda(F), only if \lambda(F) \succeq \lambda(p) holds.
To specify this policy using the automaton of Figure 2, transition predicate A(prin, obj, oper) is instantiated by:

(oper = read and \lambda(prin) \succeq \lambda(obj)) or (oper = write and \lambda(obj) \succeq \lambda(prin))
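Both instantiations can be phrased as ordinary predicates suitable for labeling the single transition of Figure 2; the access-control matrix, the label assignment, and every name below are our own illustration.

# Discretionary access control: acm[prin][obj] lists permitted operations.
acm = {"alice": {"report.txt": {"read", "write"}},
       "bob":   {"report.txt": {"read"}}}

def discretionary_A(prin, obj, oper):
    return oper in acm.get(prin, {}).get(obj, set())

# Mandatory access control: numeric ranks stand in for the label ordering.
RANK = {"unclassified": 0, "sensitive": 1, "secret": 2, "topsecret": 3}
label = {"alice": "secret", "bob": "unclassified", "report.txt": "sensitive"}

def mandatory_A(prin, obj, oper):
    if oper == "read":    # no reading information from higher labels
        return RANK[label[prin]] >= RANK[label[obj]]
    if oper == "write":   # no writing information down to lower labels
        return RANK[label[obj]] >= RANK[label[prin]]
    return False

print(discretionary_A("bob", "report.txt", "write"))   # False
print(mandatory_A("alice", "report.txt", "read"))      # True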
As a final illustration of security automata, we turn to electronic com-
merce. We might, for example, desire that a service-provider be prevented
from engaging in actions other than delivering the service for which a customer
has paid. This requirement is a security policy; it can be formalized in
terms of the following predicates, if executions are represented as sequences
of operations:

pay(C): customer C requests and pays for service
serve(C): customer C is rendered service

Figure 3: Security automaton for fair transaction.
The security policy of interest proscribes executions in which the service-provider
executes an operation that does not satisfy serve(C) after having
engaged in an operation that satisfies pay(C). A security automaton for this
policy is given in Figure 3.
Notice, the security automaton of Figure 3 does not stipulate that payment
guarantees service. The security policy it specifies only limits what
the service-provider can do once a customer has made payment. In partic-
ular, the security policy that is specified allows a service-provider to stop
executing (i.e. stop producing input symbols) rather than rendering a paid-
for service. We do not impose the stronger security policy that service be
guaranteed after payment because that is not a safety property (there is no defining set of proscribed partial executions) and therefore, according to Section 2, it is not enforceable using a mechanism from EM.
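Under the reading given in the text (after payment, only serving that customer may follow), Figure 3 can also be instantiated with the earlier SecurityAutomaton sketch; representing operations as (name, customer) pairs is our own choice.

def is_pay(c):   return lambda op: op == ("pay", c)
def is_serve(c): return lambda op: op == ("serve", c)

def fair_transaction(c):
    # Two states: before payment anything but pay(c) loops; after payment
    # only serve(c) operations are permitted.
    return SecurityAutomaton(
        states={"q0", "q1"},
        initial={"q0"},
        predicates={
            ("q0", "q0"): lambda op: not is_pay(c)(op),
            ("q0", "q1"): is_pay(c),
            ("q1", "q1"): is_serve(c),
        },
    )

a = fair_transaction("carol")
assert a.step(("pay", "carol"))
assert not a.step(("bill", "dave"))    # anything but serving carol is rejected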
4 Using Security Automata for Enforcement
Any security automaton can serve as the basis for an enforcement mechanism
in class EM, as follows. Each step the target system will next take
is represented by an input symbol and sent to an implementation of the
security automaton.
(i) If the automaton can make a transition on that input symbol, then
the target system is allowed to perform that step and the automaton
state is changed according to its transition predicates.
(ii) If the automaton cannot make a transition on that input symbol, then
the target system is terminated.
In fact, any security policy enforceable using a mechanism from EM can
be enforced using such a security-automaton implementation. This is because
all safety properties have specifications as security automata, and Unenforceable
Security Policy of Section 2 implies that EM enforcement mechanisms
enforce safety properties. Consequently, by understanding the limitations
of security-automata enforcement mechanisms, we can gain insight into the
limitations of all enforcement mechanisms in class EM.
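Steps (i) and (ii) above amount to a monitor wrapped around the target's stream of input symbols; the sketch below is our own simplification in which the target is modelled as a generator of symbols and "termination" is a raised exception.

class PolicyViolation(Exception):
    pass

def monitored_run(automaton, target_steps, perform):
    # EM-style enforcement: each intended step is shown to the automaton
    # first; if no transition is possible the target is terminated before
    # the step is performed, otherwise the step is allowed.
    for symbol in target_steps:
        if not automaton.step(symbol):
            raise PolicyViolation(f"execution halted on {symbol!r}")
        perform(symbol)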
Implicit in (ii) is the assumption that the target system can be terminated
by the enforcement mechanism. Specifically, we assume that the
enforcement mechanism has sufficient control over the target system to stop
further automaton input symbols from being produced. This control requirement
is subtle and makes certain security policies-even though they characterize
sets that are safety properties-unenforceable using mechanisms from
EM.
For example, consider the following variation on availability of Section 1:
Real-Time Availability: One principal cannot be denied use of a resource
for more than D seconds.
Sets satisfying Real-Time Availability are safety properties-the "bad thing"
is an interval of execution spanning more than D seconds during which some
principal is denied the resource. The input symbols of a security automaton
for Real-Time Availability must therefore encode time. However, the
passage of time cannot be stopped, so a target system with real-time clocks
cannot be prevented from continuing to produce input symbols. Real-Time
Availability simply cannot be enforced using one of our automata-based enforcement
mechanisms, because target systems lack the necessary controls.
And, since the other mechanisms in EM are no more powerful, we conclude
that Real-Time Availability cannot be enforced using any mechanism in EM.
Two mechanisms are involved in our security-automaton implementation
of an enforcement mechanism.
Automaton Input Read: A mechanism to determine that an input symbol
has been produced by the target system and then to forward that
symbol to the security automaton.
Automaton Transition: A mechanism to determine whether the security
automaton can make a transition on a given input and then to perform
that transition.
Their aggregate cost can be quite high. For example, when the automaton's
input symbols are the set of program states and its transition predicates
are arbitrary state predicates, a new input symbol is produced for each
machine-language instruction that the target system executes. The enforcement
mechanism must be invoked before every target-system instruction.
However, for security policies where the target system's production of
input symbols coincides with occurrences of hardware traps, our automata-based
enforcement mechanism can be supported quite cheaply by incorporating
it into the trap-handler. One example is implementing an enforcement
mechanism for access control policies on system-supported objects, like files.
Here, the target system's production of input symbols coincides with invocations
of system operations, hence the production of input symbols coincides
with occurrences of system-call traps.
A second example of exploiting hardware traps arises in implementing
memory protection. Memory protection implements discretionary access
control with operations read, write, and execute and an access control matrix
that tells how processes can access each region of memory. The security
automaton of Figure 2 specifies this security policy. Notice that this security
automaton expects an input symbol for each memory reference. But
most, if not all, of these input symbols cause no change to the security
automaton's state. Input symbols that do not cause automaton state
transitions need not be forwarded to the automaton, and that justifies the
following optimization of Automaton Input Read:
Automaton Input Read Optimization: Input symbols are not forwarded
to the security automaton if the state of the automaton just after
the transition would be the same as it was before the transition.
Given this optimization, the production of automaton input symbols for
memory protection can be made to coincide with occurrences of traps.
The target system's memory-protection hardware-base/bounds registers
or page and segment tables-is initialized so that a trap occurs when an
input symbol should be forwarded to the memory protection automaton.
Memory references that do not cause traps never cause a state transition or
undefined transition by the automaton.
Inexpensive implementation of our automata-based enforcement mechanisms
is also possible when programs are executed by a software-implemented
virtual machine (sometimes known as a reference monitor). The virtual machine
instruction-processing cycle is augmented so that it produces input
symbols and makes automaton transitions, according to either an internal
or an externally specified security automaton. For example, the Java virtual
machine[10] could easily be augmented to implement the Automaton
Input Read and Automaton Transition mechanisms for input symbols that
correspond to method invocations.
Beyond Class EM Enforcement Mechanisms
The overhead of enforcement can sometimes be reduced by merging the
enforcement mechanism into the target system. One such scheme, which
has recently attracted attention, is software-based fault isolation (SFI), also
known as "sandboxing" [19, 16]. SFI implements memory protection, as
specified by a one-state automaton like that of Figure 2, but does so without
hardware assistance. Instead, a program is edited before it is executed, and
only such edited programs are run by the target system. (Usually, it is the
object code that is edited.) The edits insert instructions to check and/or
modify the values of operands, so that illegal memory references are never attempted.
SFI is not in class EM because SFI involves modifying the target sys-
tem, and modifications are not permitted for enforcement mechanisms in
EM. But viewed in our framework, the inserted instructions for SFI can be
seen to implement Automaton Input Read by copying code for Automaton
Transition in-line before each target system instruction that produces an
input symbol. Notice that nothing prevents the SFI approach from being
used with multi-state automata, thereby enforcing any security policy that
can be specified as a security automaton. Moreover, a program optimizer
should be able to simplify the inserted code and eliminate useless portions 4
although this introduces a second type of program analysis to SFI and requires
putting further trust in automated program analysis tools.
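In software, the rewriting idea can be imitated by in-lining the monitor check at each sensitive call site, for example with a decorator; this is only an analogy to SFI-style object-code editing, it assumes an automaton object with a step method as in the earlier sketch, and all names are ours.

import functools

def guarded(automaton, symbol_of):
    # Wrap a sensitive operation so that the automaton transition is attempted
    # in-line before the operation runs; an undefined transition aborts it.
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if not automaton.step(symbol_of(*args, **kwargs)):
                raise RuntimeError(fn.__name__ + " forbidden by policy")
            return fn(*args, **kwargs)
        return inner
    return wrap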
Finally, there is no need for any run-time enforcement mechanism if
the target system can be analyzed and proved not to violate the security
policy of interest. This approach has been employed for a security policy like
what SFI was originally intended to address-a policy specified by one-state
security automata-in proof carrying code (PCC) [13]. With PCC, a proof
is supplied along with a program, and this proof comes in a form that can be
checked mechanically before running that program. The security policy will
not be violated if, before the program is executed, the accompanying proof is
checked and found to be correct. The original formulation of PCC required
that proofs be constructed by hand. This restriction can be relaxed. For certain security policies specified by one-state security automata, a compiler can automatically produce PCC from programs written in high-level, type-safe programming languages [12, 14].
4 Úlfar Erlingsson of Cornell has implemented a system that does exactly this for the Java virtual machine and for Intel x86 machine language.
To extend PCC for security policies that are specified by arbitrary security
automata, a method is needed to extract proof obligations for establishing
that a program satisfies the property given by such an automaton.
Such a method does exist-it is described in [2].
The utility of a formalism partly depends on the ease with which objects
of the formalism can be read and written. Users of the formalism must
be able to translate informal requirements into objects of the formalism.
With security automata, establishing the correspondence between transition
predicates and informal requirements on system behavior is crucial and can
require a detailed understanding of the target system. The automaton of
Figure
1, for example, only captures the informal requirement that messages
are not sent after a file is read if it is impossible to send a message unless
transition predicate Send is true and it is impossible to read a file unless
transition predicate F ileRead is true. There might be many ways to send
messages-some obvious and others buried deep within the bowels of the
target system. All must be identified and included in the definition of Send;
a similar obligation accompanies transition predicate F ileRead.
The general problem of establishing the correspondence between informal
requirements and some purported formalization of those requirements is not
new to software engineers. The usual solution is to analyze the formalization,
being alert to inconsistencies between the results of the analysis and the
informal requirements. We might use a formal logic to derive consequences
from the formalization; we might use partial evaluation to analyze what the
formalization implies about one or another scenario, a form of testing; or,
we might (manually or automatically) transform the formalization into a
prototype and observe its behavior in various scenarios.
Success with proving, testing, or prototyping as a way to gain confidence
in a formalization depends upon two things. The first is to decide
what aspects of a formalization to check, and this is largely independent of
the formalism. But the second, having the means to do those checks, not
only depends on the formalism but largely determines the usability of that
formalism. To do proving, we require a logic whose language includes the
formalism; to do testing, we require a means of evaluating a formalization
in one or another scenario; and to do prototyping, we must have some way
to transform a formalization into a computational form.
As it happens, a rich set of analytical tools does exist for security au-
tomata, because security automata are a class of Büchi automata [4], and Büchi automata are widely used in computer-aided program verification
tools. Existing formal methods based either on model checking or on theorem
proving can be employed to analyze a security policy that has been
specified as a security automaton. And, testing or prototyping a security
policy that is specified by a security automaton is just a matter of running
the automaton.
Guidelines for Structuring Security Automata
Real system security policies are best given as collections of simpler policies,
a single large monolithic policy being difficult to comprehend. The system's
security policy is then the result of composing the simpler policies in the collection
by taking their conjunction. To employ such a separation of concerns
when security policies are specified by security automata, we must be able
to compose security automata in an analogous fashion. Given a collection of
security automata, we must be able to construct a single conjunction security
automaton for the conjunction of the security policies specified by the
automata in the collection. That construction is not difficult: An execution
is rejected by the conjunction security automaton if and only if it is rejected
by any automaton in the collection.
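For the SecurityAutomaton sketch used earlier, the construction is immediate: run the component automata in lockstep and reject as soon as any of them rejects. The class name is ours.

class ConjunctionAutomaton:
    # Conjunction of security automata: an input is rejected exactly when
    # some component automaton rejects it.

    def __init__(self, automata):
        self.automata = list(automata)

    def step(self, symbol):
        ok = True
        for a in self.automata:
            ok = a.step(symbol) and ok   # advance every component; no short-circuit
        return ok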
Beyond comprehensibility, there are other advantages to specifying system
security policies as collections of security automata. First, having a
collection allows different enforcement mechanisms to be used for the different
automata (hence the different security policies) in the collection. Second,
security policies specified by distinct automata can be enforced by distinct
system components, something that is attractive when all of some security
automaton's input symbols correspond to events at a single system compo-
nent. Benefits that accrue from having the source of all of an automaton's
input symbols be a single component include:
- Enforcement of a component's security policy involves trusting only that component.
- The overhead of an enforcement mechanism is lower because communication between components can be reduced.
For example, the security policy for a distributed system might be specified
by giving a separate security automaton for each system host. Then, each
host would itself implement Automaton Input Read and Automaton Transitions
mechanisms for only the security automata concerning that host.
The designer of a security automaton often must choose between encoding
security-relevant information in the target system's state and in an
automaton state. Larger automata are usually more complicated, hence
more difficult to understand, and often lead to more expensive enforcement
mechanisms. For example, our generalization of SFI involves modifying the
target system by inserting code and then employing a program optimizer
to simplify the result. The inserted code simulates the security automaton,
and the code for a smaller security automaton will be smaller, cheaper to
execute, and easier to optimize. Similarly, for our generalization of PCC,
proof obligations derived according to [2] are fewer and simpler if the security
automata is smaller.
However, we conjecture that the cost of executing a reference monitor
that implements Automaton Transition is probably insensitive to whether
security-relevant state information is stored in the target system or in automaton
states. This is because the predicates that must be evaluated in the
two implementations will differ only in whether a state component has been
associated with the security automaton or the target system, and the cost
of reading that state information and of evaluating the predicates should be
similar.
Application to Safety-Critical Systems
The idea that security kernels might have application in safety-critical systems
is eloquently justified in [15] and continues to interest researchers [18].
Safety-critical systems are concerned with enforcing properties that are
safety properties (in the sense of [8]), so it is natural to expect an enforcement
mechanism for safety properties to have application in this class of
systems. And, we see no impediments to using security automata or our
security-automata based enforcement mechanisms for enforcing safety properties
in safety-critical systems.
The justification given in [15] for using security kernels in safety-critical
systems involves a characterization of what types of properties can be enforced
by a security kernel. As do we in this paper, [15] concludes that safety
properties but not liveness properties are enforceable. However, the arguments
given in [15] are informal and are coupled to the semantics of kernel-supported
operations. The essential attributes of enforceability, which we
isolate and formalize by equations (1), (2), and (3), are neither identified
nor shown to imply that only safety properties can be enforced.
In addition, because [15] concerns kernelized systems, the notion of property
there is restricted to being sequences of kernel-provided functions. By
allowing security automata to have arbitrary sets of input symbols, our results
can be seen as generalizing those of [15]. And the generalization is a
useful one, because it applies to enforcement mechanisms that are not part of
a kernel. Thus, we can now extend the central thesis of [15], that kernelized
systems have application beyond implementing security policies, to justify
the use of enforcement mechanisms from EM when building safety-critical
systems.
Acknowledgments
I am grateful to Robbert van Renesse, Greg Morrisett, Úlfar Erlingsson,
Yaron Minsky, and Lidong Zhou for helpful feedback on the use and implementation
of security automata and for comments on previous drafts of
this paper. Helpful comments on an earlier draft of this paper were also
provided by Earl Boebert, Li Gong, Robert Grimm, Keith Marzullo, and
John Rushby. John McLean served as a valuable sounding board for these
ideas as I developed them. Feedback from Martin Abadi helped to sharpen
the formalism. And, the University of Tromso was a hospitable setting and
a compelling excuse for performing some of the work reported herein.
--R
Defining liveness.
Recognizing safety and liveness.
Secure computer systems: Mathematical foundations.
Java security: Present and near future.
Formal Languages and Their Relation to Automata.
Proving the correctness of multiprocess programs.
Logical Foundation.
The Java Virtual Machine Specification.
A general theory of composition for trace sets closed under selective interleaving functions.
From ML to typed assembly language.
Kernels for safety?
A tool for constructing safe extensible C
Letting Loose the Light: Igniting Commerce in Electronic Publication.
On the enforcement of software safety policies.
Efficient Software-Based Fault Isolation
--TR
A note on denial-of-service in operating systems
Distributed systems: methods and tools for specification. An advanced course
Verifying temporal properties without temporal logic
The DIAMOND security policy for object-oriented databases
Partial evaluation and automatic program generation
Efficient software-based fault isolation
Proof-carrying code
From system F to typed assembly language
The design and implementation of a certifying compiler
History-based access control for mobile code
SASI enforcement of security policies
Guarded commands, nondeterminacy and formal derivation of programs
Providing policy-neutral and transparent access control in extensible systems
Automata, Languages, and Machines
Java Virtual Machine Specification
Java Security
Authorization in Distributed Systems
A General Theory of Composition for Trace Sets Closed under Selective Interleaving Functions
A Logical Language for Expressing Authorizations
--CTR
James Ezick, Resolving and applying constraint queries on context-sensitive analyses, Proceedings of the ACM-SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering, June 07-08, 2004, Washington DC, USA
Scott C.-H. Huang , Kia Makki , Niki Pissinou, On optimizing compatible security policies in wireless networks, EURASIP Journal on Wireless Communications and Networking, v.2006 n.2, p.71-71, April 2006
Alan Shieh , Dan Williams , Emin Gn Sirer , Fred B. Schneider, Nexus: a new operating system for trustworthy computing, Proceedings of the twentieth ACM symposium on Operating systems principles, October 23-26, 2005, Brighton, United Kingdom
Gary McGraw , Greg Morrisett, Attacking Malicious Code: A Report to the Infosec Research Council, IEEE Software, v.17 n.5, p.33-41, September 2000
Krzysztof M. Brzezinski , Norbert Malinski, Reference specification issues in on-line verification by passive testing, Proceedings of the 24th IASTED international conference on Parallel and distributed computing and networks, p.186-191, February 14-16, 2006, Innsbruck, Austria
Jacob Zimmermann , George Mohay, Distributed intrusion detection in clusters based on non-interference, Proceedings of the 2006 Australasian workshops on Grid computing and e-research, p.89-95, January 16-19, 2006, Hobart, Tasmania, Australia
Vir V. Phoha , Amit U. Nadgar , Asok Ray , Shashi Phoha, Supervisory Control of Software Systems, IEEE Transactions on Computers, v.53 n.9, p.1187-1199, September 2004
Dries Vanoverberghe , Frank Piessens, Supporting Security Monitor-Aware Development, Proceedings of the Third International Workshop on Software Engineering for Secure Systems, p.2, May 20-26, 2007
Massimo Bartoletti , Pierpaolo Degano , Gian Luigi Ferrari, Policy framings for access control, Proceedings of the 2005 workshop on Issues in the theory of security, p.5-11, January 10-11, 2005, Long Beach, California
R. Sekar , C. R. Ramakrishnan , I. V. Ramakrishnan , S. A. Smolka, Model-Carrying Code (MCC): a new paradigm for mobile-code security, Proceedings of the 2001 workshop on New security paradigms, September 10-13, 2001, Cloudcroft, New Mexico
Prasad Naldurg , Roy H. Campbell, Dynamic access control: preserving safety and trust for network defense operations, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
J. J. Whitmore, A method for designing secure solutions, IBM Systems Journal, v.40 n.3, p.747-768, March 2001
| security automata;security policies;safety properties;proof carrying code;SASI;EM security policies;inlined reference monitors
353938 | Integrating object-oriented programming and protected objects in Ada 95. | Integrating concurrent and object-oriented programming has been an active research topic since the late 1980s. There is now a plethora of methods for achieving this integration. The majority of approaches have taken a sequential object-oriented language and made it concurrent. A few approaches have taken a concurrent language and made it object-oriented. The most important of this latter class is the Ada 95 language, which is an extension to the object-based concurrent programming language Ada 83. Arguably, Ada 95 does not fully integrate its models of concurrency and object-oriented programming. For example, neither tasks nor protected objects are extensible. This article discusses ways in which protected objects can be made more extensible. | Introduction
Arguably, Ada 95 does not fully integrate its models of concurrent and object-oriented
programming (Atkinson and Weller, 1993; Wellings et al., 1996; Burns
and Wellings, 1998). For example, neither tasks nor protected objects are exten-
sible. When Ada 95 was designed, the extensions to Ada 83 for object-oriented
programming were, for the most part, considered separate to extensions to the
concurrency model. Although some consideration was given to abandoning protected
types and instead using Java-like synchronised methods in their place,
there was no public debate of this issue. Similarly, there was no public debate
on the issues associated with allowing protected types or tasks to be extended.
The purpose of this paper is to discuss ways in which the Ada 95 concurrency
model can be better integrated with object-oriented programming. The paper
is structured as follows. Section 2 introduces the main problems associated with
the integration of object-oriented and concurrent programming. Section 3 then
describes the main features of the Ada 95 language that are relevant to this work.
Section 4 argues that Ada 95 does not have a well-integrated object-oriented
concurrency model. To achieve better integration, Section 5 proposes that Ada's
protected type mechanism be made extensible and discusses the main syntactic
and semantic issues. Section 6 then considers how extensible protected types
integrate with Ada's general model of abstraction and inheritance. Sections
7 and 8 discuss how the proposals address the inheritance anomaly and how
they can be used in conjunction with the current object-oriented mechanisms.
Section 9 presents some extended examples and Section 10 draws conclusions
from this work.
2 Concurrent Object-Oriented Programming
Integrating concurrent and object-oriented programming has been an active
research topic since the late 1980s. There is now a plethora of methods for
achieving this integration (see (Wyatt et al., 1992) for a review). The majority
of approaches have taken a sequential object-oriented language and made it
concurrent (for example, the various versions of concurrent Eiffel (Meyer, 1993;
Caromel, 1993; Karaorman and Bruno, 1993)). A few approaches have taken a
concurrent language and made it object-oriented. The most important of this
latter class is the Ada 95 language which is an extension to the object-based
concurrent programming language Ada 83. A full discussion of this language
will be given in the next section.
In general, there are two main issues for concurrent object-oriented programming:
- the relationship between concurrent activities and objects - here the distinction
is often between the concept of an active object (which by definition
will execute concurrently with other active objects, for example
(Maio et al., 1989; Mitchell and Wellings, 1996; Newman, 1998)) and where
concurrent execution is created by the use of asynchronous method calls
(or early returns from method calls) ((Yonezawa et al., 1986; Yokote and
Tororo, 1987; Corradi and Leonardi, 1990))
- the way in which concurrent activities communicate and synchronise (and
yet avoid the so-called inheritance anomaly (Matsuoka and Yonezawa,
1993)); see (Mitchell and Wellings, 1996) for a summary of the various
proposals.
Perhaps the most interesting recent development in concurrent object-oriented
programming is Java (Lea, 1997; Oaks and Wong, 1997). Here we have, notionally,
a new language whose designers were able to define a concurrency model within an
object-oriented framework without worrying about backward compatibility issues.
The Java model integrates concurrency into the object-oriented framework
by the combination of the active object concept and asynchronous method calls.
All descendants of the pre-defined class Thread have the pre-defined methods
run and start. When start is called, a new thread is created, which executes
run. Subclassing Thread and overriding the run method allows an application
to express active objects. (It is also possible to obtain start and run by implementing
the interface Runnable.) Other methods available on the Thread class
allow for a wide range of thread control. Communication and synchronisation is
achieved by allowing any method of any object to be specified as 'synchronised'.
Synchronised methods execute with a mutual exclusion lock associated with the
object. All classes in Java are derived from the Object class that has methods
which implement a simple form of condition synchronisation. A thread can,
therefore, wait for notification of a single event. When used in conjunction with
synchronised methods, the language provides the functionality similar to that
of a simple monitor (Hoare, 1974).
Arguably, Java provides an elegant, although simplistic, model of object-oriented
concurrency.
3 The Ada 95 Programming Language
The Ada 83 language allowed programs to be constructed from several basic
building blocks: packages, subprograms (procedures and functions), and tasks.
Of these, only tasks were considered to be types and integrated with the typing
model of the language. Just as with any other type in Ada, many instances of a
task type can be declared, tasks can be placed in arrays and records, and pointers
to tasks can be declared and created. Ada 83 fully integrated its concurrency
model into the sequential components of the language. Tasks can encapsulate
data objects as well as other tasks. They are built using a consistent underlying
type model.
3.1 Data-Oriented Synchronization: Protected Types
Ada 95 extends the facilities of Ada 83 in areas of the language where weaknesses
were perceived. One of the innovations was the introduction of data-oriented
communication and synchronization through protected types. Instances of a
protected type are called protected objects; they are basically monitors (Hoare, 1974),
but avoid the disadvantages associated with the use of low-level condition
variables. Instead, protected types may have guarded entries similar to those
provided by conditional critical regions (Brinch-Hansen, 1972).
A protected type in Ada 95 encapsulates some data items, which can only
be accessed through the protected type's operations. It is declared as shown in
the following example:
protected type Shared_Int is
   -- Public operations
   procedure Set (Value : in Integer);
   function Get return Integer;
   entry Wait_Until_Zero;
private
   -- Encapsulated data
   Current : Integer := 0;
   -- Private operations might follow here
end Shared_Int;
The operations of this protected type are implemented in a corresponding body:
protected body Shared_Int is
   procedure Set (Value : in Integer) is
   begin
      Current := Value;
   end Set;
   function Get return Integer is
   begin
      return Current;
   end Get;
   entry Wait_Until_Zero
      when Current = 0 is  -- Entry barrier (guard)
   begin
      null;
   end Wait_Until_Zero;
end Shared_Int;
Instances of this protected type, i.e. protected objects, can be declared just like
any other variable:
X : Shared_Int;  -- a protected object named 'X'
Operations on this shared object can be invoked in the following way:
Some_Variable := X.Get;
Calls to the operations of a protected type are so-called protected actions and
guarantee mutually exclusive access to a protected object with the usual semantics
of multiple readers (function calls, which are read-only) or one writer
(procedure and entry calls).
When an entry is called and its barrier is false, the call is queued and the
calling task is blocked until the call has been finally executed. Otherwise, the call
is accepted and executed in a protected action. At the end of each procedure or
entry call, the barriers of all entries are examined. If a barrier has become true,
a possibly queued call is then executed as part of the same protected action,
i.e. without relinquishing the mutual exclusion in between. This servicing of
entry queues is repeated until either there are no more queued calls or all their
barriers are false. The protected action then terminates.
The following example illustrates the use of entries with a simple bounded
buffer, where items can only be taken from the buffer when it is not empty, and
items can be put into it only when it is not full.
protected type Integer_Bounded_Buffer is
   entry Put (I : in Integer);
   entry Get (I : out Integer);
private
   Buffer : array (1 .. 10) of Integer;
   First, Last : Natural := 1;
   Nof_Items : Natural := 0;
end Integer_Bounded_Buffer;

protected body Integer_Bounded_Buffer is
   entry Put (I : in Integer)
      when Nof_Items < Buffer'Length is
   begin
      Buffer (Last) := I;
      Last := Last mod Buffer'Length + 1;
      Nof_Items := Nof_Items + 1;
   end Put;
   entry Get (I : out Integer)
      when Nof_Items > 0 is
   begin
      I := Buffer (First);
      First := First mod Buffer'Length + 1;
      Nof_Items := Nof_Items - 1;
   end Get;
end Integer_Bounded_Buffer;
If Get is called when Nof_Items is zero, the call is queued. When another task
calls Put, Nof_Items will be incremented. When the entry queues are serviced
after the call to Put has finished, the barrier of Get is now true and the queued
call is allowed to proceed, thus unblocking the task that made that call.
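As a small illustration of this blocking behaviour, the buffer might be used by a producer and a consumer task along the following lines (this sketch is ours, not the paper's; the task names are purely illustrative):
Shared : Integer_Bounded_Buffer;

task Producer;
task body Producer is
begin
   for N in 1 .. 100 loop
      Shared.Put (N);     -- blocks while the buffer is full
   end loop;
end Producer;

task Consumer;
task body Consumer is
   Item : Integer;
begin
   for N in 1 .. 100 loop
      Shared.Get (Item);  -- blocks while the buffer is empty
   end loop;
end Consumer;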
A requeue statement of the form
requeue Target_Entry;
allows an entry to put a call, which it has already begun processing, back on
the same or some other entry queue again. A requeue immediately leaves the
current entry, requeues the call and then initiates entry queue servicing. Once
the requeued call has been executed, control is returned to the task that made
the original call. An example of the requeue statement can be found in section
9.2.
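Ahead of that full example, the following minimal sketch (our own; the names Gate, Request, Wait_For_Open and Open are invented for illustration) shows the typical pattern of requeuing a partially processed call onto a private entry:
protected type Gate is
   entry Request (Urgent : in Boolean);
   procedure Open;
private
   entry Wait_For_Open;   -- private entry used only as a requeue target
   Is_Open : Boolean := False;
end Gate;

protected body Gate is
   entry Request (Urgent : in Boolean) when True is
   begin
      if not Urgent and not Is_Open then
         requeue Wait_For_Open;   -- the caller is placed on the private queue
      end if;
   end Request;

   entry Wait_For_Open when Is_Open is
   begin
      null;
   end Wait_For_Open;

   procedure Open is
   begin
      Is_Open := True;
   end Open;
end Gate;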
Within the operations of a protected type, the attribute E'Count represents
the number of calls in the queue of entry E.
Potentially blocking calls, in particular entry calls, are forbidden within a
protected action. This language rule helps avoid deadlocks due to the nested
monitors problem and also avoids a possible unbounded priority inversion that
might otherwise occur. This means that a procedure of a protected type may call
other procedures or functions of the same or some other protected object, but
not entries. Functions of a protected type may only call other protected functions
of the same protected object, to prevent them from circumventing the read-only
restriction. However, they may call both protected functions and procedures of
other protected objects. Entries may call procedures or functions, but not other
entries; they may only requeue to another entry.
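As a small illustration of these restrictions (a sketch of ours; Account, Balance, Logger and Other are assumed declarations, not taken from the paper):
protected body Account is
   procedure Deposit (Amount : in Positive) is
   begin
      Balance := Balance + Amount;
      Logger.Note ("deposit");   -- allowed: a call to another object's protected procedure
      -- Other.Some_Entry;       -- forbidden: an entry call is potentially blocking
   end Deposit;

   function Current return Natural is
   begin
      return Balance;            -- a protected function: read-only access
   end Current;
end Account;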
3.2 Object-Orientation: Tagged Types
One of the other main extensions to Ada 83 was the introduction of object-oriented
programming facilities. Here the designers of Ada 95 were faced with
a dilemma. Ada 83's facility for encapsulation was the package. Unfortunately,
packages (unlike tasks) were not fully integrated into the typing model: there
were no package types. Rather than introduce a class-like construct into the
language (as had been done by almost all other object-oriented languages), Ada
95 followed the Oberon (Wirth, 1988) approach and achieved object-orientation
by type extension. The designers argued that Ada 83 already had the ability
to derive types from other types and override their operations. Consequently,
object-orientation was achieved via the introduction of "tagged types".
Tagged types in Ada 95 are record types that can be extended. Thus a class
in Ada is represented by the following:
package Objects is
   type Class is tagged record
      ... -- data attributes of the class
   end record;
   -- the following are the primitive operations of the type
   procedure Method1 (O : in Class; Params : Some_Type);
   procedure Method2 (O : in out Class; Params : Some_Type);
end Objects;
Objects of the class can be created and used by:
with Objects; use Objects;
O : Class;
Params : Some_Type;
begin
   Method1 (O, Params);
Contrast this to a call to an object's method in the more typical object-oriented
paradigm where the call is of the form: Object.Method1(Params). The difference
is purely syntactical; both forms have the same expressive power and
denote the same language construct, namely, a call to a primitive operation of
a tagged type or a call of a method of a class, respectively.
Inheritance in Ada 95 is achieved by extending the parent type and overriding
the primitive operations.
with Objects; use Objects;
package Extended_Objects is
   type Extended_Class is new Class with record
      ... -- new data attributes
   end record;
   -- overridden primitive operations
   procedure Method1 (O : in Extended_Class; Params : Some_Type);
   procedure Method2 (O : in out Extended_Class; Params : Some_Type);
   -- new primitive operation
   procedure Method3 (O : in out Extended_Class; Params : Some_Type);
end Extended_Objects;
Polymorphism in Ada 95 is achieved by the use of class-wide types or pointers to
class-wide types. It is possible, for example, to declare a pointer to a hierarchy
of tagged types rooted at a place in the tree of type extensions. This pointer
can then reference any object in the type hierarchy. When a primitive method
is called passing the de-referenced pointer, run-time dispatching occurs to the
correct operation:
type Pointer is access Objects.Class'Class;  -- 'Class indicates a class-wide type
Ap : Pointer := new ...;   -- some object derived from Objects.Class
Method1 (Ap.all, Params);  -- dispatches to appropriate method
In Ada 95, dispatching only occurs when the actual parameter of a call to a
primitive operation is of a class-wide type. This contrasts with some other
object-oriented programming languages where dispatching is the default (e.g.
Java). In order to force dispatching in Ada, the parameter must be explicitly
converted to a class-wide type when invoking the primitive operation. This
situation often occurs when one primitive operation of an object wants to dispatch
to some other primitive operation of the same object. This is called
re-dispatching, and can be achieved by converting the operand to a class-wide
type, as shown in the following example:
type T is tagged record ... end record;
procedure P (X : T) is ...;
procedure Q (X : T) is
begin
   P (T'Class (X));  -- re-dispatch
end Q;

type T1 is new T with record ... end record;
procedure P (X : T1);
Here, procedure Q does a re-dispatch, by explicitly converting the parameter X
to a class-wide type before invoking P. If this conversion had been omitted and
Q had simply called P(X), then the call would be statically bound to the procedure P
of T, regardless of what actual parameter was passed to Q.
It should be noted that Ada allows calls to overridden operations to be
statically bound from outside the defining tagged type. For example, although
the Extended_Objects package (defined earlier) has extended the Class tagged
type and overridden Method1, it is possible for a client to write the following:
Eo : Extended_Class;
...
Method1 (Class (Eo), Params);  -- statically bound to the parent's Method1
and call the overridden method explicitly. Arguably this has now broken the
Extended_Class abstraction, and perhaps should be disallowed. Such explicit
conversions can only be safely done from within the overridden method itself
when it wishes to call its parent method.
3.3 Object-Oriented Programming and Concurrency
Although task types and protected types are fully integrated into the typing
model of Ada 95, it is not possible to create a tagged protected type or a tagged
task type. The designers shied away from this possibility partly because they
felt that fully integrating object-oriented programming and concurrency was
not a well-understood topic and, therefore, not suitable for an ISO standard
professional programming language. Also, there were inevitable concerns that
the scope of potential language changes being proposed was too large for the
Ada community to accept.
In spite of this, there is some level of integration between tagged types and
tasks and protected objects. Tagged types after all are just part of the typing
mechanism and therefore can be used by protected types and tasks types in the
same way as other types. Indeed paradigms for their use have been developed
(see (Burns and Wellings, 1998) chapter 13). However, these approaches cannot
get around the basic limitation that protected types and task types cannot be
extended.
4 Making Concurrent Programming in Ada 95 more Object-Oriented
Now that the dust is beginning to settle around the Ada 95 standard, it is
important to begin to look to the future. The object-oriented paradigm has
largely been welcomed by the Ada community. Even the real-time community,
which was originally sceptical of the facilities and worried about the impact
they would have on predictability, is beginning to see some of the advantages.
Furthermore, as people become more proficient in the use of the language they
begin to realise that better integration between the concurrency and object-oriented
features would be beneficial. The goal of this paper is to continue
the debate on how best to achieve full integration in any future version of the
language.
There are the following classes of basic types in Ada:
- scalar types - such as integer types, enumeration types, real types, etc.
- structured types - such as record types and array types
- protected types
- task types
- access types
Access types are special as they provide the mechanism by which pointers to the
other types can be created. Note that, although access types to subprograms
(procedures and functions) can be created, subprograms are not a basic type of
the language.
In providing tagged types, Ada 95 has provided a mechanism whereby a
structured type can be extended. It should be stressed, though, that only record
types can be extended, not array types. This is understandable as the record
is the primary mechanism for grouping together items which will represent the
heterogeneous attributes of the objects. Furthermore, variable length array
manipulation is already catered for in the language. Similarly, scalar types can
already be extended using subtypes and derived types.
Allowing records to be extended is thus consistent with allowing variable
length arrays, subtypes and derived types.
A protected type is similar to a record in that it groups items together.
(In the case of a protected type, these items must be accessed under mutual
exclusion.) It would be consistent, then, to allow a protected type to be extended
with additional items. The following sections will discuss some of the issues in
allowing extensible protected types. The issues associated with extensible task
types are the subject of on-going research.
5 Extensible Protected Types
The requirements for extensible protected types are easy to articulate. In par-
ticular, extensible (tagged) protected types should allow:
- new data fields to be added,
- new functions, procedures and entries to be added,
- functions, procedures and entries to be overridden,
- class-wide programming to be performed.
These simple requirements raise many complex semantic issues. Further-
more, any proposed extensions should be fully integrated with the Ada model
of object-oriented programming.
5.1 Declaration and Primitive Operations
For consistency with the usage elsewhere in Ada, the word 'tagged' indicates
that a protected type is extensible. As described in section 3.1, a protected
type encapsulates the operations that can be performed on its protected data.
Consequently, the primitive operations of a tagged protected type are, in effect,
already defined. They are, of course, similar to primitive operations of other
tagged types in spirit but not in syntax, since other primitive operations are
defined by being declared in the same package specification as a tagged type.
Consider the following example:
protected type T is tagged
   procedure W (...);
   function X (...) return ...;
   entry Y (...);
private
   -- data attributes of T
end T;
W, X, and Y can be viewed as primitive operations on T. Interestingly, a call
such as O.X (where O is an instance of T) takes a syntactic form similar to that
in most object-oriented languages.
Indeed, Ada's protected object syntax is in conflict with the language's usual
representation of an 'object' (see Section 3.2).
5.2 Inheritance
Tagged protected types can be extended in the same manner as tagged types.
Hence,
protected type T1 is new T with
   procedure W (...);  -- override T.W
   procedure Z (...);  -- a new method
private
   -- new attributes of T1
end T1;
The issue of overriding protected entries will be considered in section 5.4.
One consideration is whether or not private fields in the parent type (T) can
be seen in the child type (T1). In protected types, all data has to be declared as
private so that it can not be changed without first obtaining mutual exclusion.
There are four possible approaches to this visibility issue:
1. Prevent a child protected object from accessing the parent's data. This
would limit the child's power to modify the behaviour of its parent object,
it only being allowed to invoke operations in its parent.
2. Allow a child protected object full access to private data declared in its
parent. This would be more flexible but has the potential to compromise
the parent abstraction.
3. Provide an additional keyword to distinguish between data that is fully
private and data that is private but visible to child types. This keyword
would be used in a similar way to private (much like C++ uses its keyword
'protected' to permit descendent classes direct access to inherited
data items).
4. Allow child protected types to access private components of their parent
protected type if they are declared in a child of the package in which their
parent protected type is declared. This would be slightly inconsistent with
the way protected types currently work in Ada because protected types
do not rely on using packages to provide encapsulation.
The remainder of this paper will assume the second method, as it provides the
most flexibility and requires no new keywords. It is also consistent with normal
tagged types.
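Under the proposed syntax and this second approach, a child could therefore manipulate inherited private data directly, along the following lines (an illustrative sketch of ours; Counter, Bounded_Counter, Count and Limit are invented names):
protected type Counter is tagged
   procedure Increment;
private
   Count : Natural := 0;
end Counter;

protected type Bounded_Counter is new Counter with
   procedure Increment;   -- overrides Counter.Increment
private
   Limit : Natural := 100;
end Bounded_Counter;

protected body Bounded_Counter is
   procedure Increment is
   begin
      if Count < Limit then    -- Count is private data inherited from Counter
         Count := Count + 1;   -- direct access, as permitted by the second approach
      end if;
   end Increment;
end Bounded_Counter;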
If a procedure in a child protected type calls a procedure or function in its
parent, it should not have to wait to obtain the lock on the protected object
before entering the parent, otherwise deadlock would occur. There is one lock
for each instance of a protected type and the same lock should be used when
the protected object is converted to a parent type. This is consistent with the
current Ada approach when one procedure/function calls another in the same
protected object.
5.3 Dispatching and re-dispatching
Given a hierarchy of tagged protected types, it is possible to create class-wide
types and accesses to class-wide types; for example:
type Pt is access protected type T'Class;
P : Pt := new ...;  -- some type in the hierarchy
P.W (...);          -- dispatches to the appropriate protected object
Of course from within P.W, it should be possible to convert back to the class-wide
type and re-dispatch to another primitive operation. Unfortunately, an operation
inside a tagged protected type does not have the option of converting the
object (on which it was originally dispatched) to a class-wide type because this
object is passed implicitly to the operation. There are two possible strategies
which can be taken:
1. make all calls to other operations from within a tagged protected type
dispatching, or
2. use some form of syntactic change to make it possible to specify whether
to re-dispatch or not.
The first strategy is not ideal because it is often useful to be able to call an
operation in the same type or a parent type without re-dispatching. In addition,
the first strategy is inconsistent with ordinary tagged types where re-dispatching
is not automatic.
A solution according to the second strategy uses calls of the form type.operation,
where type is the type to which the implicit protected object should be con-
verted. The following is an example of this syntax for a re-dispatch:
protected body T is
   procedure P (...) is
   begin
      T'Class.Q (...);  -- re-dispatch
   end P;
end T;
T'Class indicates the type to which the protected object (which is in the hierarchy
of type T'Class but which is being viewed as type T) that was passed
implicitly to P should be view converted. This allows it to define which Q procedure
to call. This syntax is also necessary to allow an operation to call an
overridden operation in its parent, for example:
protected body T1 is  -- an extension of T
   procedure W (...) is  -- overrides the W procedure of T
   begin
      T.W (...);  -- calls the parent operation
   end W;
end T1;
This new syntax does not conflict with any other part of the language because
it is strictly only a type that precedes the period. If it could be an instance of
a protected type then the call could be mis-interpreted as an external call: the
Ada Reference Manual (Intermetrics, 1995) distinguishes between external and
internal calls by the use, or not, of the full protected object name (Burns and
Wellings, 1998). The call would then be a bounded error. (An alternative
syntactic representation might be type'operation.)
Requeuing can also lead to situations where re-dispatching is desirable. Just
as with procedures, re-dispatching would only occur when explicitly requested,
so for example, in a protected type T, requeue E would not dispatch whereas
requeue T'Class.E would dispatch. Requeuing to a parent entry would require
barrier re-evaluation. Requeues from other protected objects or from accept
statements in tasks could also involve dispatching to the correct operation in a
similar way.
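For instance (a sketch of ours in the proposed syntax; Printer, Print and Urgent_Print are invented names):
protected type Printer is tagged
   entry Print (Line : in String);
   entry Urgent_Print (Line : in String);
private
   Ready : Boolean := False;
end Printer;

protected body Printer is
   entry Print (Line : in String) when Ready is
   begin
      null;  -- output Line
   end Print;

   entry Urgent_Print (Line : in String) when True is
   begin
      requeue Print;                  -- statically bound to Printer.Print
      -- requeue Printer'Class.Print; -- would dispatch to an overriding Print
   end Urgent_Print;
end Printer;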
5.4 Entry Calls
Allowing entries to be primitive operations of extensible protected types raises
many inter-related complex issues. These include:
1. Can a child entry call its parent's entry? - From an object-oriented per-
spective, it is essential to allow the child entry to call its parent. This is
how reuse is achieved. Unfortunately, from the protected object perspec-
tive, calling an entry is a potentially suspending operation and these are
not allowed within the body of a protected operation (see section 3.1). It
is clear that a compromise is required and that a child entry must be able
to extend the facilities provided by its parent.
2. What is the relationship, if any, between the parent's barrier and the
child's barrier? - There are three possibilities: no relationship, the child
can weaken the parent's barrier, or the child can strengthen the parent's
barrier. Fr-lund (Fr-lund, 1992) suggests that as the child method extends
the parent's method, the child must have more restrictive synchronisation
constraints, in order to ensure that the parent's state remains consistent
y . However, he also indicates that if the behaviour of the child method
totally redefines that of the parent, it should be possible to redefine the
synchronisation constraints. Alternatively, it can also be argued that the
synchronisation constraints of the child should weaken those of the par-
ent, not strengthen them, in order to avoid violating the substitutability
property of subtypes (Liskov and Wing, 1994).
3. How many queues does an implementation need to maintain for an overridden
entry? - If there is no relationship between the parent and the child
barrier, it is necessary to maintain a separate entry queue for each over-ridden
entry. If there is more than one queue, the 'Count attribute should
reflect this. Hence 'Count might give different values when called from
the parent or when called from the child. A problem with using separate
entry queues with different barriers for overridden and overriding entries
is that it is harder to theorise about the order of entries being serviced.
Normally entries are serviced in first-in, first-out (FIFO) order but with
separate queues, each with a separate barrier, this might not be possible.
For example, a later call to an overridden entry will be accepted before an
earlier call to an overriding entry if the barrier for the overridden entry
becomes true with the overriding entry's barrier remaining false.
4. What happens if a parent entry requeues to another entry? - When an
entry call requeues to another entry, control is not returned to the calling
entry but to the task which originally made the entry call (see section
3.1). This means that when a child entry calls its parent and the parent
entry requeues, control is not returned to the child. Given that the code
of the parent is invisible to the child, this would effectively prohibit the
child entry from undertaking any post-processing.
In order to reduce the number of options for discussion, it is assumed that
child entries must strengthen their parent's barrier for the remainder of the
paper. The syntax and when is used to indicate this. (Short-circuit control forms
such as and then when could also be made available.) To avoid having the
body of a child protected object depend on the body of its parent, it is necessary
to move the declaration of the barrier from the body to the specification of the
protected type (private part). Consider
protected type T is tagged
   entry E;
private
   I : Integer := 0;
   entry E when E'Count > 1;  -- barrier given in the private part
end T;

protected type T1 is new T with
   ...
private
   entry E and when I > 0;
end T1;
If a call was made to A.E (A being an instance of T1), this would be statically
defined as a call to T1.E and would be subject to its barrier (E'Count > 1 and
I > 0). The barrier would
be repeated in the entry body.
Even with barrier strengthening, the issue of barrier evaluation must be
addressed. Consider the case where a tagged protected object is converted to
its parent type (using a view conversion external to the protected type) and
then an entry is called on that type. It is not clear which barrier needs to be
passed. There are three possible strategies that can be taken:
1. Use the barrier associated with the exact entry which is being called,
ignoring any barrier associated with an entry which overrides this exact
entry. As the parent type does not know about new data added in the
child, it could be argued that allowing an entry in the parent to execute
when the child has strengthened the barrier for that entry should be safe.
Unfortunately, this is not the case. Consider a bounded buffer which
has been extended so that the Put and Get operations can be locked.
Here, if the lockable buffer is viewed converted to a normal buffer and
Get/Put called with only the buffer barriers evaluated, a locked buffer
will be accessible even if it is locked. Furthermore, this approach would
also mean that there would be separate entry queues for overridden entries.
The problems associated with maintaining more than one entry queue per
overridden entry have already been mentioned.
2. Use the barrier associated with the entry to which dispatching would occur
if the object was converted to a class wide type (i.e., the barrier of the
entry of the object's actual type). This is the strongest barrier and would
allow safe re-dispatching in the entry body. This method results in one
entry queue per entry instead of one for each entry and every overridden
entry. However, it is perhaps misleading as it is the parent's code which
is executed but the child's barrier expression that is evaluated.
3. Allow view conversions from inside the protected object but require that
all external calls are dispatching calls. Hence, there is only one entry
queue, and all external calls would always invoke the primitive operations
of the object's actual type. The problem with this approach is that currently
Ada does not dispatch by default. Consequently, this approach
would introduce an inconsistency between the way tagged types and extensible
protected types are treated.
For the remainder of this paper, it is assumed that external calls to protected
objects always dispatch. (To harmonize with regular tagged types, a new pragma
could be introduced, called External_Calls_Always_Dispatch, which can be applied
to regular tagged types.)
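The lockable buffer mentioned above could then be sketched as follows (our own illustration in the proposed syntax, building on Integer_Bounded_Buffer and assuming it had been declared as a tagged protected type):
protected type Lockable_Buffer is new Integer_Bounded_Buffer with
   procedure Lock;
   procedure Unlock;
private
   Locked : Boolean := False;
   entry Put (I : in Integer) and when not Locked;   -- strengthened barriers
   entry Get (I : out Integer) and when not Locked;
end Lockable_Buffer;
With external calls always dispatching, a Lockable_Buffer that has been view converted to Integer_Bounded_Buffer still evaluates the strengthened barriers, so a locked buffer cannot be circumvented.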
5.4.1 Calling the Parent Entry and Parent Requeues
So far this section has discussed the various issues associated with overridden
entry calls. However, details of how the child entry actually calls its parent
have been left unspecified. The main problem is that Ada forbids an entry from
explicitly calling another entry (see section 3.1). There are several approaches
to this problem.
1. Use requeue. - Although Ada forbids nested entry calls, it does allow an
entry call to be requeued. Hence, the child can only requeue to the parent.
Requeue gives the impression of calling the parent but it is not possible for
the child to do any post-processing once the parent entry has executed (as
the call returns to the caller of the child entry). As a requeue, the parent's
barrier would have to be re-evaluated. Given that the child barrier has
strengthened the parent's barrier, the parent's barrier would normally be
open. If this is not the case, an exception is raised (to queue the call would
require more than one entry queue). (With the requeue approach and multiple
entry queues, there need not be any relationship between the parent and the
child barriers; such an approach has already been ruled out in the previous
subsection.) Furthermore, if atomicity is to be
maintained and the parent requeue is to be part of the same protected
action, the parent entry must be serviced before any other entries whose
barriers also happen to be open. Hence, this requeue has slightly different
semantics from a requeue between unrelated entries.
2. Allow the child entry to call the parent entry and treat that call as a procedure
call. - It is clear that calling the parent entry is different from a
normal entry call; special syntax has already been introduced to facilitate
it (see section 5.3). In this approach, the parent call is viewed as a procedure
call and therefore not a potentially suspending operation. However,
the parent's barrier is still a potential cause for concern. One option is to
view the barrier as an assertion and raise an exception if it is not true.
(Special consideration would need to be given to barriers which use the 'Count
attribute in the parent, since these will clearly change when the child begins
execution.) The other option is not to test the barrier at all, based on the premise
that the barrier was true when the child was called and, therefore, need
not be re-evaluated until the whole protected action is completed.
With either of these approaches, there is still the problem that control is
not returned to the child if the parent entry requeues requests to other entries
for servicing. This, of course, could be made illegal and an exception raised.
However, requeue is an essential part of the Ada 95 model and to effectively
forbid its use with extensible protected types would be a severe restriction.
The remainder of this paper will assume a model where parent calls are
treated as procedure calls (the issue of the assertion is left open) and requeue
in the parent is allowed. A consequence of this is that no post-processing is
allowed after a parent call.
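Schematically, and in the proposed syntax, a child entry would then look as follows (a sketch of ours; Parent_PT, Child_PT, Extra_Open and Record_Request are invented names):
protected body Child_PT is               -- Child_PT is assumed to extend Parent_PT
   entry E (X : in Integer)
      and when Extra_Open is             -- strengthens the barrier of Parent_PT.E
   begin
      Record_Request (X);                -- pre-processing is still possible
      Parent_PT.E (X);                   -- the parent entry, treated as a procedure call
      -- nothing may follow this call: Parent_PT.E might requeue elsewhere
   end E;
end Child_PT;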
6 Integration into the Full Ada 95 Model
The above section has considered the basic extensible protected type model. Of
course, any proposal for the introduction of such a facility must also consider
the full implications of its introduction. This section considers the following
topics:
- private types,
- abstract types, and
- generics and mix-in inheritance.
6.1 Private Types
The encapsulation mechanism of Ada 95, the package, gives the programmer
great control over the visibility of the entities declared in a package. In par-
ticular, Ada 95 supports the notion of private and limited private types, i.e.
types whose internal structure is hidden for clients of the packages (where the
types are declared) and that can be modified only through the primitive operations
declared in these packages (for these types). A protected type is a limited
type, hence it is necessary to show how extensible protected types integrate into
limited private types. The following illustrates how this is easily achieved.
In order to make a type private, its full definition is moved to the private
part of the package. This can also be done for extensible protected types:
package Example1 is
   protected type Pt0 is tagged private;
private
   protected type Pt0 is tagged
      -- primitive operations ...
   private
      -- data items etc.
   end Pt0;
end Example1;
Note that in this example, the primitive operations of type Pt0 are all declared
in the private part of the package and are thus visible only in child packages of
package Example1. Other packages cannot do anything with type Pt0, because
they do not have access to the type's primitive operations. Nevertheless, this
construct can be useful for class-wide programming using access types, e.g.
through
type Pt_Ref is access Pt0'Class;
Private types can also give a finer control over visibility. One might declare
a type and make some of its primitive operations publicly visible while other
primitive operations would be private (and thus visible only to child packages).
For example:
package Example2 is
   protected type Pt1 is tagged
      -- primitive operations, visible anywhere
   with private
      -- data items etc.
   end Pt1;
private
   protected type Pt1 is tagged
      -- private primitive operations, visible only in child packages
   private
      -- data items etc.
   end Pt1;
end Example2;
Note that the public declaration of type Pt1 uses "with private" instead of only
"private" to start its private section. This is supposed to give a syntactical indication
that the public view of Pt1 is an incomplete type that must be completed
later on in the private part of the package.
Alternatively a protected type can be declared to have a private extension.
Given a protected type Pt2:
package Base is
   protected type Pt2 is tagged
      ...
   private
   end Pt2;
end Base;
A private extension can then be written as:
with Base;
package Example3 is
   protected type Pt3 is new Base.Pt2 with private;
private
   protected type Pt3 is new Base.Pt2 with
      -- Additional primitive operations
   private
      -- Additional data items
   end Pt3;
end Example3;
Here, only the features inherited from Pt2 are publicly visible, the additional
features introduced in the private part of the package are private and hence
visible only in child packages of package Example3.
Private types can be used in Ada 95 to implement hidden and semi-hidden
inheritance, two forms of implementation inheritance (as opposed to interface in-
heritance, i.e. subtyping). For instance, one may declare a tagged type publicly
as a root type (i.e., not derived from any other type) while privately deriving
it from another tagged type to reuse the latter's implementation. This hidden
inheritance is also possible with extended protected types. Given the above
package Base, hidden inheritance from Pt2 can be implemented as follows:
with Base;
package Example4 is
   -- the public view of Pt4 is a root type
   protected type Pt4 is tagged
      -- primitive operations, visible anywhere
   with private
      -- data items etc.
   end Pt4;
private
   -- the private view of Pt4 is derived from Pt2
   protected type Pt4 is new Base.Pt2 with
      -- additional primitive operations, visible only in child packages
   with private
      -- additional data items etc.
   end Pt4;
end Example4;
The derivation of Pt4 from Pt2 is not publicly visible: operations and data
items inherited from Pt2 cannot be accessed by other packages. If some of
the primitive operations inherited from Pt2 should in fact be visible in the
public view of Pt4, too, Pt4 must re-declare them and implement them as
call-throughs to the privately inherited primitive operations of Pt2. In child
packages of package Example4, the derivation relationship is exposed and hence
these inherited features are accessible in child packages.
Semi-hidden inheritance is similar in spirit, but exposes part of the inheritance
relation. Given an existing hierarchy of extensible protected types:
package Example5_Base is
   protected type Pt5 is tagged
      ...
   private
   end Pt5;
   protected type Pt6 is new Pt5 with
      ...
   private
   end Pt6;
end Example5_Base;
One can now declare a new type Pt7 that uses interface inheritance from Pt5,
but implementation inheritance from some type derived from Pt5, e.g. from
with Example5_Base; use Example5_Base;
package Example5 is
   protected type Pt7 is new Pt5 with
      ...
   with private
   end Pt7;
private
   protected type Pt7 is new Pt6 with
      ...
   private
   end Pt7;
end Example5;
As these examples show, extensible protected types offer the same expressive
power concerning private types as ordinary tagged types. In fact, because protected
types are an encapsulation unit in their own right (in addition to the
encapsulation provided by packages), extensible protected types offer an even
greater visibility control than ordinary tagged types. Primitive operations of
an extensible protected type declared in the type's private section are visible
only within that type itself or within a child extension of that type. Combining
this kind of visibility (which is similar to Java's `protected' declarator) with the
visibility rules for packages gives some visibility specifications that do not exist
for ordinary tagged types.
There is one difficulty with this scheme, though. It is currently possible in
Ada 95 to define a limited private type that is implemented as a protected type.
This raises the question whether the following should be legal:
package Example6 is
   type T is tagged limited private;
private
   protected type T is tagged
      ...
   private
   end T;
end Example6;
Here, although child packages could treat T as an extensible protected type,
other client packages could do very little with the type. Furthermore, the mixture
of protected and non-protected views of one and the same type may give
rise to incalculable implementation problems because in some cases accesses to
an object would have to be done under mutual exclusion even if the view of
the object's type was not protected, simply because its full view was a protected
type. Consequently, the kind of private completion shown in Example6
is probably best disallowed.
6.2 Abstract Extensible Protected Types
Ada 95 allows tagged types and their primitive operations to be abstract. This
means that instances of the type cannot be created. An abstract type can be an
extension of another abstract type. A concrete tagged type can be an extension
from an abstract type. An abstract primitive operation can only be declared for
an abstract type. However, an abstract type can have non-abstract primitive
operations.
The Ada 95 model can easily be applied to extensible protected types. The
following examples illustrate the integration:
protected type Ept is abstract tagged
   -- Concrete operations:
   function F (...) return ...;
   procedure P (...);
   -- Abstract operations:
   function F1 (...) return ... is abstract;
   procedure P1 (...) is abstract;
   entry E1 (...) is abstract;
private
   ...
end Ept;
The one issue that is perhaps not obvious concerns whether an abstract entry
can have a barrier. On the one hand, an abstract entry cannot be called so any
barrier is superfluous. On the other hand, the programmer may want to define
an abstraction where it is appropriate to guard an abstract entry. For example:
protected type Lockable_Operation is abstract tagged
   procedure Lock;
   procedure Unlock;
   entry Operation (...) is abstract;
private
   Locked : Boolean := False;
   entry Operation (...) when not Locked;
end Lockable_Operation;
The bodies of Lock and Unlock set the Locked variable to the corresponding
values. Now because of the barrier strengthening rule, the when not Locked
barrier will automatically be enforced on any concrete implementation of the
operation.
The remainder of the paper will assume that abstract entries do not have bar-
riers. The above example can be rewritten with a concrete entry for Operation
that has a null body. It should be noted, however, that with a concrete null-
operation, one cannot force concrete children to supply an implementation for
the entry. With an abstract entry, one can.
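One possible form of that rewrite (a sketch of ours, still in the proposed syntax) would be:
protected type Lockable_Operation is abstract tagged
   procedure Lock;
   procedure Unlock;
   entry Operation (...);                -- now concrete, with a null body
private
   Locked : Boolean := False;
   entry Operation (...) when not Locked;
end Lockable_Operation;

protected body Lockable_Operation is
   entry Operation (...) when not Locked is
   begin
      null;   -- children may override this, but can no longer be forced to
   end Operation;
   ...
end Lockable_Operation;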
6.3 Generics and Mix-in Inheritance
Ada 95 does not support multiple inheritance. However, it does support various
approaches which can be used to achieve the desired effect. One such approach
is mix-in inheritance, where a generic package is declared that takes a formal
parameter of a tagged type. A version of Ada with extensible protected types must
also allow them to be parameters to generics and hence take part in mix-in
inheritance.
As with normal tagged types, two kinds of generic formal parameters can be
defined:
type Base_Type is [abstract] protected tagged private;
type Derived_From is [abstract] new protected Derived [with private];
In the former, the generic body has no knowledge of the extensible protected
type actual parameter. In the latter, the actual type must be a type in the tree
of extensible protected types rooted at Derived.
Unfortunately, these facilities are not enough to cope with situations involving
entries. One of the causes of the inheritance anomaly (Matsuoka and
Yonezawa, 1993) (see also section 7) is that adding code in a child object affects
the synchronisation code in the parent. Consider the case of a predefined lock
which can be mixed in with any other protected object to define a lockable
version. Without extra functionality, there is no way to express this. For these
reasons, the generic modifier entry <> is used to mean all the entries of the
actual parameter. The lockable mix-in type can now be achieved:
generic
   type Base_Type is [abstract] protected tagged private;
package Lockable_G is
   protected type Lockable_Type is new Base_Type with
      procedure Lock;
      procedure Unlock;
   private
      Locked : Boolean := False;
      entry <> and when not Locked;
   end Lockable_Type;
end Lockable_G;
The code entry <> and when not Locked indicates that all entries in the
parent protected type should have their barriers strengthened by the boolean
expression not Locked.
The entry <> feature makes it possible to modify the barriers of entries that
are unknown at the time the generic unit is written. At the time the generic
unit is instantiated, the entries of the actual generic parameter supplied for
Base_Type are known, and entry <> then denotes a well-defined set of primitive
operations.
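An instantiation might then look like this (a sketch of ours, assuming that Integer_Bounded_Buffer from section 3.1 had been declared as a tagged protected type):
package Lockable_Buffers is new Lockable_G (Base_Type => Integer_Bounded_Buffer);

B : Lockable_Buffers.Lockable_Type;
...
B.Lock;        -- from now on both Put and Get are barred by "not Locked"
-- B.Put (3);  -- a caller would block here until B.Unlock is executed
B.Unlock;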
This generic barrier modifier is similar to Frølund's "all-except" specifier (Frølund,
1992), except that the latter also applies to primitive operations that are added
later on in further derivations, whereas entry <> does not. If new primitive
operations are added in further derivations, it is the programmer's responsibility
to make sure that these new entries get the right barriers (i.e., include when
not Locked).
Clearly, the effect is limited to entries while procedures are unaffected. This
gives rise to the following anomaly: If all the barriers need to be strengthened
by adding the condition not Locked, it may well be that the inherited procedures
need to be similarly guarded. This cannot be done without introducing
a mechanism for overriding procedures with entries. This is an Ada-specific
inheritance anomaly, which is discussed in the next section.
7 Inheritance Anomaly
The combination of the object-oriented paradigm with mechanisms for concurrent
programming may give rise to the so-called "inheritance anomaly" (Matsuoka
and Yonezawa, 1993). An inheritance anomaly exists if the synchronization between
operations of a class is not local but may depend on the whole set of
operations present for the class. When a subclass adds new operations, it may
therefore become necessary to change the synchronization defined in the parent
class to account for these new operations. This section examines how extensible
protected types can deal with this inheritance anomaly.
Synchronization for extensible protected types is done via entry barriers. An
entry barrier can be interpreted in two slightly different ways:
- as a precondition (which must become a guard when concurrency is introduced
in an object-oriented programming language, as (Meyer, 1997) ar-
gues). In this sense, entries are the equivalent of partial operations (Herlihy
and Wing, 1994).
- as a synchronization constraint.
The use of entry barriers (i.e., guards) for synchronization makes extended protected
types immune against one of the kinds of inheritance anomalies identified
by (Matsuoka and Yonezawa, 1993): guards are not subject to inheritance
anomalies caused by a partitioning of states.
To avoid a major break of encapsulation, it is mandatory for a concurrent
object-oriented programming language to have a way to re-use existing synchronization
code defined for a parent class and to incrementally modify this
inherited synchronization in a child class. In our proposal, this is given by the
and when clause, which incrementally modifies an inherited entry barrier
and hence the inherited synchronization code.
Inheritance anomalies in Ada 95 with extended protected types can still
occur, though. As (Mitchell and Wellings, 1996) argue, the root cause of inheritance
anomalies lies in a lack of expressive power of concurrent object-oriented
programming languages: if not all five criteria identified by (Bloom, 1979) are
fulfilled, inheritance anomalies may occur. Ada 95 satisfies only three of these
criteria; synchronization based on history information cannot be expressed directly
using entry barriers (local state must instead be used to record execution
history), and synchronization based on request parameter values also is not
possible directly in Ada 95. The example for the resource controller shown in
section 9.2 exhibits both of these inheritance anomalies. Because the barrier
of entry Allocate_N cannot depend on the parameter N itself, an internal requeue
to Wait_For_N must be used instead. The synchronization constraint for
Wait_For_N itself is history-sensitive: the operation should be allowed only after
a call to Deallocate has freed some resources. As a result, Deallocate must
be overridden to record this history information in local state, although both
the synchronization constraints for Deallocate itself as well as its functionality
remain unchanged.
In addition to that, extensible protected types may suffer from an Ada-
specific inheritance anomaly. As synchronization is done via barriers, only
entries can be synchronised, but not procedures. If the synchronization constraints
of a subtype should restrict an inherited primitive operation that was
implemented as a procedure in the parent type, the subtype would have to override
this procedure by an entry. However, when using class-wide programming,
a task may assume that a protected operation is implemented as a procedure
(as that is what the base type indicates) and is therefore non-blocking. At
run-time the call might dispatch to an entry and block on the barrier, which
would make the call illegal if it occurred within a protected action. For these
reasons, overriding procedures with entries should not be allowed for extensible
protected types.
As discussed in section 6.3, further Ada-specific inheritance anomalies that
might arise when mix-in inheritance is used can be avoided by providing additional
functionality for generics. The new generic barrier modifier entry <>
alone is not sufficient to avoid the introduction of new Ada-specific inheritance
anomalies. Because the generic mix-in class must define the synchronization for
the complete class resulting from the combination of the mix-in class with some
a priori unknown base class, the entry <> barrier modifier was introduced. It
allows the mix-in class to impose its own synchronization constraints on an
unknown set of inherited operations. However, it is also necessary to have a
way for the mix-in class to adapt the synchronization of its additional primitive
operations to the synchronization constraints imposed by an actual base type.
When the generic mix-in is instantiated with some base type to create a new result
type, it must be possible to parametrise the mix-in's synchronization based
upon the base type in order to obtain the correct synchronization for the new
result type. How such a parametrisation could be obtained is still a topic of
on-going research.
8 Interaction with Tagged Types
So far, the discussion has focused on how protected types can be extended.
This section now considers the interaction between tagged types and protected
tagged types.
Consider the following which defines a simple buffer:
package Simple_Buffer is
   type Data_T is tagged private;
   procedure Write (M : in out Data_T; X : in Integer);
   procedure Read (M : in Data_T; X : out Integer);
private
   type Data_T is tagged
      record
         ...   -- buffer state
      end record;
end Simple_Buffer;
Such a buffer can only be used safely in a sequential environment. To make a
pre-written buffer safe for concurrent access requires it to be encapsulated in a
protected type. The following illustrates how this can easily be achieved.
protected type Buffer is tagged
   procedure Write (X : in Integer);
   procedure Read (X : out Integer);
private
   ...
end Buffer;
The buffer can now only be accessed through its protected interface.
Of course if the Buffer protected type is extended, the following will dispatch
on the buffer.
type B is access Buffer'Class;
B1 : B := new ...;
Alternatively, Simple_Buffer.Data_T can be made protected but not encapsulated
by the following:

protected type Buffer is tagged
   procedure Write (M : in out Simple_Buffer.Data_T);
   procedure Read (M : in out Simple_Buffer.Data_T);
private
   ...
end Buffer;
This would allow the buffer to be accessed directly (without the protection
overheads) where the situation dictates that it is safe to do so.
Combining extensible protected types with class-wide tagged types allow for
even more powerful paradigms. Consider
protected type Buffer is tagged
   procedure Write (M : in out Simple_Buffer.Data_T'Class);
   procedure Read (M : in out Simple_Buffer.Data_T'Class);
private
   ...
end Buffer;
Here, both the protected type and the tagged type can be easily extended.
The program can arrange for dispatching on the Buffer and from within the
Write/Read routines. Further, by using access discriminants the data can be
encapsulated and protected from any concurrent use.
type Ad is access Simple_Buffer.Data_T'Class;
protected type Buffer (D : Ad) is tagged   -- a normal discriminant
   procedure Write (X : in Integer);
   procedure Read (X : out Integer);
private
   ...
end Buffer;
type B is access Buffer'Class;
B1 : B := new Buffer (new Simple_Buffer.Data_T);
Here, B1 will dispatch to the correct buffer and Write/Read will dispatch to the
correct data which will be encapsulated.
9 Examples
This section presents two examples illustrating the principles discussed in this
paper. They assume all external calls dispatch, there is no post-processing after
parent calls, no checking of parents' barriers, and that the child has access to
the parent's state.
9.1 Signals
In concurrent programming, signals are often used to inform tasks that events
have occurred. Signals often have different forms: there are transient and persistent
signals, those that wake up only a single task and those that wake up
all tasks. This section illustrates how these abstractions can be built using
extensible protected types.
Consider first, an abstract definition of a signal.
package Signals is
   protected type Signal is abstract
      procedure Send;
      entry Wait is abstract;
   private
      Signal_Arrived : Boolean := False;
   end Signal;
   type All_Signals is access Signal'Class;
end Signals;

package body Signals is
   protected body Signal is
      procedure Send is
      begin
         Signal_Arrived := True;
      end Send;
   end Signal;
end Signals;
Now to create a persistent signal:
with Signals; use Signals;
package Persistent_Signals is
   protected type Persistent_Signal is new Signal with
      entry Wait;
   private
      entry Wait when Signal_Arrived;
   end Persistent_Signal;
end Persistent_Signals;

package body Persistent_Signals is
   protected body Persistent_Signal is
      entry Wait when Signal_Arrived is
      begin
         Signal_Arrived := False;
      end Wait;
   end Persistent_Signal;
end Persistent_Signals;
To create a transient signal
with Signals; use Signals;
package Transient_Signals is
   protected type Transient_Signal is new Signal with
      procedure Send;
      entry Wait;
   private
      entry Wait when Signal_Arrived;
   end Transient_Signal;
end Transient_Signals;

package body Transient_Signals is
   protected body Transient_Signal is
      procedure Send is
      begin
         if Wait'Count = 0 then
            return;   -- no task is waiting: the signal is lost
         end if;
         Signal_Arrived := True;
      end Send;
      entry Wait when Signal_Arrived is
      begin
         Signal_Arrived := False;
      end Wait;
   end Transient_Signal;
end Transient_Signals;
To create a signal which will release all tasks:

type Base_Signal is new protected Signal;

package Release_All_Signals is
   protected type Release_All_Signal is new Base_Signal with
      entry Wait;
   private
      entry Wait and when True;
   end Release_All_Signal;
end Release_All_Signals;

package body Release_All_Signals is
   protected body Release_All_Signal is
      entry Wait and when True is
      begin
         if Wait'Count /= 0 then
            return;
         end if;
         Base_Signal.Wait;
      end Wait;
   end Release_All_Signal;
end Release_All_Signals;
Now, of course, given

My_Signal : All_Signals := ...;

a call such as My_Signal.Wait will dispatch to the appropriate signal handler.
9.2 Advanced Resource Control
Resource allocation is a fundamental problem in all aspects of concurrent pro-
gramming. Its consideration exercises all Bloom's criteria (see section 7) and
forms an appropriate basis for assessing the synchronisation mechanisms of concurrent
languages, such as Ada.
Consider the problem of constructing a resource controller that allocates
some resource to a group of client agents. There are a number of instances of
the resource but the number is bounded; contention is possible and must be
catered for in the design of the program. (Mitchell and Wellings, 1996) propose
the following resource controller problem as a benchmark for concurrent object-oriented
programming languages.
Implement a resource controller with 4 operations:
- Allocate: to allocate one resource,
- Deallocate: to deallocate a resource (which thus becomes
available again for allocation),
- Hold: to inhibit allocation until a call to Resume,
- Resume: which allows allocation again.
There are the following constraints on these operations:
1. Allocate is accepted when resources are available and the controller
is not held (synchronization on local state and history)
2. Deallocate is accepted when resources have been allocated
(synchronization on local state)
3. calls to Hold must be serviced before calls to Allocate (syn-
chronization on type of request)
4. calls to Resume are accepted only when the controller is held
(synchronization on history information).
As Ada 95 has no deontic logic operators, not all history information can be
expressed directly in barriers. However, it is possible to use local state variables
to record execution history.
The following solution simplifies the presentation by modelling the resources
by a counter indicating the number of free resources. Requirement 2 is interpreted
as meaning that an exception can be raised if an attempt is made to
deallocate resources which have not yet been allocated.
package Rsc_Controller is
   Max_Resources_Available : constant Natural := 100;   -- For example
   No_Resources_Allocated : exception;                  -- raised by Deallocate
   protected type Simple_Resource_Controller is tagged
      entry Allocate;
      procedure Deallocate;
      entry Hold;
      entry Resume;
   private
      entry Allocate when Free > 0 and not Locked and Hold'Count = 0;   -- req. 1 and 3
      entry Hold when not Locked;
      entry Resume when Locked;                                         -- req. 4
      Free   : Natural := Max_Resources_Available;
      Taken  : Natural := 0;
      Locked : Boolean := False;
   end Simple_Resource_Controller;
end Rsc_Controller;
The body of this package simply keeps track of the resources taken and freed,
and sets and resets the Locked variable.
package body Rsc_Controller is
   protected body Simple_Resource_Controller is
      entry Allocate when Free > 0 and not Locked and Hold'Count = 0 is
      begin
         Free  := Free - 1;    -- allocate resource
         Taken := Taken + 1;
      end Allocate;

      procedure Deallocate is
      begin
         if Taken = 0 then
            raise No_Resources_Allocated;
         end if;
         Free  := Free + 1;    -- return resource
         Taken := Taken - 1;
      end Deallocate;

      entry Hold when not Locked is
      begin
         Locked := True;
      end Hold;

      entry Resume when Locked is
      begin
         Locked := False;
      end Resume;
   end Simple_Resource_Controller;
end Rsc_Controller;
(Mitchell and Wellings, 1996) then extend the problem to consider the impact
of inheritance:
Extend this resource controller to add a method Allocate_N which
takes an integer parameter N and then allocates N resources. The
extension is subject to the following additional requirements:
5. Calls to Allocate_N are accepted only when there are at least
N available resources.
6. Calls to Deallocate must be serviced before calls to Allocate
or Allocate_N.
The additional constraint that calls must be serviced in a FIFO_Within_Priorities
fashion is ignored here. (Mitchell and Wellings, 1996) also do not implement
this, and in Ada 95, it would be done through pragmas.
Note that this specification is flawed, and the implementation shown in
(Mitchell and Wellings, 1996) also exhibits this flaw: if Deallocate is called
when no resources are allocated, the resource controller will deadlock and not
service any calls to Deallocate, Allocate, or Allocate_N. In this
implementation, this has been corrected implicitly, because calling Deallocate when no
resources are allocated is viewed as an error and an exception is raised.
Requirement 5 is implemented by requeueing to Wait For N if not enough
resources are available.
Requirement 6 is implicitly fulfilled because calls to Deallocate are never
queued since Deallocate is implemented as a procedure.
with Rsc_Controller; use Rsc_Controller;
package Advanced_Controller is
   protected type Advanced_Resource_Controller is
      new Simple_Resource_Controller with
      entry Allocate_N (N : in Natural);
      procedure Deallocate;
      -- Ada-specific anomaly: because barriers cannot access parameters,
      -- we must also override this method so that we can set 'Changed'
      -- (see below).
   private
      entry Allocate_N when
         Free > 0 and not Locked and Hold'Count = 0;   -- req. 1
      -- Note: Ada does not allow access to parameters in a barrier (purely
      -- for efficiency reasons). Such cases must in Ada always be imple-
      -- mented by using internal suspension of the method through a
      -- requeue statement. Everything below is just necessary overhead
      -- in Ada 95 to implement the equivalent of having access to
      -- parameters in barriers.

      Current_Queue : Boolean := False;
      -- Indicates which of the two 'Wait_For_N' entry queues is the one
      -- that currently shall be used. (Two queues are used: one queue
      -- is used when trying to satisfy requests, requests that cannot
      -- be satisfied are requeued to the other. Then, the roles of the
      -- two queues are swapped. This avoids problems when the calling
      -- tasks have different priorities.)

      Changed : Boolean := False;
      -- Set when something is deallocated. Needed for correct
      -- implementation of 'Allocate_N' and 'Wait_For_N'. Reset when all
      -- outstanding calls to these routines have been serviced.
      -- 'Changed' actually encodes the history information "Wait_For_N
      -- is only accepted after a call to 'Deallocate'".

      entry Wait_For_N (for Queue in Boolean) (N : in Natural);
      entry Wait_For_N (for Queue in Boolean) when
         not Locked and Hold'Count = 0 and Changed;
      -- This private entry is used by 'Allocate_N' to requeue to if
      -- less than N resources are currently available.
   end Advanced_Resource_Controller;
end Advanced_Controller;
package body Advanced_Controller is
   protected body Advanced_Resource_Controller is
      procedure Deallocate is
         -- Overridden to account for new history information encoding
         -- needed for access to parameter in the barrier of Allocate_N.
      begin
         Changed := True;
      end Deallocate;

      entry Allocate_N (N : in Natural) when
         Free > 0 and
         not Locked and
         Hold'Count = 0 is
      begin
         if Free >= N then
            Free  := Free - N;
            Taken := Taken + N;
         else
            requeue Wait_For_N (Current_Queue);
         end if;
      end Allocate_N;

      entry Wait_For_N (for Queue in Boolean) (N : in Natural) when
         not Locked and Hold'Count = 0 and
         Changed is
      begin
         Current_Queue := not Current_Queue;
         Changed := False;
         if Free >= N then
            Free  := Free - N;
            Taken := Taken + N;
         else
            requeue Wait_For_N (not Queue);
         end if;
      end Wait_For_N;
   end Advanced_Resource_Controller;
end Advanced_Controller;
10 Conclusions
This paper has argued that Ada 95's model of concurrency is not well integrated
with its object-oriented model. It has focussed on the issue of how to make
protected types extensible and yet avoid the pitfalls of the inheritance anomaly.
The approach adopted has been to introduce the notion of a tagged protected
type which has the same underlying philosophy as normal tagged types.
Although the requirements for extensible protected types are easily articu-
lated, there are many potential solutions. The paper has explored the major
issues and, where appropriate, has made concrete proposals. Ada is an extremely
expressive language with many orthogonal features. The paper has
shown that the introduction of extensible protected types does not undermine
that orthogonality, and that the proposal fits in well with limited private types,
generics and normal tagged types.
The work presented here, however, has not been without its difficulties. The
major one is associated with overridden entries. It is a fundamental principle
of object-oriented programming that a child object can build upon the functionality
provided by its parent. The child can call its parent to access that
functionality, and therefore extend it. In Ada, calling an entry is a potentially
suspending operation and this is not allowed from within a protected object.
Hence, overriding entries gives a conflict between the object-oriented and the
protected type models. Furthermore, Ada allows an entry to requeue a call to
another entry. When the requeued entry is serviced, control is not returned
to the entry which issued the requeue request. Consequently, if a parent entry
issues a requeue, control is never returned to the child. This again causes a
conflict with the object-oriented programming model, where a child is allowed
to undertake post-processing after a parent call. The paper has discussed these
conflicts in detail and has proposed a range of potential compromise solutions.
Ada 95 is an important language - the only international standard for object-oriented
real-time distributed programming. It is important that it continues to
evolve. This paper has tried to contribute to the growing debate of how best to
fully integrate the protected type model of Ada into the object-oriented model.
It is clear that introducing extensible protected types is a large change to Ada
and one that is only acceptable at the next major revision of the language. Many
of the complications come from the ability to override entries. One possible
major simplification of the proposal made here would be not to allow these
facilities. Entries would be considered 'final' (using Java terminology). Such a
simplification might lead to an early transition path between current Ada and
a more fully integrated version.
Acknowledgements
The authors gratefully acknowledge the contributions of Oliver Kiddle and
Kristina Lundqvist to the ideas discussed in this paper. We also would like to
acknowledge the participants at the 9th International Workshop on Real-Time
Ada Issues who gave us some feedback on some of our initial ideas.
--R
Integrating Inheritance and Synchronisation in Ada9X
Evaluating synchronisation mechanisms
Structured multiprogramming
Concurrency in Ada
Toward a method of object-oriented concurrent program- ming
Parallelism in object-oriented programming languages
Inheritance of synchronization constraints in concurrent object-oriented programming languages
Linearizability: A correctness criterion for concurrent objects
Extended protected types
Concurrent Programming in Java
A behavioral notion of subtyping
DRAGOON: An Ada-based object oriented language for concurrent
Analysis of inheritance anomaly in object-oriented concurrent programming languages
Systematic concurrent object-oriented programming
Extendable dispatchable task communication mechanisms
The classiC programming language and design of synchronous concurrent object oriented languages
Java Thread
The programming language Oberon
Parallelism in object-oriented languages: a survey
Concurrent programming in concurrent Smalltalk
--TR
Object-oriented concurrent programming ABCL/1
Concurrent programming in concurrent Smalltalk
The programming language Oberon
Linearizability: a correctness condition for concurrent objects
Systematic concurrent object-oriented programming
Toward a method of object-oriented concurrent programming
Introducing concurrency to a sequential language
Analysis of inheritance anomaly in object-oriented concurrent programming languages
Integrating inheritance and synchronization in Ada9X
A behavioral notion of subtyping
Concurrency in Ada
Java Threads
Object-oriented software construction (2nd ed.)
Extensible protected types
Concurrency and distribution in object-oriented programming
The ClassiC programming language and design of synchronous concurrent object oriented languages
Extendable, dispatchable task communication mechanisms
Monitors
Structured multiprogramming
Parallelism in Object-Oriented Languages
Inheritance of Synchronization Constraints in Concurrent Object-Oriented Programming Languages
Evaluating synchronization mechanisms
--CTR
Rodrigo García García , Alfred Strohmeier, Experiences report on the implementation of EPTs for GNAT, ACM SIGAda Ada Letters, v.XXII n.4, December 2002
Albert M. K. Cheng , James Ras, The implementation of the Priority Ceiling Protocol in Ada-2005, ACM SIGAda Ada Letters, v.XXVII n.1, p.24-39, April 2007
Knut H. Pedersen , Constantinos Constantinides, AspectAda: aspect oriented programming for ada95, ACM SIGAda Ada Letters, v.XXV n.4, p.79-92, December 2005
Gustaf Naeser , Kristina Lundqvist , Lars Asplund, Temporal skeletons for verifying time, ACM SIGAda Ada Letters, v.XXV n.4, p.49-56, December 2005
Aaron W. Keen , Tingjian Ge , Justin T. Maris , Ronald A. Olsson, JR: Flexible distributed programming in an extended Java, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.3, p.578-608, May 2004 | concurrency;concurrent object-oriented programming;ada 95;inheritance anomaly |
353944 | Data Dependence Analysis of Assembly Code. | Determination of data dependences is a task typically performed with high-level language source code in today's optimizing and parallelizing compilers. Very little work has been done in the field of data dependence analysis on assembly language code, but this area will be of growing importance, e.g., for increasing instruction-level parallelism. A central element of a data dependence analysis in this case is a method for memory reference disambiguation which decides whether two memory operations may access (or definitely access) the same memory location. In this paper we describe a new approach for the determination of data dependences in assembly code. Our method is based on a sophisticated algorithm for symbolic value propagation, and it can derive value-based dependences between memory operations instead of just address-based dependences. We have integrated our method into the Salto system for assembly language optimization. Experimental results show that our approach greatly improves the precision of the dependence analysis in many cases. | Introduction
The determination of data dependences is nowadays
most often done by parallelizing and optimizing compiler
systems on the level of source code, e.g. C or FORTRAN
90, or some intermediate code, e.g. RTL [21]. Data
dependence analysis on the level of assembly code aims
at increasing instruction level parallelism. Using various
scheduling techniques like list scheduling [6], trace
scheduling [9], or percolation scheduling [17], a new sequence
of instructions is constructed with regard to data and
control dependences, and properties of the target processor.
Most of today's instruction schedulers only determine data
dependences between register accesses and consider memory
to be one cell, so that every two memory accesses must
be assumed as data dependent. Thus, analyzing memory accesses
becomes more important while doing global instruction
scheduling [3]. In this paper, we describe an intraprocedural
value-based data dependence analysis, (see Maslov
[14] for details about address-based and value-based data
dependences), implemented in the context of the SALTO
tool [19]. SALTO is a framework to develop optimization
and transformation techniques for various processors. The
user describes the target processor using a mixture of RTL
and C language. A program written in assembly code can
then be analyzed and modified using an interface in C++.
SALTO has already implemented some kind of conflict analysis
[12], but their approach only determines address-based
dependences between register accesses and assumes memory
to be one cell.
When analyzing data dependences in assembly code we
must distinguish between accesses to registers and those to
memory. In both cases we derive data dependence from
reaching definitions and reaching uses information that we
obtain by a monotone data flow analysis. Register analysis
makes no complications: the set of used and defined
registers in one instruction can be established easily, because
registers do not have aliases. Therefore, determination
of data dependences between register accesses is not in
the scope of this paper. For memory references we have to
solve the aliasing problem [22]: whether two memory references
access the same location. See Landi and Ryder [11]
for more details on aliasing.
We have to prove that two references always point to the
same location (must-alias) or must show that they never refer
to the same location. If we cannot prove this, we would
like to have a conservative approximation of all alias pairs
(may-alias), i.e., memory references that might refer to the
same location. To derive all possible addresses that might
be accessed by one memory instruction, we use a symbolic
value propagation algorithm. To compare memory
addresses we use a modification of the GCD test [23].
Experimental results indicate that in many cases our
method can be more accurate in the determination of data
dependences than other previous methods.
2. Programming Model and Assumptions
In the following we assume a RISC instruction set.
Memory is only accessed through load (ld) and store (st)
instructions. Memory references can only have the form of a
base register plus a constant offset. Use of a scaling factor is not provided in this
model, but adding one would not be difficult. Memory accesses
normally read or write a word of four bytes. For
global memory access, the address (which is a label) first
has to be moved to a register. Then it can be read or written
using a memory instruction. Initialization of registers
or copying the contents of one register to another can be
done using the mv instruction. All logic and arithmetic
operators have the form op src1,src2,dest.
The operation op is executed on operand src1 and operand src2;
the result is written to register dest. An operand can
be a register or an integer constant. Control flow is modeled
using unconditional (b) or conditional (bcc) branch
instructions. Runtime-memory can be divided into three
classes [1]: static or global memory, stack, and heap mem-
ory. When an address unequivocally references one of these
classes, some simple memory reference disambiguation is
feasible (see section 3). Unfortunately it is not easy to prove
that an address always references the stack, when no inter-procedural
analysis is done from which one can obtain information
about the frame pointer. In our approach we do
not make such assumptions.
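To make this programming model concrete, the following sketch shows one possible encoding of the instruction set in Python; the class and field names are our own and are not part of SALTO or of the original paper.

from dataclasses import dataclass
from typing import Optional, Union

Operand = Union[str, int]   # a register name such as "%o1" or an integer constant

@dataclass
class Load:                 # ld [base + offset], dest
    base: str
    offset: int
    dest: str

@dataclass
class Store:                # st src, [base + offset]
    src: str
    base: str
    offset: int

@dataclass
class Move:                 # mv src, dest
    src: Operand
    dest: str

@dataclass
class ArithOp:              # op src1, src2, dest
    op: str                 # e.g. "add", "sub", "or"
    src1: Operand
    src2: Operand
    dest: str

@dataclass
class Branch:               # b target  /  bcc target
    target: str
    condition: Optional[str] = None   # None for an unconditional branch

body = [
    ArithOp("add", "%fp", -20, "%o1"),   # add %fp,-20,%o1
    Store("%o2", "%o1", -4),             # st  %o2,[%o1-4]
    Load("%fp", -20, "%o3"),             # ld  [%fp-20],%o3
]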
3. Alias Analysis of Assembly Code
In this section we briefly review techniques for alias
analysis of memory references. Doing no alias analysis
leads to the assumption that a store instruction is always
dependent on a load or store instruction. A common technique
in compile-time instruction schedulers is alias analysis
by instruction inspection, where the scheduler looks at
two instructions to see if it is obvious that different memory
locations are referenced. With this technique independence
of the memory references in Fig. 1 (a) and (b) can be
proved, because the same base register but different offsets
are used (a), or different memory classes are referenced (b).
Fig. 1 (c) shows an example where this technique fails. By
looking only at register %o1 it must be assumed that this register
can point to any memory location, and therefore we
have to determine that S3 is data dependent on S2. This local
analysis takes no notice of the definition of register %o1
in the first statement. This example makes it clear that a
two-fold improvement is needed. First, we need to save information
about address arithmetic, and secondly we need
some kind of copy propagation. Provided that we have such
an algorithm, it would be easy to show that in statement S2
register %o1 has the value %fp - 20 and therefore there
is no overlap between the 4 byte memory blocks starting at
%fp - 24 and %fp - 20.
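The check performed by instruction inspection can be sketched as follows; representing a memory operand as a base register plus a constant offset and an access width is our own simplification.

def may_overlap_by_inspection(base1, off1, base2, off2, width=4):
    """Conservative disambiguation by instruction inspection.

    base1/base2 are base register names (or labels for global accesses),
    off1/off2 are constant byte offsets, width is the access size in bytes.
    Returns False only if the two accesses provably do not overlap.
    """
    if base1 == base2:
        # Same base register: compare the constant byte ranges.
        return not (off1 + width <= off2 or off2 + width <= off1)
    # Different bases: without further knowledge we must assume overlap,
    # unless the bases are known to address different memory classes
    # (e.g. stack via %fp versus a global label), which a real
    # implementation would check here.
    return True

# Fig. 1 (a): same base %fp, offsets -4 and -8 -> provably disjoint.
print(may_overlap_by_inspection("%fp", -4, "%fp", -8))   # False
# Fig. 1 (c): bases %o1 and %fp look unrelated -> assumed to overlap.
print(may_overlap_by_inspection("%o1", -4, "%fp", -20))  # True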
4. Symbolic Value Set Propagation
In this section we present an extension of the well-known
constant propagation algorithm [23]. Our target is the determination
of possible symbolic value sets (contents) for
each register and each program statement. In a subsequent
step of the analysis this information will be used for the determination
of data dependences between storage memory
accesses, i.e., store and load instructions. The calculation
of symbolic value sets is performed by a data flow
analysis [10]. Therefore, we have to model our problem
as a data flow framework (L, ∪, F), where L is called the
data flow information set, ∪ is the union operator, and F
is the set of semantic functions. If the semantic functions
are monotone and (L, ∪) forms a bounded semi-lattice with
a one element and a zero element, we can use a general iterative
algorithm [10] that always terminates and yields the
least fix-point of the data flow system.
4.1. Data Flow Information Set
Our method describes the content of a register in the
form of symbolic values. Therefore, we have to define the
initialization points of program P. A statement j is called an
initialization point R_i,j of P if j is a load instruction that
defines the content of r_i, a call node, or an entry node of a
procedure. The finite set of all initialization points of P is
given by init(P). The finite set SV of all symbolic values
consists of the symbol ⊥ and all proper symbolic values, which
are polynomials of the form
a_0 + Σ a_i,j · R_i,j   (a_0, a_i,j ∈ Z, R_i,j ∈ init(P)).
A variable R_i,j of a symbolic value represents the value
that is stored in register r_i at initialization point R_i,j. We
use the value ⊥ when we cannot make any assumptions on
the content of a register.
As we are performing a static analysis, we are not able to
infer the direction of branches taken during program execu-
tion. Therefore, it could happen that for a register r i , more
than one symbolic value is valid at a specific program point.
As a consequence, we must then describe possible register
contents by the so-called k-bounded symbolic value sets.
The limitation of the sets is to ensure the termination of
the analysis. Let k ∈ N be arbitrary, but fixed. Then a
k-bounded symbolic value set is a set of at most k proper symbolic values, or the special value ⊥.
1: ld [%fp-4],%o1
2: st %o2,[%fp-8]
1: ld [%fp-4],%o1
2: sethi %hi(.LLC0),%o2
3: st %o3,[%o2+%lo(.LLC0)]
1: add %fp,-20,%o1
2: st %o2,[%o1-4]
3: ld [%fp-20],%o3
(a) (b) (c)
Figure 1. Sample code for different techniques of alias detection: (a) and (b) can be solved by
instruction inspection, whereas (c) needs a sophisticated analysis.
In the following let REGS stand for the set of all registers.
We call a total map α that assigns to each register a
k-bounded symbolic value set a state. By this
means the data flow information set we use for the calculation
of symbolic value sets is given by the set of possible
states SVS.
4.2. Union Operator
If a node in a control flow graph has more than one pre-
decessor, we must integrate all information stemming from
these predecessors. In data flow frameworks, joining paths
in the flow graph is implemented by the union operator. Let
α_1 and α_2 be states; then the union operator ∪ of our data flow problem
is defined as shown in Fig. 2. The union operator ∪ is
a simple componentwise union of sets. Additionally, to ensure
the well-definition of the operator, we map arising sets
with cardinality greater than k to the special value ⊥. We
have proven that, for a fixed k ∈ N, the set of states SVS in
conjunction with this union operator constitutes a bounded
semi-lattice with a one element and a zero element.
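A minimal sketch of such a k-bounded, componentwise union is shown below; the names K and BOTTOM and the string encoding of symbolic values are our own choices, with BOTTOM standing for ⊥.

K = 4                 # bound on the cardinality of symbolic value sets
BOTTOM = None         # our encoding of the special value ⊥ (no information)

def join_sets(a, b):
    """Union of two k-bounded symbolic value sets."""
    if a is BOTTOM or b is BOTTOM:
        return BOTTOM
    merged = a | b                      # componentwise set union
    return BOTTOM if len(merged) > K else merged

def join_states(alpha1, alpha2):
    """Componentwise union of two states (maps register -> value set)."""
    return {reg: join_sets(alpha1[reg], alpha2[reg]) for reg in alpha1}

# Example: joining information from two control-flow predecessors.
s1 = {"%o1": frozenset({"%fp - 20"}), "%o2": BOTTOM}
s2 = {"%o1": frozenset({"%fp - 24"}), "%o2": frozenset({"42"})}
print(join_states(s1, s2))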
4.3. Semantic Functions
In the control flow graph chosen for the analysis, each
node stands for a uniquely labeled program statement.
Therefore we can unambiguously assign a semantic function
to each of the nodes; this semantic function will be
used to update the symbolic value sets assigned to each reg-
ister. In Fig. 3 we specify some semantic functions used by
our method. In this specification, α stands for a state before
the execution of the semantic function, and α' for the corresponding
state after the execution of the semantic function.
After the execution of an initialization point R_i,j, we
have no knowledge about the defined value of register r_i.
The main idea of our method is to describe the register content
of r i after such a definition as a symbolic value. As
mentioned before, entry nodes of a procedure as well as load
instructions are initialization points. The semantic function
of an entry node n initializes the symbolic value set for each
register r i with its corresponding initialization point R i;n .
By doing so, after the execution of n, the symbolic value
R i;n stands for the value which is stored in r i before the
execution of any procedure code.
The semantic function assigned to a load instruction initializes
the symbolic value set of the register, whose value
will be defined by the operation, similar to the description
above of the corresponding initialization point. As opposed
to entry nodes, such an initialization is only valid if the initialization
point is safe. We call an initialization point safe
if the corresponding statement is not part of a loop. In con-
trast, an initialization point inside a loop is called unsafe.
The problem with unsafe initialization points is that the
value of the affected register may change at each loop itera-
tion. Therefore, we cannot make a safe assumption about its
initialization value. To obtain a safe approximation in such
a case the symbolic value set of the register is set to the special
value ⊥. In Fig. 3 we use the operator ⊕, which is an
extension of the add operator to polynomials. The result of
an application A ⊕ B is a pairwise addition
of the terms of A and B. To ensure the well-definition of
the operator, resulting sets with cardinality greater than k
will be mapped to ⊥. Further, if one of the operands has
the value ⊥, the operator returns ⊥. As we have proven that
the semantic functions are monotone, the general iterative
algorithm [10] can be used to solve our data flow problem.
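The following sketch illustrates transfer functions in the spirit of Fig. 3; the polynomial representation (a dictionary from variable names to coefficients) and all helper names are our own, and the bound k is kept small for readability.

K = 4
BOTTOM = None   # stands for the special value ⊥ (no information)

def poly_add(p, q):
    # Add two polynomials given as {variable: coefficient} dicts,
    # the constant term using the key "".
    r = dict(p)
    for var, c in q.items():
        r[var] = r.get(var, 0) + c
    return frozenset(r.items())          # hashable, so it can be put in a set

def oplus(A, B):
    # Pairwise addition of two symbolic value sets (the operator ⊕ of Fig. 3).
    if A is BOTTOM or B is BOTTOM:
        return BOTTOM
    result = {poly_add(dict(a), dict(b)) for a in A for b in B}
    return BOTTOM if len(result) > K else result

def const(c):
    return frozenset({("", c)})          # polynomial consisting of a constant only

def init_point(reg, node):
    return frozenset({("R_%s,%s" % (reg, node), 1)})

def sem_entry(node, regs):               # entry node: every register gets R_i,n
    return {r: {init_point(r, node)} for r in regs}

def sem_mv(alpha, a, rj):                # mv a,%rj
    alpha2 = dict(alpha); alpha2[rj] = {const(a)}; return alpha2

def sem_add(alpha, ri, rj, rm):          # add %ri,%rj,%rm
    alpha2 = dict(alpha); alpha2[rm] = oplus(alpha[ri], alpha[rj]); return alpha2

def sem_ld(alpha, rj, node, safe):       # ld [mem],%rj
    alpha2 = dict(alpha)
    alpha2[rj] = {init_point(rj, node)} if safe else BOTTOM
    return alpha2

# Example: model "add %fp,-20,%o1" by first moving the constant to a register.
alpha = sem_entry(0, ["%fp", "%o1", "%o2"])
alpha = sem_mv(alpha, -20, "%o2")
alpha = sem_add(alpha, "%fp", "%o2", "%o1")
print(alpha["%o1"])   # one polynomial representing R_%fp,0 - 20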
5. Improvement of Value Set Propagation
Without limiting the cardinality of symbolic value sets
our propagation algorithm may lead to infinite sets. Registers
whose contents could change at each loop iteration
are responsible for this phenomenon. The calculated symbolic
value set for these registers comprises only the special
value ⊥. Such an inaccuracy in the analysis cannot be accepted
in practice. Therefore, we propose an improvement
of the symbolic value set propagation algorithm by using
NSV registers.
5.1. NSV Registers
In this section we introduce the concept of non-symbolic
value registers, hereafter called NSV registers. A NSV register
of a loop G' is a register r_i used in G' whose content
Figure 2. The union operator for symbolic value sets.

n: entry
   α'(r_i) := {R_i,n} for every register r_i
n: mv a,%rj          (copy a ∈ Z into register r_j)
   α'(r_j) := {a}
n: add %ri,%rj,%rm   (add the values of r_i and r_j and store the result in r_m)
   α'(r_m) := α(r_i) ⊕ α(r_j)
n: ld [mem],%rj      (load the value at address mem into register r_j)
   α'(r_j) := {R_j,n} if R_j,n is a safe initialization point, and ⊥ otherwise
Figure 3. Semantic functions for some instructions (registers that are not mentioned keep their previous symbolic value sets).
can change. The modified propagation algorithm works as
follows:
1. First, we have to determine the NSV registers for all
loops of the program. The sets of NSV registers contain,
among other things, induction registers and registers
which will be defined by a load instruction in G'.
2. Thereafter, for each NSV register r_i we insert additional
nodes into the control flow graph. At the beginning
of the loop body we attach a statement n': init r_i,
where n' is a unique and unused statement number. At
the end of the loop body, and before each node of the
control flow graph that can be reached after execution
of the loop, we insert a statement setbot r_i.
3. After this, we perform the symbolic value set propagation
on the modified control flow graph. Further, for
the inserted nodes we have defined semantic functions
which set the symbolic value set of r_i to the initialization
point {R_i,n'} (init) resp. to ⊥ (setbot). Now we
consider every initialization point as safe.
The improved version of our algorithm has two advantages:
The number of iterations of the general iterative algorithm,
which we use for data flow analysis, will be reduced. Ad-
ditionally, we can compare memory addresses even though
they depend on NSV registers.
5.2. Determination of NSV Registers
In the following let G' be a loop and S a statement inside
of G'. A statement S is called loop invariant if its destination
register r_i is defined with the same value in each loop
iteration. The determination of loop invariant statements of
G' can be performed in two steps [1]:
1. Mark all statements as loop invariant which only use
constants as operands or operands defined outside of
G'.
2. Iteratively, mark all untagged statements of G' as loop
invariant which only use operands that are defined only
by loop invariant statements. The algorithm terminates
if no further statement can be marked.
By using the concept of loop invariants we can determine
the NSV registers of a loop G' in a simple way. For this,
a register r_i is a NSV register in G' iff r_i is defined by a
statement in G' that is not a loop invariant statement in G'.
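A sketch of this marking procedure and of the resulting NSV register set is given below; the Stmt record with dest and uses fields is our own encoding of loop statements, and loads are conservatively treated as never loop invariant, so that their destinations always become NSV registers.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Stmt:
    name: str
    dest: Optional[str]          # register defined by the statement (None for st)
    uses: Tuple = ()             # registers or integer constants read
    is_load: bool = False        # loads are never treated as loop invariant here

def loop_invariant_statements(loop_stmts):
    """Two-step marking of loop invariant statements (cf. [1])."""
    invariant = set()
    def_in_loop = {s.dest for s in loop_stmts if s.dest}
    changed = True
    while changed:
        changed = False
        # registers that are defined in the loop only by invariant statements
        only_inv = {r for r in def_in_loop
                    if all(s in invariant for s in loop_stmts if s.dest == r)}
        for s in loop_stmts:
            if s in invariant or s.is_load:
                continue
            if all(isinstance(u, int) or u not in def_in_loop or u in only_inv
                   for u in s.uses):
                invariant.add(s)
                changed = True
    return invariant

def nsv_registers(loop_stmts):
    """A register is NSV iff it is defined by a non-invariant loop statement."""
    inv = loop_invariant_statements(loop_stmts)
    return {s.dest for s in loop_stmts if s.dest and s not in inv}

# A fragment of the loop of Fig. 4.
loop = [
    Stmt("3", "%r3", ("%r1",), is_load=True),   # ld  [%r1-40],%r3
    Stmt("5", "%r4", ("%r1",), is_load=True),   # ld  [%r1-80],%r4
    Stmt("8", None,  ("%r3", "%r1")),           # st  %r3,[%r1-40]
    Stmt("9", "%r1", ("%r1", 4)),               # add %r1,4,%r1
]
print(nsv_registers(loop))                      # %r1, %r3 and %r4 are NSV registers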
Fig. 4 shows the results of an improved symbolic value
set propagation for a simple program. The NSV registers
of the loop are %r1, %r2, %r3, and %r4. For each NSV
register an init instruction resp. setbot instructions are
inserted into the program. As a consequence, the data flow
algorithm terminates after the third iteration. The concept of
NSV registers allows a more accurate analysis of memory
references inside the loop. Without NSV registers the value
of register %r1 would have been set to ⊥ eventually. In
contrast, the improved symbolic value propagation always
leads to proper values.
6. Data Dependence Analysis
The determination of data dependences can be achieved
by different means. The most commonly used is the calculation
of reaching definitions resp. reaching uses for all
statements. This can be described as the problem of de-
termining, for a specific statement and memory location,
all statements where the value of this memory location has
been written last resp. has been used last. Once the reaching
definitions and uses have been determined, we are able
to infer def-use, def-def, and use-def associations; a def-use
pair of statements indicates a true dependence between
them, a def-def pair an output dependence, and an use-def
pair an anti-dependence. For scalar variables the determination
of reaching definitions can be performed by a well-known
standard algorithm described in [1]. To use this algorithm
for data dependence analysis of assembly code we
have to derive the may-alias information, i.e., we have to
check whether two storage accesses could refer to the same
storage object. To improve the accuracy of the data dependence
analysis the must-alias information is needed, i.e., we
have to check whether two storage accesses refer always to
the same storage object.
To achieve all this information we need a mechanism
which checks whether the index expressions X and Y of two storage
accesses could represent the same value. We
solve this problem by applying a modified GCD test [23].
Therefore, we replace the appearances of registers in X and
Y with elements of their corresponding symbolic value sets,
and check for all possible combinations whether the equation
X = Y has a solution.
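Such a check can be sketched as a solvability test for the linear Diophantine equation X - Y = 0; the dictionary encoding of the address expressions below is our own.

from math import gcd
from functools import reduce

def may_be_equal(x, y):
    """Modified GCD test: can the linear expressions x and y be equal?

    x and y are dicts mapping symbolic variable names to integer
    coefficients; the constant term uses the key "".
    """
    diff = dict(x)
    for var, c in y.items():
        diff[var] = diff.get(var, 0) - c
    const = diff.pop("", 0)
    coeffs = [c for c in diff.values() if c != 0]
    if not coeffs:
        return const == 0          # same symbolic part: equal iff constants match
    g = reduce(gcd, (abs(c) for c in coeffs))
    return const % g == 0          # solvable in integers iff g divides the constant

# Instructions 5 and 8 of Fig. 4, same iteration: R_1,12 - 80 vs R_1,12 - 40.
print(may_be_equal({"R_1,12": 1, "": -80}, {"R_1,12": 1, "": -40}))   # False
# Different iterations: the two occurrences of %r1 become distinct variables.
print(may_be_equal({"R_1,12": 1, "": -80}, {"R_1,12'": 1, "": -40}))  # True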
For an example, we refer to Fig. 4. Obviously, instruction
5 is a reaching use of memory in instruction 8. The
derived memory addresses are R_1,12 - 80 and R_1,12 - 40,
respectively. With the assumption that both instructions are
executed in the same loop iteration, we can prove that different
memory addresses will be accessed. This means there
is no loop-independent data dependence between these two
instructions.
When the instructions are executed in different loop iterations,
R_1,12 may have different values. The modified
GCD test shows that both instructions may reference the
same memory location. Therefore, we have to assume a
loop-carried data dependence between instructions 5 and 8.
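Putting the pieces together, deriving value-based memory dependences from reaching definitions/uses and the alias test can be sketched as follows; the interface (reaching_defs, reaching_uses, may_alias, and the statement fields) is our own simplification.

def memory_dependences(stmts, reaching_defs, reaching_uses, may_alias):
    """Derive value-based memory dependences.

    reaching_defs(s) / reaching_uses(s): store resp. load statements whose
    written/read address may still be live just before statement s;
    may_alias(a, b): conservative test built on the modified GCD test.
    Returns triples (kind, source, sink).
    """
    deps = []
    for s in stmts:
        if s.kind == "load":                                      # def-use: true dep.
            deps += [("true", d, s) for d in reaching_defs(s)
                     if may_alias(d.addr, s.addr)]
        elif s.kind == "store":
            deps += [("output", d, s) for d in reaching_defs(s)   # def-def
                     if may_alias(d.addr, s.addr)]
            deps += [("anti", u, s) for u in reaching_uses(s)     # use-def
                     if may_alias(u.addr, s.addr)]
    return deps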
7. Implementation and Results
The method for determining data dependences in assembly
code presented in the last sections was implemented
as a user function in SALTO on a Sun SPARC 10 workstation
running Solaris 2.5. Presently, only the assembly code
for the SPARC V7 processor can be analyzed, but an extension
to other processors will require minimal technical
effort. Results of our analysis can be used by other tools in
SALTO.
For evaluation of our method we have taken a closer look
at two aspects:
1. Comparison of the number of data dependences using
our method against the method implemented in
SALTO; this shows the difference between address-based
and value-based dependence analysis concerning
register accesses.
2. Comparison between the number of data dependences
using address-based and value-based dependence analysis
for memory accesses.
As a sample we chose 160 procedures out of the sixth public
release of the Independent JPEG Group's free JPEG
software, a package for compression and decompression of
JPEG images. We distinguish between the following four
levels of accuracy: In level 1 we determine address-based
dependences between register accesses, memory is modeled
as one cell, so that every pair of memory accesses is
assumed to be data dependent. Level 2 models the memory
the same way as in level 1, and does value-based dependence
analysis for register accesses. From level 3 on,
register accesses are determined the same way as in level 2,
and we analyze memory accesses with our symbolic value
set propagation, but in level 3 the derivation of dependence
is address-based. In level 4 we perform value-based dependence
analysis. Level 1 analysis is performed by SALTO
[19], but SALTO does not consider control flow. Two instructions
are assumed to be data dependent, even if they
cannot be executed one after another. Level 2 is a common
technique used by today's instruction schedulers, e.g. the
    14  init %r3
     3  ld  [%r1-40],%r3
     5  ld  [%r1-80],%r4
     8  st  %r3,[%r1-40]
     9  add %r1,4,%r1
    19  setbot %r4
        ble .LL11
    22  setbot %r4
Figure 4. Symbolic value set propagation (symbolic value sets after the first and the second iteration).
Registers r_i that are not mentioned have the value {R_i,0}.
one in gcc [21] or the one used by Larus et al. [20]. Systems
that do some kind of value propagation, but only determine
address-based dependences, are classified in level
3. In section 8 we will have a closer look at other techniques
for value propagation. Our method is classified in
level 4. As yet, we know of no other method which also determines
value-based dependences. The table only contains
those 39 procedures in which an improvement, i.e., less de-
pendences, was noticeable from level 3 to level 4. Fig. 5
shows the number of dependences (sum of true, anti-, and
output dependences), where we distinguish different levels
of accuracy, as well as register and memory accesses. Fig.
5 also shows in the two rightmost columns the effect of a
value-based analysis against an address-based analysis. For
every procedure it is clear to see the proportion of data dependences
that our method disproves.
8. Related Work
So far, only some work has been done in the field of
memory reference disambiguation. Ellis [8] presented a
method to derive symbolic expressions for memory addresses
by chasing back all reaching definitions of a symbolic
register, the expression is simplified using rules of al-
gebra, and two expressions are compared using the GCD
test. The method is implemented in the Bulldog compiler,
but it works on an intermediate level close to high-level lan-
guage. Other authors were inspired by Ellis, e.g. Lowney et
al. [13], Böckle [4], and Ebcioğlu et al. [15]. The approach
presented by Ebcioğlu is implemented in the Chameleon
compiler [16] and works on assembly code. First, a procedure
is transformed into SSA form [5], and loops are nor-
malized. For gathering possible register values the same
Procedure Name LOC Level 1 Level 2 Level 3 Level 4 Improvement
Reg. Mem. Reg. Mem. Reg. Mem. Reg. Mem. Reg. Mem.
test3function
is shifting signed 33 178 38 87 38 87 31 87 22 51% 29%
jpeg CreateCompress 126 4273 1945 423 1945 423 1664 423 1619 90% 3%
jpeg suppress tables 74 1127 396 143 396 143 229 143 184 87% 20%
jpeg finish compress 144 10432 2333 1121 2333 1121 2210 1121 2197 89% 1%
emit byte 42 433 214 119 214 119 189 119 184 73% 3%
emit dqt 125 4794 1097 575 1097 575 771 575 726 88% 6%
emit dht 134 5219 1461 589 1461 589 980 589 870 89% 11%
emit sof 100 6389 1282 661 1282 661 1087 661 1077 90% 1%
emit sos 100 5252 1285 574 1285 574 873 574 840 89% 4%
any marker 41 561 175 184 175 184 110 184 106 67% 4%
write frame header 142 4309 1368 679 1368 679 870 679 744 84% 14%
scan header 86 3656 626 934 626 934 486 934 459 74% 6%
tables only 83 2495 390 716 390 716 324 716 267 71% 18%
jpeg abort 38 268 84 93 84 93 67 93 63 65% 6%
jpeg CreateDecompress 124 4878 1972 507 1972 507 1716 507 1659 90% 3 %
jpeg start decompress 135 4097 902 674 902 674 860 674 856 84% 1%
post process 2pass 111 2583 1385 278 1385 278 907 278 878 89% 3%
jpeg read coefficients 113 3783 897 538 897 538 853 538 851 86% 1%
select file name 104 5631 1146 473 1146 473 714 473 644 92% 10%
jround up 20
jcopy sample rows
read 1 byte 48 653 84 186 84 186 70 186 67 72% 4%
read 2 bytes 93 3115 360 555 360 555 297 555 285 82% 4%
next marker 42 567 137 305 137 305 112 305 98 46% 12%
first marker 84 1989 259 360 259 360 197 360 187 82% 5%
process COM 107 6901 979 1147 979 1147 697 1147 592 83% 15%
process SOFn 75 4545 729 670 729 670 601 670 598 85% 1%
scan JPEG header 34 804 82 306 82 306 78 306 77 62% 1%
read byte 43 415 105 129 105 129 102 129 97 69% 5%
read colormap 67 2221 668 305 668 305 583 305 568 86% 3%
read non rle pixel 40 368 93 125 93 125 84 125 83 66% 1%
read rle pixel 80 976 289 268 289 268 280 268 279 73% 1%
jcopy sample rows
flush packet 44 468 187 131 187 131 187 131 182 72% 3%
start output tga 215 12870 3272 974 3272 974 2937 974 2876 92% 2%
Figure
5. Number of dependences (sum of true, anti-, and output dependences) found in four levels
of accuracy. The results are divided into register-based and memory-based dependences. The two
rightmost columns show the improvement of a value-based dependence analysis on an address-based
dependence analysis.
technique as in the Bulldog compiler is used. If a register
has multiple definitions, the algorithm described in [15] can
chase all reaching definitions, whereas the concrete implementation
in the Chameleon compiler seems to not support
this. Comparing memory addresses makes use of the GCD
test and the Banerjee inequalities [2, 23]. The results of
their method are alias information. Debray et. al. [7] present
an approach close to ours. They use address descriptors to
represent abstract addresses, i.e., addresses containing symbolic
registers. An address descriptor is a pair (I, M),
where I is an instruction and M is a set of mod-k residues.
M denotes a set of offsets relative to the register defined in
instruction I . Note that an address descriptor only depends
on one symbolic register. A data flow system is used to
propagate values through the control flow graph. mod-k
sets are used because a bounded semi-lattice is needed (in their
tests, k = 64). However, this leads to an approximation
of address representation that makes it impossible to derive
must-alias information. The second drawback is that definitions
of the same register in different control flow paths
are not joined in a set, but mapped to ⊥. Comparing address
descriptors can be reduced to a comparison of mod-k
sets, using some dominator information to handle loops cor-
rectly. They do not derive data dependence information.
9. Conclusions
In this paper we presented a new method to detect data
dependences in assembly code. It works in two steps: First
we perform a symbolic value set propagation using a monotone
data flow system. Then we compute reaching definitions
and reaching uses for register and memory access, and
derive value-based data dependences. For comparing memory
references we use a modification of the GCD test. All
known approaches for memory reference disambiguation
do not propagate values through memory cells. Remember
that loading from memory causes the destination register to
have a symbolic value. When we compare two memory references
we must have in mind that registers defined in different
instructions may have different values, even if they
were loaded from the same memory address. To handle this
situation we plan to extend our method to propagate values
through memory cells.
Software pipelining will be one major application of the
present work in the near future; this family of techniques
overlaps the execution of different iterations from an original
loop, and therefore requires a very precise dependence
analysis with additional information about the distance of
the dependence. Development of this work entails in particular
discovering induction variables, which is possible as a
post-pass, as soon as loop invariants are known. Then coupling
with known dependence tests, such as Banerjee test or
Omega test [18] can be considered.
Finally, extending our method to interprocedural analysis
would lead to a more accurate dependence analysis.
Presently we have to assume that the contents of almost all
registers and all memory cells may have changed after the
evaluation of a procedure call. As a first step, we could
make assumptions about the use of global memory loca-
tions, and we could derive exact dependences.
Acknowledgments
We thank the referees for their comments which helped
in improving this paper.
--R
Dependence analysis for supercomputing.
Global instruction scheduling for superscalar machines.
Exploitation of Fine-Grain Parallelism
An efficient method of computing static single assignment form.
Some experiments in local microcode compaction for horizontal machines.
Alias analysis in executable code.
A Compiler for VLIW Architectures.
Trace scheduling: A technique for global microcode compaction.
Monotone data flow analysis frameworks.
Detecting conflicts between structure accesses.
Lazy array data-flow dependence analysis
A study on the number of memory ports in multiple instruction issue machines.
Compiler/architecture interaction in a tree-based VLIW processor
Percolation scheduling: A parallel compilation technique.
The Omega test: a fast and practical integer programming algorithm for dependence analysis.
SALTO: System for assembly-language transformation and optimization
Instruction scheduling and executable editing.
The GNU instruction scheduler.
Limits of instruction-level parallelism
Supercompilers for parallel and vector computers.
--TR
Compilers: principles, techniques, and tools
Detecting conflicts between structure accesses
Array expansion
An efficient method of computing static single assignment form
Dependence flow graphs: an algebraic approach to program dependencies
Pointer-induced aliasing: a problem taxonomy
Limits of instruction-level parallelism
A practical algorithm for exact array dependence analysis
Abstract interpretation and application to logic programs
Binary translation
Instruction-level parallel processing
The multiflow trace scheduling compiler
A hierarchical approach to instruction-level parallelization
Abstract interpretation
Instruction scheduling and executable editing
A study on the number of memory ports in multiple instruction issue machines
Alias analysis of executable code
Path-sensitive value-flow analysis
Advanced compiler design and implementation
Dependence Analysis for Supercomputing
The Design and Analysis of Computer Algorithms
Walk-Time Techniques
An Exact Method for Analysis of Value-based Array Data Dependences
Data Dependence Analysis of Assembly Code
Percolation Scheduling: A Parallel Compilation Technique
Bulldog
--CTR
Thomas Reps , Gogul Balakrishnan , Junghee Lim, Intermediate-representation recovery from low-level code, Proceedings of the 2006 ACM SIGPLAN symposium on Partial evaluation and semantics-based program manipulation, January 09-10, 2006, Charleston, South Carolina
Saurabh Chheda , Osman Unsal , Israel Koren , C. Mani Krishna , Csaba Andras Moritz, Combining compiler and runtime IPC predictions to reduce energy in next generation architectures, Proceedings of the 1st conference on Computing frontiers, April 14-16, 2004, Ischia, Italy
Patricio Buli , Veselko Gutin, An extended ANSI C for processors with a multimedia extension, International Journal of Parallel Programming, v.31 n.2, p.107-136, April | assembly code;memory reference disambiguation;value-based dependences;monotone data flow frameworks;data dependence analysis |
353947 | Loop Shifting for Loop Compaction. | The idea of decomposed software pipelining is to decouple the software pipelining problem into a cyclic scheduling problem without resource constraints and an acyclic scheduling problem with resource constraints. In terms of loop transformation and code motion, the technique can be formulated as a combination of loop shifting and loop compaction. Loop shifting amounts to moving statements between iterations thereby changing some loop independent dependences into loop carried dependences and vice versa. Then, loop compaction schedules the body of the loop considering only loop independent dependences, but taking into account the details of the target architecture. In this paper, we show how loop shifting can be optimized so as to minimize both the length of the critical path and the number of dependences for loop compaction. The first problem is well-known and can be solved by an algorithm due to Leiserson and Saxe. We show that the second optimization (and the combination with the first one) is also polynomially solvable with a fast graph algorithm, variant of minimum-cost flow algorithms. Finally, we analyze the improvements obtained on loop compaction by experiments on random graphs. | Introduction
Modern computers now exploit parallelism at the instruction level in the microprocessor itself. A
sequential microprocessor is no longer a simple unit that processes instructions following a
unique stream. The processor may have multiple independent functional units, some pipelined
and possibly others not pipelined. To take advantage of these parallel functionalities in the
processor, it is not sufficient to exploit instruction-level parallelism only inside basic blocks. To feed
the functional units, it may be necessary to consider, for the schedule, instructions from more than
one basic block. Finding ways to extract more instruction-level parallelism (ILP) has led to a large
amount of research from both a hardware and a software perspective (see for example [12, Chap. 4]
for an overview).
A hardware solution to this problem is to provide support for speculative execution on control
as it is done on superscalar architectures and/or support for predicated execution as for example in
the IA-64 architecture [8]. A software solution is to schedule statements across conditional branches
whose behavior is fairly predictable. Loop unrolling and trace scheduling have this effect. This
is also what the software pipelining technique does for loop branches: loops are scheduled so
that each iteration in the software-pipelined code is made from instructions that belong to different
iterations of the original loop.
Software pipelining is an NP-hard problem when resources are limited. For this reason, a huge
number of heuristic algorithms have been proposed, following various strategies. A comprehensive
survey is available in the paper by Allan et al. [2]. They classify these algorithms roughly into three
different categories: modulo scheduling [16, 24] and its variations ([23, 13, 18] to quote but a few),
kernel recognition algorithms such as [1, 21], and move-then-schedule algorithms [14, 19, 5]. Briefly
speaking, the ideas of these different types of algorithms are the following. Modulo scheduling
algorithms look for a solution with a cyclic allocation of resources: every initiation interval, the resource
usage repeats. The algorithm thus looks for a schedule compatible with a cyclic resource allocation
modulo a given initiation interval; this value is incremented until a solution is found.
Kernel recognition algorithms simulate loop unrolling and scheduling until a pattern
appears, that is, a point where the schedule becomes cyclic. This pattern will form the kernel
of the software-pipelined code. Move-then-schedule algorithms use an iterative scheme that
alternately schedules the body of the loop (loop compaction) and moves instructions across the
back-edge of the loop as long as this improves the schedule.
The goal of this paper is to explore more deeply the concept of move-then-schedule algorithms,
in particular to see how moving instructions can help loop compaction. As explained by B. Rau
in [22], "although such code motion can yield improvements in the schedule, it is not always clear
which operations should be moved around the back edge, in which direction and how many times to
get the best results. [...] How close it gets, in practice, to the optimal has not been studied, and,
in fact, for this approach, even the notion of optimal has not been defined." Following the ideas
developed in decomposed software pipelining [9, 27, 4], we show how we can find directly in
one pre-processing step a "good" (1) loop shifting (i.e. how to move statements across iterations) so
that the loop compaction is more likely to be improved. The general idea of this two-step heuristic
is the same as for decomposed software pipelining: we decouple the software pipelining problem
into a cyclic scheduling problem without resource constraints (finding the loop shifting) and an
acyclic scheduling problem with resource constraints (the loop compaction).
The rest of the paper is organized as follows. In Section 2, we recall the software pipelining problem and well-known results such as problem complexity, lower bounds for the initiation interval, etc. In Section 3, we explain why combining loop shifting with loop compaction can give better performance than loop compaction alone. Loop shifting changes some loop independent dependences into loop carried dependences and vice versa. Then, loop compaction schedules the body of the loop considering only loop independent dependences, but taking into account the details of the target architecture. A first optimization is to shift statements so as to minimize the critical path for loop compaction, as was done in [4] using an algorithm due to Leiserson and Saxe. A second optimization is to minimize the number of constraints for loop compaction, i.e. to shift statements so that there remain as few loop independent edges as possible. This optimization is the main contribution of the paper and is presented in full detail in Section 4. Section 5 discusses some limitations of our technique and how we think we could overcome them in the future.
2 The software pipelining problem
As mentioned in Section 1, our goal is to determine, in the context of move-then-schedule software pipelining algorithms, how to move statements so as to make loop compaction more efficient. We thus need to be able to discuss optimality and performance relative to an optimum. For that, we need a model for the loops we are considering and a model for the architecture we are targeting that are both simple enough so that optimality can be discussed. These simplified models are presented in Section 2.1. To summarize, from a theoretical point of view, we assume a simple loop (i.e. with no conditional branches 2), with constant dependence distances, and a finite number of non pipelined homogeneous functional units.
1 We put "good" in quotation marks because the technique remains of course a heuristic. Loop compaction itself is NP-complete in the case of resource constraints.
2 Optimality for arbitrary loops is in general not easy to discuss, as shown by Schwiegelshohn et al. [25].
Despite this simplified model, our algorithm can still be used in practice for more sophisticated resource models (even if the theoretical guarantee that we give is no longer true). Indeed, the loop shifting technique that we develop is the first phase of the process and does not depend on the architecture model but only on dependence constraints. It just shifts statements so that the critical path and the number of constraints for loop compaction are minimized. Resource constraints are taken into account only in the second phase of the algorithm, when compacting the loop, and a specific and aggressive instruction scheduler can be used for this task. Handling conditional branches however has not been considered yet, although move-then-schedule algorithms usually have this capability [19, 27]. We will thus explore in the future how this feature can be integrated in our shifting technique. We could also rely on predicated execution as modulo scheduling algorithms do.
2.1 Problem formulation
We consider the problem of scheduling a loop with a possibly very large number of iterations. The loop is represented by a finite, vertex-weighted, edge-weighted directed multigraph G = (V, E, d, w). The vertices V model the statements of the loop body: each v ∈ V represents a set of operations (v, k), one for each iteration k of the loop. Each statement v has a delay (or latency) d(v). The directed edges E model dependence constraints: each edge e = (u, v) has a weight w(e) ∈ N, the dependence distance, that expresses the fact that the operation (u, k) (the instance of statement u at iteration k) must be completed before the execution of the operation (v, k + w(e)) (the instance of statement v at iteration k + w(e)).
In terms of loops, an edge e corresponds to a loop independent dependence if w(e) = 0 and to a loop carried dependence otherwise. A loop independent dependence is always directed from a statement u to a statement v that is textually after u in the loop body. Thus, if G corresponds to a loop, it has no circuit C of zero weight (w(C) ≠ 0).
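To make the model concrete, the following short Python sketch (with hypothetical names such as DepGraph; nothing here is taken from the paper) shows one way to represent a dependence graph G = (V, E, d, w) with per-statement delays and per-edge distances.

```python
# A minimal sketch of the dependence-graph model G = (V, E, d, w).
# Names (DepGraph, add_edge, ...) are illustrative, not from the paper.
from collections import namedtuple

Edge = namedtuple("Edge", ["src", "dst", "w"])   # w = dependence distance

class DepGraph:
    def __init__(self):
        self.delay = {}    # d(v): latency of statement v
        self.edges = []    # list of Edge(u, v, w)

    def add_vertex(self, v, d):
        self.delay[v] = d

    def add_edge(self, u, v, w):
        assert w >= 0, "dependence distances are nonnegative"
        self.edges.append(Edge(u, v, w))

    def loop_independent_edges(self):
        # edges with w(e) = 0: dependences inside one iteration
        return [e for e in self.edges if e.w == 0]

# Example: statement 'a' (delay 1) uses the value produced by 'b'
# in the previous iteration, while 'b' uses 'a' of the same iteration.
g = DepGraph()
g.add_vertex("a", 1)
g.add_vertex("b", 2)
g.add_edge("b", "a", 1)   # loop-carried dependence (distance 1)
g.add_edge("a", "b", 0)   # loop-independent dependence (distance 0)
print(g.loop_independent_edges())
```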
The goal is to determine a schedule for all operations (v, k), i.e. a function σ: V × N → N that respects the dependence constraints (σ(u, k) + d(u) ≤ σ(v, k + w(e)) for each edge e = (u, v) and each k ∈ N) and the resource constraints: if p non pipelined homogeneous resources are available, no more than p operations should be being processed at any clock cycle. The performance of a schedule σ is measured by its average cycle time λ(σ) defined by:
λ(σ) = lim sup_{k→∞} max_{v∈V} σ(v, k)/k.
Among all schedules, schedules that exhibit a cyclic pattern are particularly interesting. A cyclic schedule is a schedule σ such that σ(v, k) = σ(v, 0) + kλ for all v ∈ V and k ∈ N. The schedule has period λ: the same pattern of computations occurs every λ units of time. Within each period, one and only one instance of each statement is initiated: λ is, for this reason, called the initiation interval in the literature. It is also equal to the average cycle time of the schedule.
2.2 Lower bounds for the average cycle time and complexity results
The average cycle time of any schedule (cyclic or not) is limited both by the resource and the dependence constraints. We denote by λ_∞ (resp. λ_p) the minimal average cycle time achievable by a schedule (cyclic or not) with infinitely many resources (resp. p resources). Of course, λ_p ≥ λ_∞. For a circuit C, we denote by λ(C) the duration to distance ratio λ(C) = d(C)/w(C), and we let λ_max = max_C λ(C) over all circuits C of G. We have the following well-known lower bounds:
Dependence constraints: λ_∞ ≥ λ_max = max_C d(C)/w(C)
Resource constraints: λ_p ≥ (1/p) Σ_{v∈V} d(v)
Without resource constraints, the scheduling problem is polynomially solvable. Indeed, λ_∞ = λ_max and there is an optimal cyclic schedule (possibly with a fractional initiation interval if λ_max is not integral). Such a schedule can be found with standard minimum ratio algorithms [10, pp. 636-641]. With d_max = max_{v∈V} d(v), the complexity is O(|V||E| log(|V| d_max)) if we look for λ_max, and O(|V||E| log(d_max)) if we look for ⌈λ_max⌉. When d(v) = 1 for all v ∈ V, Karp's minimum mean-weight cycle algorithm [15] can be used with complexity O(|V||E|).
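As an illustration of how λ_max = max_C d(C)/w(C) can be computed, the following Python sketch uses a binary search on λ together with a Bellman-Ford negative-circuit test on edge costs λ·w(e) − d(u). This is only one standard minimum-ratio approach, written under the assumption that G has no zero-weight circuit; it is not the exact algorithm referenced in [10] or [15].

```python
# Sketch: approximate lambda_max = max over circuits C of d(C)/w(C).
# A candidate lam is feasible (lam >= lambda_max) iff the graph with
# edge costs lam*w(e) - d(u) has no negative circuit.
def has_negative_circuit(vertices, edges, lam, d):
    # edges: list of (u, v, w); cost of (u, v, w) is lam*w - d[u]
    dist = {v: 0.0 for v in vertices}     # as if from a virtual source
    for _ in range(len(vertices)):
        changed = False
        for (u, v, w) in edges:
            c = lam * w - d[u]
            if dist[u] + c < dist[v] - 1e-12:
                dist[v] = dist[u] + c
                changed = True
        if not changed:
            return False                  # fixed point: no negative circuit
    return True                           # still relaxing: negative circuit

def lambda_max(vertices, edges, d, eps=1e-6):
    lo, hi = 0.0, float(sum(d.values()))  # d(C)/w(C) never exceeds sum of delays
    while hi - lo > eps:
        lam = (lo + hi) / 2.0
        if has_negative_circuit(vertices, edges, lam, d):
            lo = lam                      # lam is below lambda_max
        else:
            hi = lam                      # lam is at or above lambda_max
    return hi
```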
With resource constraints however, the decision problem associated to the problem of determining
a schedule with minimal average cycle time is NP-hard. It is open whether it belongs to NP or
not. When restricting to cyclic schedules, the problem is NP-complete. See [11] for an overview of
the cyclic scheduling problem.
3 Loop shifting and loop compaction
In this section, we explain how loop shifting can be used to improve the performance of loop compaction. We first formalize loop compaction and study the performance of cyclic schedules obtained by loop compaction alone.
3.1 Performances of loop compaction alone
Loop compaction consists in scheduling the body of the loop without trying to mix up iterations. The general principle is the following. We consider the directed graph A(G) that captures the dependences lying within the loop body, in other words the loop independent dependences. These correspond to edges e such that w(e) = 0. The graph A(G) is acyclic since G has no circuit C such that w(C) = 0. It can thus be scheduled using techniques for directed acyclic graphs, for example list scheduling. Then, the new pattern built for the loop body is repeated to define a cyclic schedule for the whole loop. Resource constraints and dependence constraints are respected inside the body by the list scheduling, while resource constraints and dependence constraints between different iterations are respected by the fact that the patterns do not overlap. The algorithm is the following.
Algorithm 1 (Loop compaction)
Let G = (V, E, d, w) be a dependence graph.
1. Define A(G) = (V, E_0, d) where E_0 = {e ∈ E | w(e) = 0}.
2. Perform a list scheduling σ_a on A(G).
3. Compute the makespan of σ_a: m_a = max_{v∈V} (σ_a(v) + d(v)).
4. Define the cyclic schedule σ by: ∀v ∈ V, ∀k ∈ N, σ(v, k) = σ_a(v) + k·m_a.
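A minimal Python sketch of Algorithm 1 is given below: the loop-independent edges define the acyclic graph A(G), a greedy list scheduling places each statement on one of p identical non-pipelined units, and the makespan of the pattern is then used as the initiation interval. Names and the tie-breaking rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Algorithm 1: list scheduling of the zero-weight subgraph A(G)
# on p identical non-pipelined units, then repetition of the pattern.
def loop_compaction(delay, edges, p):
    verts = list(delay)
    zero_preds = {v: [] for v in verts}
    for (u, v, w) in edges:
        if w == 0:
            zero_preds[v].append(u)       # loop-independent dependences only

    start, finish, t = {}, {}, 0
    while len(start) < len(verts):
        busy = sum(1 for v in start if start[v] <= t < finish[v])
        ready = [v for v in verts
                 if v not in start
                 and all(u in finish and finish[u] <= t for u in zero_preds[v])]
        for v in ready[: max(p - busy, 0)]:   # fill the free units greedily
            start[v] = t
            finish[v] = t + delay[v]
        t += 1

    makespan = max(finish.values())       # initiation interval of the naive scheme
    # cyclic schedule: sigma(v, k) = start[v] + k * makespan
    return start, makespan
```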
Because of dependence constraints, loop compaction is limited by critical paths, i.e. paths P of maximal delay d(P). We denote by Φ(G) the maximal delay of a path in A(G). Whatever the schedule chosen for loop compaction, the cyclic schedule satisfies λ ≥ Φ(G). Furthermore, if a list scheduling is used, Coffman's technique [6] shows that there is a path P in A(G) such that m_a ≤ d(P) + (1/p) Σ_{v∉P} d(v). How is this related to the optimal initiation intervals λ_∞ and λ_p? We know that λ_p ≥ (1/p) Σ_{v∈V} d(v).
For the acyclic scheduling problem, Φ(G) is a lower bound for the makespan of any schedule. Thus, list scheduling is a heuristic with a worst-case performance ratio 2 − 1/p. Here, unfortunately, Φ(G) has a priori nothing to do with the minimal average cycle time λ_p. This is the reason why loop compaction alone can be arbitrarily bad.
Our goal is now to mix up iterations (through loop shifting, see the following section) so that the resulting acyclic graph A(G), the subgraph of loop independent dependences, is more likely to be optimized by loop compaction.
3.2 Loop shifting
We first define loop shifting formally. Loop shifting consists in the following transformation. We define for each statement v a shift r(v), which means that we delay operation (v, k) by r(v) iterations. In other words, instead of considering that the vertex v in the graph G represents all the operations of the form (v, k), we consider that it represents all the operations of the form (v, k − r(v)). The new dependence distance w_r(e) for an edge e = (u, v) is w_r(e) = w(e) + r(v) − r(u), since the dependence is from (u, k − r(u)) to (v, k − r(u) + w(e)), i.e., from iteration k to iteration k + w_r(e) of the shifted loop. This defines a transformed graph G_r = (V, E, d, w_r). Note that the shift does not change the weight of circuits: for all circuits C, w_r(C) = w(C).
Depending on the value of w_r(e), the two operations in dependence are computed in different iterations or not in the transformed code: if w_r(e) > 0, the two operations are computed in the original order and the dependence is now a loop carried dependence. If w_r(e) = 0, both operations are computed in the same iteration, and we place the statement corresponding to u textually before the statement corresponding to v so as to preserve the dependence as a loop independent dependence. This reordering is always possible since the transformed graph G_r has no zero-weight circuit (G and G_r have the same circuit weights). If w_r(e) < 0, the loop shifting is not legal.
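The transformation itself is a one-line formula once a convention is fixed. The sketch below assumes the convention w_r(e) = w(e) + r(v) − r(u) and simply recomputes the distances, rejecting shifts that would create a negative distance.

```python
# Sketch: applying a loop shift (retiming) r to the dependence distances,
# with the convention w_r(e) = w(e) + r(v) - r(u) (assumed here).
def retime(edges, r):
    """edges: list of (u, v, w); r: dict vertex -> shift (iterations of delay)."""
    new_edges = []
    for (u, v, w) in edges:
        w_r = w + r[v] - r[u]
        if w_r < 0:
            raise ValueError("illegal retiming: negative distance on (%s,%s)" % (u, v))
        new_edges.append((u, v, w_r))
    return new_edges

# The shift never changes circuit weights; it only changes how dependences
# are split between loop-independent (w_r = 0) and loop-carried (w_r > 0).
edges = [("a", "b", 0), ("b", "a", 2)]
print(retime(edges, {"a": 0, "b": 1}))   # [('a', 'b', 1), ('b', 'a', 1)]
```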
Note that G_r and G are two representations of the same problem. Indeed, there is a one-to-one correspondence between the schedules for G and the schedules for G_r: σ_r is a schedule for G_r if and only if the function σ, defined by σ(v, k) = σ_r(v, k + r(v)), is a schedule for G. In other words, reasoning on G_r is just a change of representation. It cannot prevent us from finding a schedule. The only difference between the original code and the shifted code is due to loop bounds: the shifted code is typically a standard loop plus a prologue and an epilogue (see the example in Section 4.3).
Such a function r is called a legal retiming in the context of synchronous VLSI circuits [17]. Each vertex v represents an operator, with a delay d(v). The weight w(e) of an edge e is interpreted as a number of registers. Retiming amounts to removing r(u) registers from the weight of each edge leaving u and adding r(v) registers to each edge entering v. The constraint w_r(e) ≥ 0 means that a negative number of registers is not allowed for a legal retiming. The graph A(G) that we used in loop compaction (Algorithm 1) is the graph of edges without registers. What we called Φ(G) is now the largest delay of a path without registers, called the clock period of the circuit. This link between loop shifting and circuit retiming is not new. It has been used in several algorithms on loop transformations (see for example [5, 3, 4, 7]), including software pipelining.
3.3 Selecting loop shifting for loop compaction
How can we select a good shifting for loop compaction? Let us first consider the strategies followed by the different move-then-schedule algorithms.
Enhanced software pipelining and its extensions [19], circular software pipelining [14], and rotation software pipelining [5] use similar approaches: they do loop compaction, then they shift backwards (or forwards) the vertices that appear at the beginning (resp. end) of the loop body schedule. In other words, candidates for backwards shifting are the sources of A(G) and candidates for forwards shifting are the sinks of A(G). This rotation is performed as long as there are some benefits for the schedule, but no guarantee is given for such a technique.
In decomposed software pipelining, the principle is slightly different. The algorithm is not an iterative process that uses loop shifting and loop compaction alternately. Loop shifting is chosen once, following a mathematically well-defined objective, and then the loop is scheduled. Two intuitive objectives may be to shift statements so as to minimize:
the maximal delay Φ(G) of a path in A(G), since it is tightly linked to the guaranteed bound for the list scheduling. As shown in Section 3.1, it is a lower bound for the performance of loop compaction, and reducing Φ(G) reduces the performance upper bound when list scheduling is used.
the number of edges in the acyclic graph A(G), so as to reduce the number of dependence constraints for loop compaction. Intuitively, the fewer constraints, the more freedom for exploiting resources.
Until now, all effort has been put into the first objective. In [9] and [27], the loop is first software-pipelined assuming unlimited resources. The cyclic schedule obtained corresponds to a particular retiming r which is then used for compacting the loop. It can be shown that, with this technique, the maximal critical path in A(G_r) is less than λ_∞ + d_max − 1. In [4], the shift is chosen so that the critical path for loop compaction is minimal, using the retiming algorithm due to Leiserson and Saxe [17] for clock-period minimization (see Algorithm 2 below). The derived retiming r is such that Φ(G_r) = Φ_opt, the minimum achievable clock period for G. It can also be shown that Φ_opt ≤ λ_∞ + d_max − 1.
Both techniques lead to similar guaranteed performances for non pipelined resources. Indeed, the performances of loop compaction (see Equation 2) now become:
λ ≤ (1/p) Σ_{v∈V} d(v) + (1 − 1/p) Φ(G_r) ≤ (2 − 1/p) λ_p + (1 − 1/p)(d_max − 1).   (3)
Unlike for loop compaction alone, the critical path in A(G_r) is now related to λ_p. The performances are not arbitrarily bad as for loop compaction alone.
Both techniques however are limited by the fact that they optimize the retiming only in the critical parts of the graph (for an example of this situation, see Section 4.1). For this reason, they cannot address the second objective, trying to retime the graph so that as few dependences as possible are loop independent. In [4], an integer linear programming formulation is proposed to solve this problem. We show in the next section that a pure graph-theoretic approach is possible, as Calland et al. suspected.
Before that, let us recall the algorithm of Leiserson and Saxe. The technique is to use a binary search for determining the minimal achievable clock period Φ_opt. To test whether each potential clock period φ is feasible, the following O(|V||E|) algorithm is used. It produces a legal retiming r of G such that G_r is a synchronous circuit with clock period Φ(G_r) ≤ φ, if such a retiming exists. The overall complexity is O(|V||E| log |V|).
Algorithm 2 (Feasible clock period)
1. For each vertex v ∈ V, set r(v) to 0.
2. Repeat the following |V| − 1 times:
(a) Compute the graph G_r with the existing values for r.
(b) For any vertex v ∈ V, compute Δ(v), the maximum sum d(P) of vertex delays along any zero-weight directed path P in G_r leading to v.
(c) For each vertex v such that Δ(v) > φ, set r(v) to r(v) + 1.
3. Run the same algorithm used for Step (2b) to compute Φ(G_r). If Φ(G_r) > φ then no feasible retiming exists. Otherwise, r is the desired retiming.
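The following Python sketch mimics Algorithm 2 under the same assumptions as before (convention w_r(e) = w(e) + r(v) − r(u), zero-weight subgraph of a legally retimed graph acyclic); Δ(v) is computed by a topological propagation over the zero-weight edges. It is an illustrative reimplementation, not the original code.

```python
# Illustrative reimplementation of Algorithm 2 (feasible clock period phi).
def zero_weight_delays(delay, edges):
    # Delta(v): maximum sum of delays along a zero-weight path ending in v.
    zero = [(u, v) for (u, v, w) in edges if w == 0]
    indeg = {v: 0 for v in delay}
    for (_, v) in zero:
        indeg[v] += 1
    delta = {v: delay[v] for v in delay}
    stack = [v for v in delay if indeg[v] == 0]
    while stack:                           # topological propagation
        u = stack.pop()
        for (a, v) in zero:
            if a == u:
                delta[v] = max(delta[v], delta[u] + delay[v])
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
    return delta

def feasible_clock_period(delay, edges, phi):
    r = {v: 0 for v in delay}
    for _ in range(len(delay) - 1):                       # repeat |V| - 1 times
        retimed = [(u, v, w + r[v] - r[u]) for (u, v, w) in edges]
        delta = zero_weight_delays(delay, retimed)
        for v in delay:
            if delta[v] > phi:
                r[v] += 1
    retimed = [(u, v, w + r[v] - r[u]) for (u, v, w) in edges]
    if max(zero_weight_delays(delay, retimed).values()) > phi:
        return None                                       # phi is not feasible
    return r
```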
4 Minimizing the number of dependence constraints for loop compaction
We now deal with the problem of finding a loop shifting such that the number of dependence constraints for loop compaction is minimized. We first consider the particular case where all dependence constraints can be removed (Section 4.1): we give an algorithm that either finds such a retiming or proves that no such retiming exists. Section 4.2 is the heart of the paper: we give an algorithm that minimizes by retiming the number of zero-weight edges of a graph. Then, in Section 4.3, we run a complete example to illustrate our technique. Finally, in Section 4.4, we extend this algorithm to minimize the number of zero-weight edges without increasing the clock period beyond a given constant (which can be, for instance, the minimal clock period). This allows us to combine both objectives proposed in Section 3.3: applying loop shifting so as to minimize both the number of constraints and the critical path for loop compaction.
4.1 A particular case: the fully parallel loop body
We first give an example that illustrates why minimizing the number of constraints for loop compaction may be useful. This example is also a case where all constraints can be removed.
Example 1
do i=1,n
The dependence graph of this example is represented in Figure 1(a): we assume an execution time of two cycles for the loads, one cycle for the addition, and three cycles for the multiplication. Because of the multiplication, the minimal clock period cannot be less than 3. Figure 1(b) depicts the retimed graph with a clock period of 3 and the retiming values found if we run the Leiserson and Saxe algorithm (Algorithm 2). Figure 1(c) represents the graph obtained when minimizing the number of zero-weight edges, still with a clock period of 3.
Figure 1: (a) The dependence graph of Example 1, (b) After clock period minimization, (c) After loop-carried dependence minimization.
Figure 2: (a) The loop compaction for Example 1, (b) After clock period minimization, (c) After loop-carried dependence minimization.
Following the principle of decomposed software pipelining, once the retiming (shift) is found, we schedule the subgraph generated by the zero-weight edges (compaction) in order to find the pattern of the loop body. We assume some limits on the resources for our schedule: we are given one load/store unit and two ALUs.
As we can see in Figure 2(a), the simple compaction without shift gives a very bad result (with initiation interval 8) since the constraints impose a sequential execution of the operations. After clock period minimization (Figure 2(b)), the multiplication no longer has to be executed after the addition and a significant improvement is found, but we are still limited by the single load/store resource associated with the two loop independent dependences, which constrain the addition to wait for the serial execution of the two loads (the initiation interval is 5). Finally, with the minimization of the loop-carried dependences (Figure 2(c)), there are no more constraints for loop compaction (except resource constraints) and we get an optimal result with initiation interval equal to 4. This cannot be improved because of the two loads and the single load/store resource. In this example, resources can be kept busy all the time.
Note that if we assume pipelined resources which can fetch one instruction each cycle (with the same delays for latency), we get similar results (see Figure 3). The corresponding initiation intervals are respectively 7, 5 and 3, if we do not try to initiate the next iteration before the complete end of the previous one, and 3, 4, 3 if we overlap patterns (see Section 5 for more about what we mean by overlapping).
Figure 3: (a) The loop compaction for Example 1 with pipelined resources, (b) After clock period minimization, (c) After loop-carried dependence minimization.
Example 1 was an easy case to solve because it was possible to remove all constraints for loop compaction. After retiming, the loop body was fully parallel. In this case, the compaction phase is reduced to the problem of scheduling tasks without precedence constraints, which is, though NP-complete, easier: guaranteed heuristics with a better performance ratio than list scheduling exist. More formally, we say that the body of a loop is fully parallel when: ∀e ∈ E, w(e) ≥ 1.
When is it possible to shift the loop such that all dependences become loop carried? Let l(C) be the length (number of edges) of a circuit C in the graph G. The following proposition gives a simple and efficient way to find a retiming that makes the body fully parallel.
Proposition 1 Shifting a loop with dependence graph G into a loop with fully parallel body is possible if and only if w(C) ≥ l(C) for all circuits C of G.
Proof: Assume first that G can be retimed so that the loop has a fully parallel body; then there exists a retiming r such that ∀e ∈ E, w_r(e) ≥ 1. Summing up these inequalities on each circuit C of G, we get w_r(C) ≥ l(C), and since the weight of a circuit is unchanged by retiming, we get w(C) ≥ l(C).
Conversely, assume that for all circuits C of G, w(C) ≥ l(C). Then the graph G′ = (V, E, d, w′) defined by ∀e ∈ E, w′(e) = w(e) − 1 has no circuit of negative weight. We can thus define, for each vertex u, δ(u) the minimal weight of a path leading to u. By construction, δ(v) ≤ δ(u) + w′(e) for each edge e = (u, v), since the concatenation of a path of weight δ(u) leading to u and of the edge e is a path leading to v. In other words, r = −δ is the desired retiming: w_r(e) = w(e) + r(v) − r(u) = w′(e) + 1 + δ(u) − δ(v) ≥ 1.
From Proposition 1, we can deduce an algorithm that finds a retiming of G such that the loop body is fully parallel, or answers that no such retiming can be found.
Algorithm 3 (Fully parallel body)
1. Build G′ by adding a new source s to G, setting w′(e) = w(e) − 1 for each e ∈ E, and adding an edge (s, v) of weight 0 for each vertex v ∈ V.
2. Apply the Bellman-Ford algorithm on G′ to find the shortest path from s to any vertex in V. Two cases can occur:
the Bellman-Ford algorithm finds a circuit of negative weight; in this case return FALSE.
the Bellman-Ford algorithm finds some values δ(u) for each vertex u of G′; in this case set r(u) = −δ(u) and return TRUE.
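A compact Python version of Algorithm 3 is sketched below. It assumes the retiming convention w_r(e) = w(e) + r(v) − r(u), under which r(u) = −δ(u) (with δ the shortest-path values on the weights w(e) − 1) gives the desired shift; the sign would flip with the opposite convention.

```python
# Sketch of Algorithm 3: decide whether some shift makes the body fully
# parallel, using Bellman-Ford on weights w(e) - 1 from an added source.
def fully_parallel_retiming(vertices, edges):
    dist = {v: 0 for v in vertices}          # source s with 0-weight edges to all
    arcs = [(u, v, w - 1) for (u, v, w) in edges]
    for _ in range(len(vertices)):           # enough passes to detect circuits
        changed = False
        for (u, v, c) in arcs:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                changed = True
        if not changed:
            return {v: -dist[v] for v in vertices}   # legal shift found
    return None                               # negative circuit: impossible

# e.g. a two-statement loop whose only circuit has weight 2 >= length 2:
print(fully_parallel_retiming(["a", "b"], [("a", "b", 0), ("b", "a", 2)]))
# -> {'a': 0, 'b': 1}: both dependences become loop carried.
```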
The complexity of this algorithm is dominated by the complexity of the Bellman-Ford algorithm, which is O(|V||E|). We can also notice that if the graph is an acyclic directed graph it can always be retimed so that the loop has a fully parallel body, since it does not contain any circuit (and consequently, there is no circuit of negative weight in G′). In this case, the algorithm can be simplified into a simple graph traversal instead of the complete Bellman-Ford algorithm, leading to a linear-time algorithm.
4.2 Zero-weight edges minimization (general case)
Since we cannot always make the loop body fully parallel, we must find another solution to minimize the constraints for loop compaction. We give here a pure graph algorithm to find a retiming for which as few edges as possible are edges with no register (i.e. as many dependences as possible are loop-carried after loop shifting). The algorithm is an adaptation of a minimal cost flow algorithm, known as the out-of-kilter method ([10, pp. 178-185]), proposed by Fulkerson in 1961.
4.2.1 Problem analysis
Given a dependence graph G = (V, E, d, w) and a retiming r of G, we define, as in the previous sections, the retimed graph G_r = (V, E, d, w_r) with w_r(e) = w(e) + r(v) − r(u) for each edge e = (u, v). We want to count the number of edges e such that w_r(e) = 0. For that, we define the cost v_r(e) of an edge e as follows: v_r(e) = 1 if w_r(e) = 0, and v_r(e) = 0 otherwise.
We define the cost of the retiming r as Σ_{e∈E} v_r(e), the number of zero-weight edges in the retimed graph. We say that r is optimal when Σ_{e∈E} v_r(e) is minimal, i.e. when r minimizes the number of zero-weight edges of G_r. We will first give a lower bound for the cost of any retiming, then we will show how we can find a retiming that achieves this bound.
We will use flows in G, defined as functions f: E → Z such that, for each vertex, the sum of the flow on the entering edges equals the sum of the flow on the leaving edges. A nonnegative flow is a flow such that ∀e ∈ E, f(e) ≥ 0. A flow f corresponds to a union of cycles (a multi-cycle) C_f that traverses f(e) times each edge e: when f(e) > 0 the edge is used forwards, when f(e) < 0 the edge is used backwards. When the flow is nonnegative, the flow corresponds to a union of circuits (a multi-circuit) [10, p. 163].
For a given legal retiming r and a given nonnegative flow f, we define for each edge e ∈ E its kilter index ki(e) by:
ki(e) = v_r(e) + f(e) w_r(e) − min(f(e), 1).
It is easy to check that the kilter index is always nonnegative since v_r(e) ≥ 0, w_r(e) ≥ 0, and since the flow is nonnegative. We will show in Proposition 4 that, when all kilter indices are zero, we have found an optimal retiming. Before that, we need an independence property related to flows and retimings.
We define Σ_{e∈E} f(e) w_r(e) as the cost of a flow f for the graph G_r. Of course, the cost of a flow depends on the edges of C_f, i.e. the edges e such that f(e) ≠ 0. The following proposition shows that the cost of a flow does not depend on the retiming, i.e. the costs of f for G and for G_r are equal.
Proposition 2 Σ_{e∈E} f(e) w_r(e) = Σ_{e∈E} f(e) w(e).
Proof: Σ_{e=(u,v)∈E} f(e) w_r(e) = Σ_{e=(u,v)∈E} f(e)(w(e) + r(v) − r(u)) = Σ_{e∈E} f(e) w(e) + Σ_{v∈V} r(v)(Σ_{e entering v} f(e) − Σ_{e leaving v} f(e)) = Σ_{e∈E} f(e) w(e), since f is a flow.
We are now ready to give a lower bound for the cost of any retiming.
Proposition 3 For any legal retiming r and any nonnegative flow g: Σ_{e∈E} v_r(e) ≥ Σ_{e∈C_g} (1 − g(e) w(e)).
Proof: Σ_{e∈C_g} (1 − g(e) w(e)) = Σ_{e∈C_g} (1 − g(e) w_r(e)) (because of Prop. 2) ≤ Σ_{e∈C_g} v_r(e) (since ∀e ∈ E, ki(e) ≥ 0) ≤ Σ_{e∈E} v_r(e) (since ∀e ∈ E, v_r(e) ≥ 0).
Proposition 4 Let r be a legal retiming. If there is a nonnegative flow f such that ∀e ∈ E, ki(e) = 0, then r is optimal.
Proof: If ki(e) = 0 for each edge of G, then Σ_{e∈E} v_r(e) = Σ_{e∈C_f} (1 − f(e) w_r(e)) = Σ_{e∈C_f} (1 − f(e) w(e)), and by Proposition 3, r is optimal (the lower bound is reached).
Proposition 4 alone does not show that the lower bound can be achieved. It remains to show that we can find a retiming and a flow such that all kilter indices are zero. We now study when this happens, by characterizing the edges in terms of their kilter index.
4.2.2 Characterization of edges
Let us represent the edges e by the pair (f(e), w_r(e)): we get the diagram of Figure 4, called the kilter diagram. Edges e for which ki(e) = 0 correspond to the black angled line. Below and above this line, ki(e) > 0. We call conformable the edges for which ki(e) = 0, non conformable the edges for which ki(e) > 0. We assign a type to each edge e depending on the values w_r(e) and f(e), as follows:
Conformable edges:
Type 1: w_r(e) > 1 and f(e) = 0
Type 3: w_r(e) = 1 and f(e) = 0
Type 4: w_r(e) = 1 and f(e) = 1
Type 6: w_r(e) = 0 and f(e) = 1
Type 7: w_r(e) = 0 and f(e) > 1
Non conformable edges:
Type 2: w_r(e) > 0 and f(e) > 1, or w_r(e) > 1 and f(e) > 0
Type 5: w_r(e) = 0 and f(e) = 0
If every edge is conformable, that is if ki(e) = 0 for each edge e, then the optimum is reached. Furthermore, we can notice that for each conformable edge e, it is possible to modify by one unit either w_r(e) or f(e) while keeping it conformable, and for each non conformable edge, it is possible to decrease strictly its kilter index by changing either w_r(e) or f(e) by one unit. More precisely, we have the following cases:
if f(e) increases: edges of types 3, 6 and 7 become respectively of type 4, 7, and 7, and remain conformable. An edge of type 5 becomes conformable (of type 6). The kilter index of any other edge increases (strictly).
if f(e) decreases: edges of types 4 and 7 become respectively of type 3 and 6 or 7, and remain conformable. An edge of type 2 becomes conformable (type 4 or 1) or its kilter index decreases (strictly). The kilter index of edges of type 6 increases (strictly).
if w_r(e) increases: the edges of type 1, 3 and 6 become respectively of type 1, 1 and 4, and remain conformable. An edge of type 5 becomes conformable (type 3). The kilter index of any other edge increases (strictly).
if w_r(e) decreases: the edges of type 1 and 4 become respectively of type 1 or 3 and 6, and remain conformable. An edge of type 2 becomes conformable (type 4 or 7) or its kilter index decreases (strictly). The kilter index of edges of type 3 increases (strictly).
We are going to exploit these possibilities in order to converge towards an optimal solution, by successive modifications of the retiming or of the flow.
Figure 4: Kilter diagram and edge types.
Figure 5: Coloration of the different types of edges.
4.2.3 Algorithm
The algorithm starts from a feasible initial solution and makes it evolve towards an optimal solution. The null retiming and the null flow are respectively a legal retiming and a feasible flow: for them we have, for each edge e ∈ E, w_r(e) = w(e) and f(e) = 0, which means a kilter index equal to 1 for a zero-weight edge and 0 for any other edge. Notice that for this solution any edge is of type 1, 3 or 5. Only the edges of type 5 are non conformable. The problem is then to make the kilter index of type 5 edges decrease without increasing the kilter index of any other edge. To realize that, we assign to each type of edge a color (see Figure 5) that expresses the degree of freedom it allows:
black for the edges of type 3, 5 and 6: f(e) and w_r(e) can only increase.
green for the edges of type 2 and 4: f(e) and w_r(e) can only decrease.
red for the edges of type 7: f(e) can increase or decrease but w_r(e) should be kept constant.
uncoloured for the edges of type 1: f(e) cannot be changed while w_r(e) can increase or decrease.
Note: actually, during the algorithm, we will not have to take into account the green edges of type 2 any longer. Indeed, if we start from a solution that does not involve such edges, and if we make the kilter index of black edges of type 5 decrease without increasing the kilter index of any other edge, we will never create any other non conformable edge. In particular no edge of type 2 will be created.
We can now use the painting lemma (due to Minty, 1966, see [10, pp. 163-165]) as in a standard out-of-kilter algorithm.
Lemma (Painting lemma) Let G = (V, E) be a graph whose edges are arbitrarily colored in black, green, and red. (Some edges may be uncoloured.) Assume that there exists at least one black edge e_0. Then one and only one of the two following propositions is true:
a) there is a cycle containing e_0, without any uncoloured edge, with all black edges oriented in the same direction as e_0, and all green edges oriented in the opposite direction.
b) there is a cocycle containing e_0, without any red edge, with all black edges oriented in the same direction as e_0, and all green edges oriented in the opposite direction.
Figure 6: The two cases of Minty's lemma.
Proof: The proof is a constructive proof based on a labeling process, see [10].
We end up with the following algorithm:
Algorithm 4 (Minimize the number of zero-weight edges)
1. Start with the null retiming (r = 0) and the null flow (f = 0); color the edges as explained above.
2. If ∀e ∈ E, ki(e) = 0:
(a) then return: r is an optimal retiming.
(b) else choose a non conformable edge e_0 (it is black) and apply the painting lemma:
i. if a cycle is found, then add one (respectively subtract one) to the flow of any edge oriented in the cycle in the direction of e_0 (resp. in the opposite direction).
ii. if a cocycle is found, this cocycle determines a partition of the vertices into two sets. Add one to the retiming of any vertex that belongs to the same set as the terminal vertex of e_0.
(c) update the color of the edges and go back to Step 2.
Proof: By definition of the edge colors, which express what flow or retiming changes are possible, it is easy to check that at each step of the algorithm an edge whose retiming or flow value changes becomes conformable if it was not, and remains conformable otherwise. Furthermore, the retiming remains legal (w_r(e) ≥ 0) and the flow nonnegative (f(e) ≥ 0). So, after each application of the painting lemma, at least one non conformable edge becomes conformable. By repeating this operation (coloration, then variation of the flow or of the retiming) until all edges become conformable, we end up with an optimal retiming.
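For illustration, here is a self-contained Python sketch of Algorithm 4. It hard-codes the edge types, colors and painting-lemma search described above, and assumes the convention w_r(e) = w(e) + r(v) − r(u); the conformability test used (f = 0 with w_r ≥ 1, f = 1 with w_r = 1, or f ≥ 1 with w_r = 0) is our reading of the kilter diagram, so treat this as a sketch rather than a faithful reproduction of the paper's implementation.

```python
# Illustrative sketch of Algorithm 4 (out-of-kilter style minimization of
# the number of zero-weight edges after retiming).
def minimize_zero_weight_edges(vertices, edges):
    r = {v: 0 for v in vertices}          # retiming
    f = [0] * len(edges)                  # flow, one value per edge index

    def wr(i):
        u, v, w = edges[i]
        return w + r[v] - r[u]

    def color(i):
        fl, w = f[i], wr(i)
        if fl == 0 and w <= 1: return "black"        # types 5 (w=0) and 3 (w=1)
        if fl == 1 and w == 0: return "black"        # type 6
        if fl == 0 and w > 1:  return "uncoloured"   # type 1
        if fl == 1 and w == 1: return "green"        # type 4
        if fl > 1 and w == 0:  return "red"          # type 7
        return "green"                               # type 2 (never created)

    while True:
        bad = [i for i in range(len(edges)) if f[i] == 0 and wr(i) == 0]
        if not bad:
            return r                      # every edge is conformable: optimal
        i0 = bad[0]
        u0, v0, _ = edges[i0]
        # Painting-lemma search from the head of e0: black edges forwards,
        # green edges backwards, red edges both ways, uncoloured never.
        S, parent, stack = {v0}, {v0: None}, [v0]
        while stack:
            x = stack.pop()
            for j, (a, b, _) in enumerate(edges):
                if j == i0:
                    continue
                c, nxt = color(j), None
                if c == "black" and a == x and b not in S:   nxt, fwd = b, True
                elif c == "red" and a == x and b not in S:   nxt, fwd = b, True
                elif c == "green" and b == x and a not in S: nxt, fwd = a, False
                elif c == "red" and b == x and a not in S:   nxt, fwd = a, False
                if nxt is not None:
                    S.add(nxt); parent[nxt] = (x, j, fwd); stack.append(nxt)
        if u0 in S:
            # Case (a): a cycle through e0.  Push one unit of flow around it.
            f[i0] += 1
            x = u0
            while parent[x] is not None:
                prev, j, fwd = parent[x]
                f[j] += 1 if fwd else -1
                x = prev
        else:
            # Case (b): a cocycle.  Shift by one every vertex on v0's side.
            for v in S:
                r[v] += 1
```

On an instance such as the running example of Section 4.3, this loop would alternate flow pushes along circuits and unit shifts along cocycles, in the same spirit as the steps of Figure 8.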
4.2.4 Complexity
Looking for the cycle or the cocycle in the painting lemma can be done by a marking procedure with complexity O(|E|) (see [10, pp. 164-165]). As, at each step, at least one non conformable edge becomes conformable, the total number of steps is less than or equal to the number of zero-weight edges in the initial graph, which is itself less than the total number of edges in the graph. Thus, the complexity of Algorithm 4 is O(|E|^2).
Note: for the implementation, the algorithm can be optimized by first considering each strongly connected component independently. Indeed, once each strongly connected component has been retimed, we can define (as mentioned in Section 4.1) a retiming value for each strongly connected component such that all edges between different strongly connected components have a positive weight. This can be done by a simple traversal of the directed acyclic graph defined by the strongly connected components.
4.3 Applying the algorithm
We now run a complete example for which minimizing the number of zero-weight edges gives some benefit. The example is a toy example that computes the floating point number a_n given by the recursion:
a
The straight calculation of the powers is expensive and can be improved. A possibility is to make the calculation as follows (after initialization of the first values):
Example 2
do
Assume, for the sake of illustration, that the time for a multiply is twice the time for an add (one cycle). The dependence graph is depicted in Figure 7: the clock period is already minimal (equal to 4, due to the circuit of length 3). The corresponding loop compaction is given, assuming two multipurpose units. The restricted resources impose the execution of the three multiplications in at least 4 cycles, and because of the loop independent dependences, the second addition has to wait for the last multiplication before starting, so we get a pattern which is 5 cycles long.
The different steps of the algorithm are the following (see also Figure 8):
we choose, for example, (d, a) as a non conformable edge, we find the circuit (a, b, c, d, a) and we change the flow accordingly.
we choose (t, a) as a non conformable edge, we find the cocycle defined by the sets {a} and V \ {a}, and we change the retiming accordingly.
we choose (c, t) as a non conformable edge, we find the circuit (t, a, d, c, t) and we change the flow accordingly.
Figure 7: Dependence graph and the associated schedule for two resources.
we choose (b, t) as a non conformable edge, we find the cocycle defined by the two sets {t} and V \ {t}, and we change the retiming accordingly.
all edges are now conformable, the retiming is optimal: the retiming values are equal to 1 or 0 (see Figure 8).
Note that, at each step, we have to choose a non conformable edge: choosing a different one than ours can result in a different number of steps and possibly a different solution, but the result will still be optimal. Note also that, on this example, the clock period remains unchanged by the minimization; there is no need to use the technique that will be presented in Section 4.4.
Figure 8: The different steps of the zero-weight edge minimization.
After the minimization (Figure 9), there remain two (instead of four) loop independent dependences, and they form a single path. We just have to fill the second resource with the remaining tasks. This results in a 4-cycle pattern which is (here) optimal because of the resource constraints.
Figure 9: Dependence graph and associated schedule for two resources after transformation.
The resulting shifted code is given below. Since the different retiming values are 1 and 0, an extra prelude and postlude have been added compared to the original code:
do i=4,n
(computed at clock cycle
(computed at clock cycle 1)
(computed at clock cycle 2)
(computed at clock cycle
(computed at clock cycle 2)
4.4 Taking the clock period into account
We now show that the algorithm given in Section 4.2.3 can be extended so as to minimize the number of zero-weight edges, subject to a constraint on the clock period after retiming. In other words, given a dependence graph G = (V, E, d, w), we want to retime G into a graph G_r whose clock period Φ(G_r) is less than or equal to a given constant φ (which must be a feasible clock period), and which has as few zero-weight edges as possible. We recall that the clock period Φ(G) is defined as the largest delay of a zero-weight path: Φ(G) = max{d(P) | P path of G with w(P) = 0}.
4.4.1 Clock edges
Following the Leiserson and Saxe technique [17], we add a new constraint between each pair of vertices that are linked by a path whose delay is greater than the desired clock period, since such a path has to contain at least one register:
if D(u, v) > φ then r(u) − r(v) ≤ W(u, v) − 1.   (4)
In the above equation, W(u, v) denotes the minimal number of registers on any path between u and v, and D(u, v) denotes the maximal delay of a path from u to v with W(u, v) registers (see [17] for a detailed discussion about the clock period minimization problem and how some redundant constraints can be removed).
This corresponds to adding new edges of weight W(u, v) − 1 between each pair of vertices (u, v) such that D(u, v) > φ. We denote by E_clock the set of new edges and we let G′ be the graph G augmented with the clock edges.
We can notice that although the new edges (which we call clock edges) must have no effect on the retiming cost (since they are not part of the original graph, they should not be taken into account when counting the number of zero-weight edges), they may have some influence on the flow cost since they constrain the register moves.
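The construction of the clock edges can be sketched as follows, in the spirit of the W and D matrices of Leiserson and Saxe: a lexicographic all-pairs shortest path on the pairs (w(e), −d(u)) gives W(u, v) and D(u, v), and a clock edge of weight W(u, v) − 1 is emitted whenever D(u, v) > φ. This is an illustrative O(|V|^3) computation, not the optimized version discussed in [17].

```python
# Sketch: build the set E_clock of clock edges for a target clock period phi.
def clock_edges(vertices, edges, delay, phi):
    INF = (float("inf"), float("inf"))
    # lexicographic Floyd-Warshall on pairs (w(e), -d(u))
    best = {(u, v): INF for u in vertices for v in vertices}
    for (u, v, w) in edges:
        best[(u, v)] = min(best[(u, v)], (w, -delay[u]))
    for k in vertices:
        for u in vertices:
            for v in vertices:
                a, b = best[(u, k)], best[(k, v)]
                if a != INF and b != INF:
                    cand = (a[0] + b[0], a[1] + b[1])
                    if cand < best[(u, v)]:
                        best[(u, v)] = cand
    new_edges = []
    for u in vertices:
        for v in vertices:
            if u != v and best[(u, v)] != INF:
                W = best[(u, v)][0]               # min registers on a u -> v path
                D = delay[v] - best[(u, v)][1]    # max delay among those paths
                if D > phi:
                    new_edges.append((u, v, W - 1))   # constraint edge in E_clock
    return new_edges
```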
4.4.2 Changes in the algorithm
We now show how to incorporate these new edges into the algorithm, and how to define for them a coloration compatible with the previously defined colors. The problem is the following: given a dependence graph G = (V, E, d, w), the goal is to find a retiming r: V → Z such that r is legal for G′ (i.e. w_r(e) ≥ 0 for all edges of E ∪ E_clock) and, of course, that minimizes the number of zero-weight edges in E.
Summarizing all previous observations, we change the definition of the kilter index in the following way (the kilter index is still nonnegative): for e ∈ E, ki(e) is defined as before, while for e ∈ E_clock, ki(e) = f(e) w_r(e).
This leads us to change also the proofs of Propositions 3 and 4 to take into account the new edges. We denote by C_f (resp. C_fclock) the set of edges in E (resp. in E_clock) such that f(e) > 0, and by C′_f = C_f ∪ C_fclock the multi-circuit of G′ defined by f.
Proposition 5 For any legal retiming r of G′ and any nonnegative flow f on G′: Σ_{e∈E} v_r(e) ≥ Σ_{e∈C_f} 1 − Σ_{e∈C′_f} f(e) w(e).
Proof: Σ_{e∈C_f} 1 − Σ_{e∈C′_f} f(e) w(e) = Σ_{e∈C_f} 1 − Σ_{e∈C′_f} f(e) w_r(e) (by Proposition 2 applied to G′) = Σ_{e∈E} v_r(e) − Σ_{e∈E∪E_clock} ki(e) (by definition of the kilter index) ≤ Σ_{e∈E} v_r(e).
Proposition 5 gives a new optimality condition that takes the clock edges into account.
Proposition 6 If ∀e ∈ E ∪ E_clock, ki(e) = 0, then r is optimal and furthermore Φ(G_r) ≤ φ.
Proof: The proof is similar to the proof of Proposition 4. If ki(e) = 0 for each edge of G′, then Σ_{e∈E} v_r(e) = Σ_{e∈C_f} 1 − Σ_{e∈C′_f} f(e) w_r(e) = Σ_{e∈C_f} 1 − Σ_{e∈C′_f} f(e) w(e), and by Proposition 5, r is optimal. Furthermore, since the retiming is legal for G′, the inequalities (4) are satisfied by construction of E_clock. In other words, Φ(G_r) ≤ φ.
It remains to draw the kilter diagram of E_clock and to assign types and colors. The types are:
Conformable edges:
Type 1: w_r(e) > 0 and f(e) = 0
Type 3: w_r(e) = 0 and f(e) = 0
Type 4: w_r(e) = 0 and f(e) > 0
Non conformable edges:
Type 2: w_r(e) > 0 and f(e) > 0
The colors are chosen as follows: black for type 3 edges, red for type 4 edges, green for type 2 edges, and uncoloured for type 1 edges (see Figure 10).
Figure 10: Kilter diagram and coloration of clock edges.
Note: we need to start from a legal retiming, i.e. a retiming for which Φ(G_r) is less than φ, so that all edge weights are nonnegative (including the weights of clock edges). Such a retiming is computed using Algorithm 2. Then, initially, all the edges of E_clock are conformable. As the algorithm never creates non conformable edges, green edges in E_clock never appear.
The final algorithm is similar to Algorithm 4. It reaches an optimal retiming while keeping a clock period less than a given constant φ. The number of steps is still bounded by |E| since all clock edges are conformable and remain conformable, but the number of edges of G′ is O(|V|^2) since G′ can be the complete graph in the worst case because of the new clock edges. The marking procedure is thus now O(|V|^2) and the overall complexity of the algorithm is O(|E||V|^2).
5 Conclusions, limitations and future work
As any heuristic, the shift-then-compact algorithm that we proposed can fall into traps, even if we rely on an optimal (exponential) algorithm for loop compaction. In other words, for a given problem instance, the shift that we select may not be the best one for loop compaction. Nevertheless, the separation between two phases, a cyclic problem without resource constraints and an acyclic problem with resource constraints, allows us to convert worst case performance results for the acyclic scheduling problem into worst case performance results for the software pipelining problem (see Equation 3). This important idea, due to Gasperoni and Schwiegelshohn [9], is one of the interests of decomposed software pipelining. No other method (including modulo scheduling) has this property: how close they get to the optimal has not been proved.
This separation into two phases has several other advantages. The first advantage, which we already mentioned, is that we do not need to take the resource model into account until the loop compaction phase. We can thus design or use very aggressive (even exponential) loop compaction algorithms for specific architectures. A second advantage is that it is fairly easy to control how statements move, simply by incorporating additional edges. This is how we enforced the critical path for loop compaction to be smaller than a given constant. This constant could be Φ_opt, but choosing a larger value can give more freedom for the second objective, the minimization of loop independent edges. As long as this value is less than λ_∞ + d_max − 1, the worst-case bound of Equation 3 still holds. Another example is if we want to limit by a constant l(e) the maximal value w_r(e) of an edge e = (u, v), which is linked to the number of registers needed to store the value created by u: we simply add an edge from v to u with weight l(e) − w(e). Taking these additional edges into account can be done as we did for clock edges. A third advantage (that we still need to explore) is that the first phase (the loop shifting) can maybe be used alone as a pre-compilation step, even for architectures that are dynamically scheduled: minimizing the number of dependences inside the loop body, for example, will reduce the number of mispredictions for data speculation [20].
Despite the advantages mentioned above, our technique still has one important weakness, which comes from the fact that we never try to overlap the patterns obtained by loop compaction. In practice of course, instead of waiting for the completion of a pattern before initiating the following one (by choosing the initiation interval equal to the makespan of loop compaction, see Algorithm 1), we could choose a smaller initiation interval as long as resource constraints and dependence constraints are satisfied. For each resource p, we can define end(p), the last clock cycle for which p is used in the acyclic schedule σ_a. Then, given σ_a, we can choose as initiation interval the smallest λ that satisfies all dependences and that is larger than end(p) for each resource p. This is for example how we found the initiation intervals equal to 3, 4 and 3 for Example 2 with pipelined resources (end of Section 4.3). While this overlapping approach may work well in practice, it does not improve in general the worst case performance bound of Equation 3. We still have to find improvements in this direction, especially when dealing with pipelined resources. We point out that the Gasperoni and Schwiegelshohn approach [9] does not have this weakness (at least for the worst case performance bound): by considering the shifting and the schedule for infinite resources as a whole, they can use a loop compaction algorithm with release dates (tasks are not scheduled as soon as possible) that ensures a good overlapping. In the resulting worst case performance bound, d_max can be replaced by 1 for the case of pipelined resources. The problem however is that it seems difficult to incorporate other objectives (such as the zero-weight edge minimization described in this paper) in the Gasperoni and Schwiegelshohn approach since there is no real separation between the shifting and the scheduling problems. Nevertheless, this loop compaction with release dates looks promising and we plan to explore it.
A possibility when dealing with pipelined resources is to change the graph so that d_max = 1, by cutting an operation into several nodes: a first real node corresponding to the effective resource utilization, and several virtual nodes, one for each unit of latency (virtual nodes can be scheduled on infinitely many virtual resources). With this approach, we will find a shorter clock period (because the delay of a node is now cut into several parts) equal to ⌈λ_∞⌉. The situation is actually equivalent to the case of unit delays and unpipelined resources (and thus the bound of Equation 3 is the same with d_max replaced by 1). The problem however is that the algorithm is now pseudo-polynomial because the number of vertices of the graph depends linearly on d_max. This can be unacceptable if pipeline latencies are large. Another strategy is to rely, as several other heuristics do, on loop unrolling. We can hide the preponderance of d_max (in the worst case bound and in the overlapping) by unrolling the loop a sufficient number of times so that λ_p is large compared to d_max. Since resources are limited, unrolling the loop increases the lower bound due to resources and thus the minimal initiation interval. But this is also not completely satisfying. We are thus working on better optimization choices for the initial shift or on an improved compaction phase to overcome this problem.
Currently, our algorithm has been implemented at the source level using the source-to-source transformation library Nestor [26], mainly to check the correctness of our strategies. But we found it difficult to completely control what compiler back-ends do. In particular, a lot of work remains to be done to better understand the link between loop shifting and low-level optimizations such as register allocation, register renaming, strength reduction, and even the way the code is translated! Just consider Example 2 where one of the statements is written in a different but equivalent way: this simple modification reduces λ_∞ and Φ_opt to 3! Minimizing the number of zero-weight edges still leads to the best solution for loop compaction (with a clock period equal to 4), except if we simultaneously want to keep the clock period less than 3. Both programs are equivalent, but the software pipelining problem changes. How can we control this? This leads to the more general question: at which level should software pipelining be performed?
--R
Perfect pipelining
Software pipelining.
Circuit retiming applied to decomposed software pipelining.
Rotation scheduling: a loop pipelining algorithm.
Combining retiming and scheduling techniques for loop parallelization and loop tiling.
Generating close to optimum loop schedules on parallel processors.
Graphs and Algorithms.
Cyclic scheduling on parallel processors: an overview.
Computer architecture: a quantitative approach (2nd edition).
Circular scheduling.
A characterization of the minimum cycle mean in a digraph.
Software pipelining
Retiming synchronous circuitry.
Swing modulo scheduling: a lifetime-sensitive approach
Dynamic speculation and synchronization of data dependences.
Iterative modulo scheduling: an algorithm for software pipelining.
Iterative modulo scheduling.
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing
The Nestor library: A tool for implementing Fortran source to source transformations.
Decomposed software pipelining.
--TR
Graphs and algorithms
Software pipelining: an effective scheduling technique for VLIW machines
On optimal parallelization of arbitrary loops
Circular scheduling
An efficient resource-constrained global scheduling technique for superscalar and VLIW processors
Lifetime-sensitive modulo scheduling
Rotation scheduling
Minimum register requirements for a modulo schedule
Specification of software pipelining using Petri nets
Decomposed software pipelining
Software pipelining
Circuit Retiming Applied to Decomposed Software Pipelining
Computer architecture (2nd ed.)
The IA-64 Architecture at Work
Perfect Pipelining
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing
Swing Modulo Scheduling
--CTR
Přemysl Šůcha, Zdeněk Hanzálek, Antonín Heřmánek, Jan Schier, Scheduling of Iterative Algorithms with Matrix Operations for Efficient FPGA Design--Implementation of Finite Interval Constant Modulus Algorithm, Journal of VLSI Signal Processing Systems, v.46 n.1, p.35-53, January 2007
353953 | Efficient Rule-Based Attribute-Oriented Induction for Data Mining. | Data mining has become an important technique which has tremendous potential in many commercial and industrial applications. Attribute-oriented induction is a powerful mining technique and has been successfully implemented in the data mining system DBMiner (Han et al. Proc. 1996 Int'l Conf. on Data Mining and Knowledge Discovery (KDD'96), Portland, Oregon, 1996). However, its induction capability is limited by the unconditional concept generalization. In this paper, we extend the concept generalization to rule-based concept hierarchy, which enhances greatly its induction power. When previously proposed induction algorithm is applied to the more general rule-based case, a problem of induction anomaly occurs which impacts its efficiency. We have developed an efficient algorithm to facilitate induction on the rule-based case which can avoid the anomaly. Performance studies have shown that the algorithm is superior than a previously proposed algorithm based on backtracking. | Introduction
Data mining (also known as Knowledge Discovery in Databases) is the nontrivial extraction of implicit,
previously unknown, and potentially useful information from data [12]. Over the past twenty years, huge
amounts of data have been collected and managed in relational databases by industrial, commercial or
public organizations. The growth in size and number of existing databases has far exceeded the human
abilities to analyze such data with available technologies. This has created a need and a challenge for
extracting knowledge from these databases. Many large corporations are investing into the data mining
tools, and in many cases, they are coupled with the data warehousing technology to become an integrated
system to support management decision and business planning [27]. Within the research community, this
problem of data mining has been touted as one of the many great challenges [10, 12, 25]. Researches have
been performed with different approaches to tackle this problem [2, 3, 4, 8, 9, 15, 16, 18, 19, 21, 28].
The research of the authors were supported in part by RGC (the Hong Kong Research Grants Council) grant HKU
286/95E. Research of the fourth author was supported in part by grants from the Natural Sciences and Engineering Research
Council of Canada the Centre for Systems Science of Simon Fraser University.
In previous studies [16], a Basic Attribute-Oriented Induction (Basic AO Induction) method has been developed for knowledge discovery in relational databases. AO induction can discover many different types of rules. A representative type is the characteristic rule. For example, from a database of computer science students, it can discover a characteristic rule such as "if x is a computer science student, then there is a 45% chance that he is a foreign student and his GPA is excellent". Note that the concepts of "foreign student" and "excellent GPA" do not exist in the database. Instead, only lower level information such as "birthplace" and GPA value are stored there. An important feature of AO induction is that it can generalize the values of the tuples in a relation to higher level concepts, and subsequently merge those tuples that have become identical into generalized tuples. An important observation is that each one of these resulting generalized tuples certainly reflects some common characteristics of the group of tuples in the original relation from which it is generated.
Basic AO induction has been implemented in the mining system DBMiner (whose prototype was called DBLearn) [16, 17, 19]. Besides AO induction, DBMiner has also incorporated many interesting mining techniques, including mining multiple-level knowledge and meta-rule guided mining, and can discover many different types of rules, patterns, trends and deviations [17]. AO induction can not only be applied to relational databases, but can also be performed on unconventional databases such as spatial, object-oriented, and deductive databases [19].
The engine for concept generalization in basic AO induction is concept ascension. It relies on a concept tree or lattice to represent the background knowledge for generalization [22]. However, concept trees and lattices have their limitations in terms of background knowledge representation. In order to further enhance the capability of AO induction, there is a need to replace them by a more general concept hierarchy. In this paper, the work has been focused on the development of a rule-based concept hierarchy to support a more general concept generalization. Rule-based concept generalization was first studied in [7]. In general, concepts in a concept tree or lattice are generalized to higher level concepts unconditionally. For example, in a concept tree defined for the attribute GPA of a student database, a 3.6 GPA (in a 4-point system) can be generalized to a higher level concept, perhaps to the concept excellent. This generalization depends only on the GPA value and not on any other information (or attribute) of a student. However, some institutions may want to apply different rules to different types of students. The same 3.6 GPA may only deserve a good if the student is a graduate, and it may be excellent if the student is an undergraduate. This suggests that a more general concept generalization scheme should be conditional or rule-based.
In a Rule-Based Concept Graph, a concept can be generalized to more than one higher level concept, and rules are used to determine which generalization path should be taken. To support AO induction on a rule-based concept graph, we have extended the basic AO induction to Rule-Based Attribute-Oriented Induction. However, if the technique of induction on a concept tree is applied directly to a concept graph, a problem of induction anomaly occurs. In [7], a "backtracking" technique was proposed to solve this problem. It is designed based on the generalized relation, which was proposed originally in the basic AO induction. The backtracking algorithm has an O(n log n) complexity, where n is the number of tuples in the induction domain. In [23], a more efficient technique for induction has been proposed. In this paper, we apply the technique to the rule-based concept graph, and propose an algorithm for rule-based induction whose complexity is improved to O(n). The algorithm avoids the use of the data structure of the generalized relation. Instead, it uses a multi-dimensional data cube [20] or a generalized-attribute tree depending on the sparseness of the data distribution. Extensive performance studies on the algorithm have been done and the results show that it is more efficient than the backtracking algorithm based on the generalized relation.
Figure 1: Concept tree table entries for a university student database
The paper is organized as follows. The primitives of knowledge discovery and the principle of basic AO induction are briefly reviewed in Section 2. The general notions of rule-based concept generalization and rule-based concept hierarchy are discussed in Section 3. The model of rule-based AO induction is defined in Section 4. A new technique of using a path relation and a data cube instead of the generalized relation to facilitate rule-based AO induction is discussed in Section 5. An efficient rule-based AO induction algorithm is presented in Section 6. Section 7 presents the performance studies. Discussions and conclusions are in Sections 8 and 9.
Basic Attribute-Oriented Induction
The purpose of AO induction is to discover rules from relations. The primary technique used is concept
generalization. In a database such as a university student relation with the schema
the values of attributes like Status, Age, Birthplace, GPA can be generalized according to a concept
hierarchy. For example, GPA between 0.0 and 1.99 can be generalized to "poor", those between 2.0 and
2.99 to "average", other values to "good" or "excellent". After this process, many records would have the
same values except on those un-generalizable attributes such as Name. By merging the records which have
the same generalized values, important characteristics of the data can be captured in the generalized tuples
and rules can be generated from them. Basic AO induction is proposed along this approach. Task-relevant
data, background knowledge, and expected representation of learning results are the three primitives that
specify a learning task in basic AO induction [16, 19].
2.1 Primitives in AO Induction
The first primitive is the task-relevant data. A database usually stores a large amount of data, of which
only a portion may be relevant to a specific induction task. A query that specifies an induction task can
be used to collect the task-relevant set of data from a database as the domain of an induction. In AO
induction, the retrieved task-relevant tuples are stored in a table called initial relation.
The second primitive is the background knowledge. In AO induction, background knowledge is necessary
to support generalization, and it is represented by concept hierarchies. Concept hierarchies could be
supplied by domain experts. As have been pointed out, generalization is the key engine in induction.
Figure 2: A concept tree for Status
Therefore, the structure and representation power of the concept hierarchies is an important issue in AO induction.
Concept hierarchies in a particular domain are often organized as a multi-level taxonomy in which concepts are partially ordered according to a general-to-specific ordering. The most general concept is the null description (described by a reserved word "ANY"), and the most specific concepts correspond to the low level data values in the database [22]. The simplest hierarchy is the concept tree, in which a node can only be generalized to one higher level node at each step.
Example 1 Consider a typical university student database with a schema
Part of the corresponding concept tree table is shown in Figure 5, where A → B indicates that B is a generalization of the members of A. An example of a concept tree on the attribute Status is shown in Figure 6.
Student records in the university relation can be generalized following the paths in the above concept
trees. For example, the status value of all students can be generalized to "undergraduate" or "graduate".
Note that the generalization should be performed iteratively from lower levels to higher levels, together with the merging of tuples which have the same generalized values. The generalization should stop once the generalized tuples resulting from the merging have reached a reasonable level in the concept trees. Otherwise, the resulting tuples could be over-generalized and the rules generated subsequently would have little practical use.
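For concreteness, the following sketch (in Python; it is an illustration, not part of the original system) encodes a concept tree as a child-to-parent map and climbs it one level at a time. The Status values are taken from Figure 2, while the function and variable names are ours.

# Minimal sketch of concept-tree climbing (illustrative only).
STATUS_TREE = {
    "freshman": "undergraduate", "sophomore": "undergraduate",
    "junior": "undergraduate", "senior": "undergraduate",
    "M.S.": "graduate", "M.A.": "graduate", "Ph.D.": "graduate",
    "undergraduate": "ANY", "graduate": "ANY",
}

def generalize(value, tree, levels=1):
    """Climb the concept tree `levels` steps, stopping at the root "ANY"."""
    for _ in range(levels):
        if value == "ANY":
            break
        value = tree[value]
    return value

assert generalize("junior", STATUS_TREE) == "undergraduate"
assert generalize("Ph.D.", STATUS_TREE, levels=2) == "ANY"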
The third primitive is the representation of learning results. The generalized tuples at the end will be used to generate rules by converting them to logic formulas. This follows from the fact that a tuple in a relation can always be viewed as a logic formula in conjunctive normal form, and a relation can be characterized by a large set of disjunctions of such conjunctive forms [13, 26]. Thus, both the data for learning and the rules discovered can be represented in either relational form (tuples) or first-order predicate calculus. For example, if one of the generalized tuples resulting from the generalization and merging of the computer science student records in the university database is (graduate(Status), NorthAmerican(Birthplace), good(GPA)), then the following rule in predicate calculus can be generated:
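The displayed rule itself is missing from this copy of the text. A plausible reconstruction from the generalized tuple above, written in LaTeX notation, is the following; the class predicate CS_student and the exact form of the implication are our assumptions, not necessarily the authors' wording:

\mathrm{CS\_student}(x) \rightarrow \mathrm{graduate}(\mathrm{Status}(x)) \wedge \mathrm{NorthAmerican}(\mathrm{Birthplace}(x)) \wedge \mathrm{good}(\mathrm{GPA}(x))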
Note that the above rule is only one of the many rules that would be generated, and it is not quantified
yet. We will explain later how quantification of the rules is done in AO induction. Many kinds of rules, such as characteristic rules and discriminant rules, can be discovered by induction processes [15]. A characteristic
rule is an assertion that characterizes a concept satisfied by all or most of the examples in the class
targeted by a learning process. For example, the symptoms of a specific disease can be summarized by
a characteristic rule. A discriminant rule is an assertion which discriminates a concept of the class being
learned from other classes. For example, to distinguish one disease from others, a discriminant rule should
summarize the symptoms that discriminate this disease from others.
2.2 Concept Generalization
The most important mechanisms in AO Induction are concept generalization and rule creation. Generalization
is performed on all the tuples in the initial relation with respect to the concept hierarchy. All values
of an attribute in the relation are generalized to the same higher level. A selected set of attributes of the
tuples in the relation are generalized to possibly different higher levels synchronously and redundant tuples
are merged to become generalized tuples. The resulting relation containing these generalized tuples is called a generalized relation, and it is smaller than the initial relation. In other words, a generalized relation is
a relation which consists of a set of generalized attributes storing generalized values of the corresponding
attributes in the original relation.
Even though a generalized relation is smaller than the initial relation, it may still contain too many tuples, and it is not practical to convert them to rules. Therefore, some principles are required to guide
the generalization to do further reduction. An attribute in a generalized relation is at a desirable level if it
contains only a small number of distinct values in the relation. A user of the mining system can specify a
small integer as a desirable attribute threshold to control the number of distinct values of an attribute. An
attribute is at the minimum desirable level if it would contain more distinct values than a defined desirable
attribute threshold when generalized to a level lower than the current one [16]. The minimum desirable
level for an attribute can also be specified explicitly by users or experts.
A special generalized relation R' of an original relation R is the prime relation [19] of R if every attribute in R' is at the minimum desirable level. The first step of AO induction is to generalize the tuples in the initial relation to proper concept levels such that the resulting relation becomes the prime relation. The prime relation has a useful characteristic of containing minimal distinct values for each attribute. However, it may still have many tuples, and would not be suitable for rule generation. Therefore, AO induction will generalize and reduce the prime relation further until the final relation can satisfy the user's expectation in terms of rule generation. This generalization can be done repetitively in order to generate rules at different concept levels so that a user can find out the most suitable levels and rules. This is the technique of progressive generalization (roll-up) [19]. If the rules discovered at a level are found to be too general, then re-generalization to some lower levels can be performed; this technique is called progressive specialization [19]. DBMiner has implemented the roll-up and drill-down techniques to support users in exploring different generalization paths until the resulting relation and the rules so created satisfy their expectations.
A discovery system can also quantify a rule generated from a generalized tuple by registering the number of tuples from the initial relation that are generalized to that tuple as a special attribute count in the final relation. The attribute count carries database statistics to higher-level concept rules, and supports pruning scattered data and searching for substantially weighted rules. A set of basic principles
for AO induction related to the above discussion has been proposed in [15, 16].
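As an illustration of the generalize-and-merge step and of the count attribute (a sketch under simplified assumptions, not the DBMiner implementation; all names and example values are ours):

from collections import Counter

def climb(value, parent, steps):
    # Follow child-to-parent links `steps` times; unknown values go to "ANY".
    for _ in range(steps):
        value = parent.get(value, "ANY")
    return value

def generalize_and_merge(tuples, parents, steps):
    """Generalize every attribute of every tuple, then merge identical
    generalized tuples, keeping a count for each (the special attribute count)."""
    counts = Counter()
    for t in tuples:
        g = tuple(climb(v, parents[a], steps[a]) for a, v in sorted(t.items()))
        counts[g] += 1
    return counts

status_parent = {"freshman": "undergraduate", "senior": "undergraduate",
                 "M.A.": "graduate", "undergraduate": "ANY", "graduate": "ANY"}
rows = [{"Status": "freshman"}, {"Status": "senior"}, {"Status": "M.A."}]
print(generalize_and_merge(rows, {"Status": status_parent}, {"Status": 1}))
# Counter({('undergraduate',): 2, ('graduate',): 1})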
3 Rule-Based Concept Generalization
In Basic AO Induction, the key component to facilitate concept generalization is the concept tree. Its
generalization is unconditional and has limited generality. From this point on, we will focus our investigation
on the rule-based concept generalization which is a more general scheme. Concepts are partially ordered in
a concept hierarchy by levels from specific (lower level) to general (higher level). Generalization is achieved
by ascending concepts along the paths of a concept hierarchy. In general, a concept can ascend via more
than one path. A generalization rule can be assigned to a path in a concept hierarchy to determine whether
a concept can be generalized along that path.
For example, the generalization of GPA in Figure 5 could depend not only on the GPA of a student but also on his or her status. A GPA could be a good GPA for an undergrad but a poor one for a graduate. For
example, the rules to categorize a GPA in (2.0 - 2.49) may be defined by the following two conditional
generalization rules:
If a student's GPA is in the range (2.0 - 2.49) and he is an undergrad, then it is an average GPA.
If a student's GPA is in the range (2.0 - 2.49) and he is a grad, then it is a poor GPA.
Concept hierarchies whose paths have associated generalization rules are called rule-based concept hier-
archies. Concept hierarchies can be balanced or unbalanced. An unbalanced hierarchy can always be converted to a balanced one. For ease of discussion, we will assume all hierarchies are balanced. Also, as with concept trees, we will assume that the concepts in a hierarchy are partially ordered into levels such that lower-level concepts are generalized to the next higher-level concepts and the concepts converge to the null concept "ANY" at the top level (root). (The two notions of "concept" in the concept hierarchy and "generalized attribute value" are equivalent; depending on the context, we will use them interchangeably.) In the following, three types of concept generalization, their corresponding generalization rules, and concept hierarchies are classified and discussed.
3.1 Unconditional Concept Generalization
This is the simplest type of concept generalization. The rules associated with these hierarchies are the
unconditional IS-A type rules. A concept is generalized to a higher level concept because of the subsumption
relationship indicated in the concept hierarchy. This type of hierarchy supports concept-climbing generalization. The most popular unconditional concept generalizations are performed on concept trees and lattices. The hierarchies represented in both Figure 5 and Figure 6 belong to this type.
3.2 Deductive Rule Generalization
In this type of generalization, the rule associated with a generalization path is a deduction rule. For
example, the deduction rule: if a student's GPA is in the range (2.0 - 2.49) and he is a grad, then it is a
poor GPA, can be associated with the path from the concept GPA ∈ (2.0 - 2.49) to the concept poor in
the GPA hierarchy.
This type of rule is conditional and can only be applied to generalize a concept if the corresponding
condition can be satisfied. A deduction generalization rule has the following form:
For a tuple x, concept (attribute value) A can be generalized to concept C if condition B can be
satisfied by x.
The condition B(x) can be a simple predicate or a general logic formula. In the simplest case, it can
be a predicate involving a single attribute. A concept hierarchy associated with deduction generalization
rules is called deduction-rule-based concept graph. This structure is suitable for induction in a database
that supports deduction.
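A deduction generalization rule can be encoded directly as a pair of a guard predicate over the whole tuple and a target concept. The following sketch (our own encoding and example values, not the paper's notation) applies the two GPA rules stated above:

from dataclasses import dataclass
from typing import Callable

@dataclass
class DeductionRule:
    source: str                         # concept (attribute value) A to be generalized
    target: str                         # higher-level concept C
    condition: Callable[[dict], bool]   # B(x), evaluated on the tuple x

rules = [
    DeductionRule("2.0-2.49", "average",
                  lambda x: x["Status"] in ("freshman", "sophomore", "junior", "senior")),
    DeductionRule("2.0-2.49", "poor",
                  lambda x: x["Status"] in ("M.S.", "M.A.", "Ph.D.")),
]

def apply_rules(value, tuple_x, rules):
    """Return the generalized concept for `value`, or None if no rule fires."""
    for r in rules:
        if r.source == value and r.condition(tuple_x):
            return r.target
    return None

print(apply_rules("2.0-2.49", {"Status": "senior", "GPA": "2.0-2.49"}, rules))  # average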
3.3 Computational Rule Generalization
The rules for this type of generalization are computational rules. Each rule is represented by a condition
which is value-based and can be evaluated against an attribute or a tuple or the database by performing
some computation. The truth value of the condition would then determine whether a concept can be
generalized via the path.
For example, in the concept hierarchy for a spatial database, there may be three generalization paths from regional spatial data to the concepts of small region, medium-size region, and large region. Conditions such as "region size ≤ SMALL REGION SIZE", "SMALL REGION SIZE < region size < LARGE REGION SIZE", and "region size ≥ LARGE REGION SIZE" can be assigned to these paths, respectively. The conditions depend on the computation of the value of region size from the regional spatial data. In general, computation rules may involve sophisticated algorithms or methods which are difficult to represent as deduction rules.
The hierarchy with associated computation rules is called a computation-based concept graph. This type of hierarchy is suitable for induction in databases that involve a lot of numerical data, e.g., spatial databases and statistical databases.
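A computational rule can be sketched the same way, except that the guard is a numeric test on a computed measure. The thresholds and helper below are assumptions for illustration only:

SMALL_REGION_SIZE, LARGE_REGION_SIZE = 10.0, 100.0   # assumed units and thresholds

def region_concept(region_size):
    """Pick the generalization path based on the computed region size."""
    if region_size <= SMALL_REGION_SIZE:
        return "small region"
    if region_size < LARGE_REGION_SIZE:
        return "medium size region"
    return "large region"

print(region_concept(42.0))  # medium size region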
3.4 Hybrid Rule-Based Concept Generalization
A hierarchy can have paths associated with all the above three different types of rules. This type of hierarchy
is called hybrid rule-based concept graph. It has a powerful representation capability and is suitable for many
kinds of applications. For example, in many spatial databases, some generalization paths are computation-bound and are controlled by computation rules, while some symbolic attributes can be generalized by deduction rules, and many simple attributes can be generalized by unconditional IS-A rules.
In the scope of database induction, the same technique can be used on all the different types of rule-based
hierarchies. Therefore, in the rest of this paper, we will use deduction-rule-based concept graph as
a typical concept hierarchy.
4 Rule-Based Attribute-Oriented Induction
In order to discuss the technique of AO induction in the rule-based case, we first define a general model for
rule-based AO induction. A rule-based AO induction system is defined by five components (DB, CH, DS,
KR, t a ). DB is the underlying extensional database. CH is a set of rule-based concept hierarchies associated
with the attributes in DB. We assume these hierarchies are deduction-rule-based concept graphs. DS is
a deduction system supporting the concept generalization. The generalization rules in CH, together with
some other deduction rules form the core of DS. In the simple case, DS may consist of only the rules in CH.
KR is a knowledge representation scheme for the learned result. It can be any one of the popular schemes: predicate calculus, frames, semantic nets, production rules, etc. Following the approach in basic AO
induction, we assume KR in the induction system is first-order predicate calculus. The last component t a
is the desirable attribute threshold defined in the basic induction. Note that all these five components are
input to the rule-based deduction system. Output of the system is the rules discovered from the database.
The generalization and rule creation processes in rule-based induction are fundamentally the same as those in the basic induction. However, an attribute value could be generalized to different higher-level concepts depending on the concept graph. As a consequence, the techniques in basic induction have to be modified to solve the induction problem in this case. We will describe in this section the framework of the rule-based induction, and explain the induction anomaly problem which occurs in this case.
As in the basic induction, the first step of rule-based induction is to generalize and reduce the initial
relation to the prime relation. The minimum desirable level can be found in a scan of the initial relation.
Once the minimum desirable levels have been determined, the initial relation can be generalized to these
levels in a second scan, and the result would be the prime relation. This step of inducing the prime relation
is basically the same as that of the basic induction, except that the induction is performed on a general
concept graph rather than a restricted concept tree.
Once the prime relation is found, selected attributes are to be generalized further and the generalization-
comparison-merge process will be repeated to perform roll-up or drill-down. (Some selected attributes can
also be removed before the generalization starts.) In the basic induction, attributes in a prime relation
can always be generalized further by concept tree ascension because the generalization is based only on
the current generalized attribute values. However, this may not be the case in the rule-based induction
because the application of a rule on the prime relation may require additional information not available in
the prime relation. This phenomenon is called the induction anomaly. The following are some cases that cause this anomaly to occur.
(1) A rule may depend on an attribute which has been removed.
(2) A rule may depend on an attribute whose concept level in the prime relation has been generalized
too high to match the condition of the rule.
(3) A rule may depend on a condition which can only be evaluated against the initial relation, e.g., the
number-of-tuples in the relation.
In the following, an example will be used to illustrate the rule-based induction on a concept graph and
the associated induction anomaly.
Example 2 This example is based on the induction in Example 1. We will enhance the concept tree there
to a rule-based concept graph to explain the rule-based induction.
The database DB is the same student database in Example 1, and the mining task is the same, which is to discover the characteristic rules for CS students. The modification is in the concept hierarchy CH. The unconditional rules for the attribute GPA in Figure 5 are replaced by the set of deduction rules in Figure 7, and the concept tree for GPA is enhanced to the rule-based concept graph in Figure 8, which has been labeled with the corresponding deduction rules from Figure 7. For example, the GPA in the range (3.5 - 3.79) would no longer be generalized to "excellent" only, as would be the case if the concept tree in Figure 5 were being followed. Instead, it will be checked against the two rules R 6 and R 7 in Figure 7. If it is the GPA of a graduate student, then it will be generalized to "good"; otherwise, it must be an undergraduate, and will be generalized to "excellent".

Figure 3: Conditional Generalization Rules for GPA

Figure 4: A rule-based concept graph for GPA
Suppose the tuples in the initial relation in Example 1 have been generalized according to the rule-based concept graph in Figure 8. After comparison and merging, the resulting prime relation is the one shown in Table 1. (The comparison and merging technique used here follows that proposed in [15].)

Table 1: A Prime Relation from the Rule-Based Generalization (attributes: Status, Sex, Age, GPA, Count).

In performing further generalization on Table 1, some rules which reference no information other than the generalized attribute values, such as R 9 , R 12 , and R 13 in Figure 7, can be applied directly to the prime relation to further generalize the GPA attribute. For example, the GPA value "good" in the second tuple can be generalized to "strong" according to R 12 , and the value "poor" in the fifth row to "weak" according to R 9 . However, for the GPA value "average" in the first tuple, it cannot be decided from the information in the prime relation which of the two rules R 10 and R 11 should be applied. If the student is either a senior or a graduate, then R 10 should be used to generalize the GPA to "weak"; otherwise, it should be generalized to "strong". However, the status information (freshman/sophomore/junior/senior) has been lost during the previous generalization and is not available in the prime relation. In fact, the 40
student tuples in the initial relation which are generalized and merged into the first tuple may have any of the different status values. Therefore, if further generalization is performed, its value "average" will be generalized
to both "weak" and "strong", and the tuple will be split into two generalized tuples. 2
It is clear from Example 2 that further generalization from a prime relation may be problematic for
rule-based induction. Therefore, the generalization technique has to be modified to suit the rule-based
case. In Section 5, we will describe a new method of using path relation instead of generalized relation to
solve the induction anomaly problem.
5 Path Relation
Since any generalization could introduce the induction anomaly into the generalized relation, any further generalization in the rule-based case has to be started again from the initial relation, which has the full information. However, re-applying the deduction rules all over again on the initial relation is costly and wasteful. All the deduction that has been done previously in the generation of the prime relation is wasted and has to be redone. In order to solve this problem, we propose to use a path relation to capture the generalization result from one application of the rules on the initial relation such that the result can be reused in all subsequent generalizations.
An attribute value may be generalized to the root via multiple possible paths on a concept graph.
However, for the attribute value of a given tuple in the initial relation, it can only be generalized via a
unique path to the root. Each one of the multiple paths on which an attribute value can be generalized is
a generalization path. Since the concepts on the graph are partially ordered, there are only a finite number
of distinct generalization paths from the bottom level. In general, the number of generalization paths of
an attribute should be small. Before an induction starts, a preprocessing step is used to identify and label the generalization paths of all the attributes. For example, the generalization paths of the concept graph of GPA in Figure 8 are identified and labeled in Figure 9.
For every attribute value of a tuple in the initial relation, its generalization path can be identified by
generalizing the tuple to the root. Therefore, each tuple in the initial relation is associated with a tuple of
generalization paths. In a scan of the initial relation, every tuple can be transformed into a tuple of ids of the
associated generalization paths. The result of the transformation is the path relation of the initial relation.
Figure 5: Generalization Paths for GPA

It is important to observe that the path relation has captured completely the generalization result of the initial relation at all levels. In other words, given the generalization paths of a tuple, the generalized values
of a tuple can be determined easily from the concept graph without redoing any deduction. Furthermore,
the set of generalization paths on which some attribute values in the initial relation are generalized to the
root can be determined during the generation of the path relation. By checking the number of distinct
attribute values (concepts) on each level in the concept graph through which some paths found above have
traversed, the minimum desirable level can be found. It can be concluded at this point that the path
relation is an effective structure for capturing the generalization result in the rule-based case. By using it,
the repetitive generalization required by roll-up and drill-down can be done in an efficient way without the
problem introduced by the induction anomaly.
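The construction of the path relation can be sketched as follows (a simplified, single-attribute illustration under our own rule encoding and example values; it is not the DBMiner code):

def generalization_path(value, tuple_x, rules_by_level):
    """Generalize one attribute value of tuple_x to the root and return the
    sequence of concepts visited; this sequence identifies the generalization path."""
    path = [value]
    for rules in rules_by_level:                 # one rule set per climbing step
        for source, target, cond in rules:
            if path[-1] == source and cond(tuple_x):
                path.append(target)
                break
    return tuple(path)

def build_path_relation(initial_relation, attr, rules_by_level, path_ids):
    """One scan of the initial relation: map each tuple to the id of its
    generalization path, assigning new ids on first sight."""
    rel = []
    for t in initial_relation:
        p = generalization_path(t[attr], t, rules_by_level)
        rel.append(path_ids.setdefault(p, len(path_ids) + 1))
    return rel

rules_by_level = [
    [("2.0-2.49", "average", lambda x: x["Status"] == "undergrad"),
     ("2.0-2.49", "poor",    lambda x: x["Status"] == "grad")],
    [("average", "strong", lambda x: x["Status"] == "undergrad"),
     ("average", "weak",   lambda x: x["Status"] == "grad"),
     ("poor",    "weak",   lambda x: True)],
    [("strong", "ANY", lambda x: True), ("weak", "ANY", lambda x: True)],
]
path_ids = {}
rows = [{"Status": "undergrad", "GPA": "2.0-2.49"}, {"Status": "grad", "GPA": "2.0-2.49"}]
print(build_path_relation(rows, "GPA", rules_by_level, path_ids), path_ids)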
Name        Status      Sex   Age   GPA
J. Wong     freshman    M     18    3.2
C. Chan     freshman    F     19    2.8
D. Zhang    senior      M     21    2.7
A. Deng     senior      F     21    3.3
C. Ma       M.A.        M     28    2.3
A. Chan     sophomore   M     19    2.4

Table 2: Initial Relation from the student database.
Example 3 Assume that the initial relation of the induction in Example 2 is the one in Table 2. Its
path relation can be generated in one scan by using the generalization rules in Figure 7 and the path ids
specified in Figure 9. (Figure 7 only has the rules for GPA; the rules for the other attributes are simple.) Table 3 is the path relation of Table 2. 2
Table 3: Path Relation from the Initial Relation (columns: Status path id, Sex path id, Age path id, GPA path id).

Figure 6: Generalization Dependency Graph

Another issue in rule-based generalization is the cyclic dependency. A generalization rule may introduce a dependency between attributes. The generalization of an attribute value may depend on that of another attribute. If the dependency is cyclic, it could introduce deadlock in the generalization process. In order
to prevent cyclic dependency, rule-based induction creates a generalization dependency graph from the
generalization rules and prevents deadlock by ensuring the graph is acyclic. The nodes in the generalization
dependency graph are the levels of each attribute in the concept graph. (In the rest of the paper, we have adopted the convention of numbering the top level (root) of a concept graph as level 0, and increasing the numbering from top to bottom. In other words, the bottom level has the highest level number.) We use L ij to denote the node associated with level j of an attribute A i . In the dependency graph, there is an edge from every lower-level node L ik to the next higher-level node L i(k-1) for each attribute A i . Also, if the generalization of an attribute A i from L ik to level L i(k-1) depends on another attribute A j at level L jm , (i ≠ j), then there is an edge from L jm to L i(k-1) . In Figure 10, we have the generalization dependency graph of the generalization rules in Example 2. For example, the edges from L 22 to L 21 , from L 12 to L 21 , and from L 11 to L 21 are introduced by the rule R 10 . If the dependency graph is acyclic, a generalization order of the concepts can be derived from a partial ordering of the nodes. Following this order, every attribute value of a tuple can be generalized to any level in the concept graphs. For example, a generalization order of the graph in Figure 10 is (L 12 , ...). Moreover, any tuple in the initial relation in Table 2 can be generalized to the root following this order.
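Deriving a generalization order from the dependency graph amounts to a topological sort; a minimal sketch follows (the edges below are assumptions loosely modeled on the dependencies discussed above, not a transcription of Figure 6):

from graphlib import TopologicalSorter   # Python 3.9+

# Nodes are (attribute, level) pairs; the value set lists the nodes that must
# be generalized before the key node can be computed.
deps = {
    ("GPA", 1):    {("GPA", 2), ("Status", 2), ("Status", 1)},
    ("GPA", 2):    {("GPA", 3)},
    ("Status", 1): {("Status", 2)},
}
order = list(TopologicalSorter(deps).static_order())
print(order)   # a valid generalization order; a CycleError signals cyclic dependency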
5.1 Data Structure for Generalization
Figure 7: A multi-dimensional data cube (axes: Status, Age, GPA)
In the prototype of the DBMiner system (called DBLearn), the generalized relation is used as the data structure to store the intermediate results. Both generalized tuples and their associated aggregate values, such as "count", are stored in relation tables. However, it has been found that the generalized relation is not the most efficient structure to support insertion of new generalized tuples and comparison of identical generalized tuples. Furthermore, the prime relation may not have enough information to support further generalization in the rule-based case, which makes it an inappropriate choice for storing intermediate results. To facilitate rule-based induction, we propose to use either a multi-dimensional data cube or a generalized-attribute tree.
A data cube is a multi-dimensional array, as shown in Figure 11, in which each dimension represents a
generalized attribute and each cell stores the value of some aggregate attributes, such as "count" or "sum".
For example, the data cube in Figure 11 can store the generalization result of the initial relation in Table 2
to levels 2,2,1 of the attributes Status, GPA and Age. (Please refer to the generalized attributes of the
levels in Figure 10.) Let v be a vector of desirable levels of a set of attributes, and the initial relation
is required to be generalized to the levels in v. During the generalization, for every tuple p in the initial
relation, the generalized attribute values of p with respect to the levels in v can be derived from its path
ids in the path relation and the concept graphs. Let p g be the tuple of these generalized attribute values.
In order to record the count of p and update the aggregate attribute values, p g is used as an index to a cell
in the data cube. The count and the aggregate attribute values of p are recorded in this cell. For example,
the count of the tuple (J.Wong, freshman, M, 18, 3.2) in the initial relation in Table 2 would be recorded
in the cell whose index is the corresponding generalized tuple (undergraduate, M, ...).
Many works have been published on how to build data cubes [1, 6, 14], in particular on how to compute data cubes storing aggregated values efficiently from raw data in a database. In our case, we only need to use a data cube as a data structure to store the "counts", i.e., the number of tuples that have been generalized into a higher-level tuple. Therefore, the details of how to compute a cube of aggregations from a base cube are not relevant here. For the AO induction algorithm, the cube is practically a multi-dimensional array.
In [17], the data cube has been compared with the generalized relation. It costs less to produce a data
cube, and its performance is better except when the data is extremely sparse. In that case, the data cube
may waste some storage. A more space-efficient option is to use a B-tree-type data structure. We propose to use a B-tree, called the generalized-attribute tree, to store the count and aggregate attribute values. In this approach, the generalized tuple p g will be used as an index to a node in the generalized-attribute tree, and the count and aggregate values will be stored in the corresponding node. According to the experience with the DBMiner system, the data cube is very efficient as long as the percentage of occupied cells is reasonably high. Therefore, in rule-based induction, the data cube is the preferred data structure, and the generalized-attribute tree should be used only when the sparseness is extremely high.
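The cube itself can be as simple as a mapping from generalized tuples to aggregate values; a sparse sketch (our own minimal version, with count as the only aggregate) is:

from collections import defaultdict

class SparseCube:
    def __init__(self):
        self.cells = defaultdict(lambda: {"count": 0})

    def add(self, generalized_tuple, count=1):
        # The generalized tuple p_g is used directly as the index of the cell.
        self.cells[generalized_tuple]["count"] += count

    def nonempty(self):
        """The non-empty cells, i.e., the rows of the prime (or final) relation."""
        return self.cells.items()

cube = SparseCube()
cube.add(("undergrad", "M", "16-25", "average"))
cube.add(("undergrad", "M", "16-25", "average"))
print(list(cube.nonempty()))   # [(('undergrad', 'M', '16-25', 'average'), {'count': 2})]

A dense multi-dimensional array indexed by level positions would play the same role when the occupancy of the cube is high.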
6 An Efficient Rule-Based AO Induction Algorithm
We have shown that in the case of rule-based induction, it is more efficient to capture the generalization
results in the path relation, and to use the data cube to store the intermediate results. In the following,
we present the path relation algorithm for rule-based induction.
Algorithm 1 Path Relation Algorithm for Rule-Based Induction
A task specification is input into a Rule-Based AO Induction System (DB, CH, DS, KR, t a ).
/* the initial relation R, whose attributes are A i (1 ≤ i ≤ n), is retrieved from DB */
Step One: Inducing the prime relation Rpm from R
1: transform R into the path relation R p ;
2: compute the minimum desirable level L i for each A i (1 ≤ i ≤ n);
3: create a data cube C with respect to the levels L i ;
4: scan R p ; for each tuple p ∈ R p , compute the generalized tuple p g with respect to the levels L i , and
update the count and aggregate values in the cell indexed by p g ;
5: convert C into the prime relation Rpm .
Step Two: Perform progressive generalization to create rules
1: select a set of attributes A j and corresponding desirable levels L j for further generalization;
2: create a data cube C for the attributes A j with respect to the levels L j ;
3: scan the path relation R p ; compute the generalized tuple p g for each tuple p ∈ R p with respect to
the desirable levels L j , and update the count and aggregate values in the cell indexed by p g ;
4: convert all non-empty cells of C into rules.
Step Two will be repeated until the rules generated satisfy the discovery goal. 2
Explanation of the algorithm: In the first step, the path relation R p is first created from the initial
relation. After that, the minimum desirable levels L i , (1 - i - n), are computed by scanning R p once, and
a data cube C is created for the generalized attributes at levels L i . In step 4, R p is scanned again and every tuple in R p is generalized to the levels L i . For every p in R p , its generalized tuple p g is used as an index to locate a cell in C, in which the count and other aggregate values are updated. (The meaning of "discovery goal" follows that defined in [15], and will be discussed in the following paragraph.) At the end of the first step,
the non-empty cells in C are converted to tuples of the prime relation Rpm .
The second step is the progressive generalization; it is repeated until the rules generated satisfy the discovery goal. There are two ways to define the discovery goal. In [15], a threshold was defined to control the number of rules generated. Once the number of rules generated is reduced below the given threshold, the goal has been reached. Another way is to allow the generalization to go through an interactive and iterative process until the user is satisfied with the rules generated. In other words, no pre-defined threshold would be given, but the goal is reached when the user is comfortable with the rules generated.
This is compatible with the roll-up and drill-down approach used in many data mining systems.
The details of step two of the algorithm are the following. At the beginning of each iteration, a set of attributes A j and levels L j are selected for generalization, and a data cube C is created with respect to the levels L j . Following that, the tuples in R p are generalized to the levels L j in the same way as in the first step. After all corresponding cells in the data cube have been updated, the non-empty cells are converted into rules. This can be repeated until the number of rules generated reaches a pre-defined threshold or the user is satisfied with the rules generated.
In the above algorithm, if the rules discovered are too general and at a level which is too high, then the
generalization can be redone to a lower level. Hence, it can perform not only progressive generalization,
but also progressive specialization.
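Step One of Algorithm 1 can be summarized by the following sketch (a deliberately simplified version with a single rule-based attribute, counts as the only aggregate, and the minimum desirable levels supplied directly rather than derived from the attribute threshold; the helper signatures and example values are assumptions):

from collections import Counter

def step_one(initial_relation, path_of, concept_at_level, levels):
    # 1: transform R into the path relation R_p (one scan)
    path_relation = [tuple(path_of[a](t) for a in sorted(levels)) for t in initial_relation]
    # 3-4: a sparse "cube": a Counter keyed by generalized tuples
    cube = Counter()
    for path_tuple in path_relation:
        g = tuple(concept_at_level(a, pid, levels[a])
                  for a, pid in zip(sorted(levels), path_tuple))
        cube[g] += 1
    # 5: the non-empty cells form the prime relation
    return path_relation, list(cube.items())

path_of = {"GPA": lambda t: 7 if t["GPA"] >= 3.0 else 6}                      # assumed path ids
concept_at_level = lambda a, pid, lvl: {(7, 2): "good", (6, 2): "average"}[(pid, lvl)]
rows = [{"GPA": 3.2}, {"GPA": 2.4}, {"GPA": 3.6}]
print(step_one(rows, path_of, concept_at_level, {"GPA": 2}))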
Example 4 We extend Example 2 here to provide a complete walk-through of Algorithm 1. The task is to discover the characteristic rules from the computer science students in a university database. The initial relation has been extracted from the database, and it is presented in Table 2. In the following, we will describe the execution of Algorithm 1 on Table 2 in detail.
The inputs are the same as those described in Example 2. The database DB is the database of computer science students. The concept hierarchy CH is the one described in Figure 7. The initial relation R is the one in Table 2.
Step One: Inducing the prime relation Rpm from R
1: Scan R and generalize each tuple in R to the root with respect to the concept graph (Figure 7) to identify the associated path ids. For example, the 2.8 GPA of the second student in Table 2 can be generalized to "average" by R 4 , and then to "strong" by R 11 (Figure 7). (Note that R 11 is applied instead of R 10 , because the student is a "freshman".) Therefore, the path for the GPA attribute associated with this tuple is Path 6 (Figure 9), and the associated path id tuple is the second one in Table 3. Following this mechanism, every tuple in R is transformed into a tuple of path ids, and R is transformed into the path relation R p in Table 3.
2: Compute the minimum desirable levels for each attribute in Table 2 by checking the number of
concepts at each level through which some generalization paths identified in the previous step
have traversed.
3: Create a data cube C with respect to the minimum desirable levels found.
4: Scan the path relation R p (Table 3). For each tuple p ∈ R p , by using the corresponding path ids, find the generalized values p g of p on the concept graphs with respect to the minimum levels. For example, assume the minimum level for GPA is found to be level 2; since the path id for GPA of the first tuple in R p (Table 3) is "7", it can be identified from Path 7 (Figure 9) that the generalized value at level 2 is "good". Once p g is found, update the count and aggregate values in the cell indexed by p g . For example, the first tuple of Table 3 is generalized to (undergrad, average), and the count in the corresponding cell is updated.
5: For every nonempty cell in C, a corresponding tuple can be created in a generalized relation, and the result is the prime relation Rpm in Table 1. For example, the cell in C indexed by (undergrad, M, 16-25, average) has a count equal to 40, and it is converted to the first tuple in Table 1.
Step Two: Perform progressive generalization to create rules
1: In order to perform further generalization on the prime relation Rpm (Table 1), the attributes Sex
and Birthplace are removed and GPA is generalized one level higher from level 2 to level 1.
2: A data cube C is created for the remaining attributes Status, Age and GPA with respect to the
new levels. (In fact, only GPA is moved one level higher.)
3: Scan the path relation R p to compute the generalized tuple p g for each tuple p 2 R p with respect
to the new levels, and update the count and aggregate values in the cell.
4: Convert all non-empty cells of C into a generalized relation, and the result is the final relation in Table 4. If the final relation satisfies the user's expectation, it is then converted to the corresponding rules. 2

Table 4: A Final Relation from the Rule-Based Generalization (attributes: Status, Age, GPA, Count).

Let us analyze the complexity of the path relation algorithm. The cost of the algorithm can be
decomposed into the cost of induction and that of deduction. The deduction portion is the one-time cost of generalizing the attribute values to the root in building the path relation. The induction portion covers the cost spent on inducing the prime and the final relations. The induction portion of the algorithm is very efficient, as will be discussed in the following theorem. However, the cost of the deduction portion will depend on the complexity of the rules and the efficiency of the deduction system DS. A general deductive database system may consist of complex rules involving multiple levels of deduction, recursion, negation, aggregation, etc., and thus an efficient algorithm to evaluate such rules may not exist [26]. However, the deduction rules in the algorithm are the conditional rules associated with a concept graph, which, in most cases, are very simple conditional rules. Therefore, it is practical to assume that each generalization in the deduction process is bounded by a constant in the analysis of the algorithm. When the concept graphs
involve more complex deduction rules, the complexity of the algorithm will depend on the complexity of
the deduction system.
The following theorem shows the complexity of the path relation algorithm under the assumption of
the bounded cost of the deduction processes.
Theorem 1 If the cost of generalizing an attribute to any level is bounded, the complexity of the path
relation algorithm for rule-based induction is O(n), where n is the number of tuples in the initial data
relation.
Proof. In the first step, the initial relation and the path relation are scanned once in steps 1 and 4. The
time to access a cell in the data cube is constant, therefore, the complexity of this step is O(n).
In the subsequent progressive generalization, assume that the number of rounds of generalization is k,
which is much smaller than n. In each round, the path relation will be scanned only once. Therefore, the complexity is bounded by n × k. Adding the costs of the two steps together, the complexity of the entire
induction process is O(n). 2
7 Performance Study
Our analysis in Section 6 has shown that the complexity of the path relation algorithm is O(n), which is as good as that of the algorithms proposed for the more restricted non-rule-based case. Moreover, the
path relation algorithm proposed here is more efficient than a previously proposed backtracking algorithm
[7] which has a complexity of O(n log n). To confirm this analysis, an experiment has been conducted to
compare the performance between the path relation algorithm and the backtracking algorithm.
There are two main differences between the two algorithms. (1) The generalization in the backtracking algorithm uses the generalized relation as the data structure. (2) All further generalization after the prime relation has been generated is based on the information in the prime relation. As has been explained, the prime relation will introduce the induction anomaly in rule-based induction. Because of that, the generalized tuples in the prime relation have to be backtracked to the initial relation and split according to the multiple possible generalization paths in the further generalization. The backtracking and splitting have to be performed in every round of progressive generalization. This impacts the performance of the backtracking algorithm when compared with the path relation algorithm. The path relation algorithm is
more efficient because the path relation has captured all the necessary induction information in the path
ids and its tuples can be generalized to any level in the rule-based case.
For comparison purposes, both algorithms were implemented and executed on a synthesized student database similar to the one in Example 1. The records in the database have the following attributes: {Name, Status, Sex, Age, GPA}. The records are generated such that the values of each attribute are random within the range of possible values and satisfy some conditions. The following conditions are observed so that the data will contain some interesting patterns rather than being completely random.
1. Graduate students are at least 22 years old.
2. Ph.D. students are at least 25 years old.
3. Graduate students' GPAs are at least 3.00.
For each attribute, there is a corresponding rule-based concept graph, the same as that in Example 1. In order to compare the two algorithms, we assume the attributes Status, Sex, Age, and GPA are generalized to levels 1, 1, 1, 3, respectively, in the prime relation (the levels are those indicated in Figure 10). Following that, GPA is further generalized to level 2 and then level 1. Hence the generalization order is:
Number of records   No. of tuples in final relation   Path relation Alg.   Backtracking Alg. (case 1)   Backtracking Alg. (case 2)
10000               6                                 192                  4028                         24033
10000               12                                191                  4028                         24033
10000
1000                6                                 21                   395                          2396
1000                12                                20                   395                          2396
1000

Figure 8: Number of pages read
Since the cost of the algorithms is dominated by the scanning of the database, the metric used in the comparison is the number of pages read and written.
For the path relation algorithm, the initial relation is first transformed into the path relation, and the algorithm uses the path relation to generalize the records. For each generalization step, we use a data cube to store the counts and aggregate values of the generalized tuples. The number of pages written is much smaller than the number of pages read, as the path relation contains only the path ids of each attribute and is much smaller than the initial relation.
For the backtracking algorithm, the initial relation is used to form the prime relation and a virtual
attribute is added to the initial relation to record the linkages between the tuples in the initial and prime
relations. (Please see [7] for details). In the generalization of the attribute GPA from level 3 to level
2, the induction anomaly occurs. The initial relation with the virtual attribute is read, and the selected attribute GPA is generalized to form the enhanced-prime relation by merging the tuples. The enhanced-prime relation has the virtual attribute values to support splitting of the prime relation for further generalization. Two cases are considered for the implementation of the enhanced-prime relation. In the first case, the enhanced-prime relation is stored in main memory. In the second case, the enhanced-prime relation is stored in secondary storage and the pages read and written are counted accordingly.
The results of the comparison are shown in Figures 12 and 13. It can be seen from the two figures that
the number of pages read and written by the path relation algorithm is much smaller. Moreover, the reduction
ratio between the two algorithms is comparable to the ratio between n and n log n, where n is the number
of records, which matches our analysis in the previous section. This has clearly demonstrated that the
path relation algorithm is more efficient. As an example, the final relation for generalizing 10000 records
with the final relation threshold set to 8 is shown in Figure 14.
Number of records   No. of tuples in final relation   Path relation Alg.   Backtracking Alg. (case 1)   Backtracking Alg. (case 2)
1000                6                                 4                    26                           28
1000                12                                4                    26                           28
1000

Figure 9: Number of pages written
Status          Sex   GPA      Count
Undergraduate   M     Weak     1545
Undergraduate   M     Strong   1326
Undergraduate   F     Weak     1574
Undergraduate   F     Strong   1266
Graduate        M     Strong   2167
Graduate        F     Strong   2122

Figure 10: Final relation
8 Discussions
The path relation plays an important role in the rule-based induction proposed above. If the initial relation is very large, it may not be cost-effective to create another large table as the path relation. A better
alternative would be to encode and store the path ids in the initial relation. Also, it is useful to note that
it is not necessary to encode all the paths in a concept graph together. For example, in Figure 8, for a
given GPA value, only one bit at each level is sufficient to record the generalization path of the GPA value.
Therefore, in order not to create another large relation (path relation), generalization paths can be tagged
to tuples in the initial relation with a minimal amount of space.
On the other hand, if it is feasible to create a path relation, then another interesting observation is that the path relation can be reduced without losing any information. In the generation of the path relation, the path id tuples generated are stored in the same order as that in the initial relation, and redundant path id tuples are not compared and merged. Hence, the sizes of both relations are equal. If the initial relation has m relevant attributes A i (1 ≤ i ≤ m), then the number of distinct tuples in the path relation would be less than ρ = ρ 1 × ρ 2 × ... × ρ m , where ρ i (1 ≤ i ≤ m) is the number of distinct generalization paths in the concept graph of attribute A i . In some cases, ρ would be much smaller than the size of the initial relation, and it would be beneficial to merge the redundant path id tuples in the path relation and record their counts there. By using this compact path relation, the cost of induction will be reduced significantly in the progressive generalization.
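Compacting the path relation is a straightforward group-by on the path-id tuples; a sketch (assuming the path relation is held as a list of path-id tuples) is:

from collections import Counter

def compact(path_relation):
    """Merge identical path-id tuples and record their counts."""
    return Counter(map(tuple, path_relation))

print(compact([(1, 3), (1, 3), (2, 3)]))   # Counter({(1, 3): 2, (2, 3): 1})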
In [17], it is suggested that mining characterization rules should be multiple-level, and that the techniques of progressive generalization and specialization should be used. These techniques rely on a minimally generalized relation in which concepts are generalized to some minimal levels such that it can be used to perform drill-down and roll-up from any level. The compact path relation discussed above has all the necessary information to support any generalization, and its size is smaller than the initial relation. Therefore, it can be a good candidate for a minimally generalized relation for performing rule-based progressive generalization and specialization. In fact, because of the induction anomaly problem, it would not be efficient to use any other generalized relation as a minimally generalized relation.
Many large organizations have started to use data warehouses to store valuable data from different information sources. In some cases, the data could come from 20 to 30 legacy sources. In this type of warehouse, the number of attributes of an object could be on the order of hundreds. Examples are the data warehouses in large international banking corporations. In order to perform rule-based induction on
the data in large warehouses, the path relation algorithm is an excellent choice because many warehousing
systems have already implemented the data cube technique for on-line analytical processing (OLAP) and
decision support.
9 Conclusion
A rule-based attribute-oriented induction technique is proposed and studied in this paper. The technique
extends the previously developed basic attribute-oriented induction method and makes the method more
generally applicable to databases containing both data and rules. Also, the given concept hierarchies
may contain unconditional and conditional rules, which enlarges the application domain and handles more
sophisticated situations.
The rule-based concept graph substantially extends the representation power of the background knowledge in an induction system. It also converts the discovery system into an integrated deduction system and may eventually lead to the integration of a discovery system into an intelligent and cooperative information system [5].
This study has developed a general model of rule-based database induction and extended the basic AO
Induction to the rule-based AO induction which handles induction on a rule-based concept graph. A new
technique of using path relation and data cube to replace the initial relation and the generalized relations
has been proposed to facilitate a more efficient induction. The proposed path relation algorithm has an
improved complexity of O(n) which is faster than the previously proposed rule-based induction algorithms.
The performance studies have also confirmed that the path relation algorithm is more efficient than the
"backtracking" algorithm.
A data mining system DBMiner has been developed for interactive mining of multiple-level knowledge
in large databases [17]. The system has implemented a wide spectrum of data mining techniques,
including generalization, characterization, association, classification, and prediction. Our proposed rule-based
induction can extend the mining capability of DBMiner. In particular, a rule-based progressive
generalization and specialization could be very useful in mining multiple-level knowledge. Furthermore,
unconditional and rule-based induction, including both generalization and specialization, in multidatabases
or data warehouses with a large number of views are among the interesting topics in data mining for future
research.
--R
On the computation of multidimensional aggregates.
An interval classifier for database mining applications.
Database mining: A performance perspective.
Fast algorithms for mining association rules.
On Intelligent and Cooperative Information Systems: A Workshop Summary.
An Overview of Data Warehousing and OLAP Technology.
Knowledge discovery in databases: A rule-based attribute-oriented approach
Maintenance of Discovered Association Rules in Large Databases: An Incremental Updating Technique.
A Fast Distributed Algorithm for Mining Association Rules.
Advances in Knowledge Discovery and Data Mining.
Improving inference through conceptual clustering.
Knowledge discovery in databases: An overview.
Logic and databases: A deductive approach.
Data cube: A relational aggregation operator generalizing group-by
Knowledge discovery in databases: An attribute-oriented approach
A System for Mining Knowledge in Large Relational Databases.
Discovery of multiple-level association rules from large databases
Exploration of the power of attribute-oriented induction in data mining
Implementing Data Cubes Efficiently.
Mining for knowledge in databases: Goals and general description of the INLEN system.
Learning conjuctive concepts in structural domains.
Efficient Algorithms for Attribute-Oriented Induction
A theory and methodology of inductive learning.
Database Achievements and Opportunities Into the 21st Century.
Principles of Database and Knowledge-Base Systems
Research Problems in Data Warehousing.
Interactive mining of regularities in databases.
--TR
Principles of database and knowledge-base systems, Vol. I
Research problems in data warehousing
Implementing data cubes efficiently
An overview of data warehousing and OLAP technology
Advances in knowledge discovery and data mining
Attribute-oriented induction in data mining
Logic and Databases: A Deductive Approach
A fast distributed algorithm for mining association rules
Data-Driven Discovery of Quantitative Rules in Relational Databases
Database Mining
Maintenance of Discovered Association Rules in Large Databases
Data Cube
An Interval Classifier for Database Mining Applications
Knowledge Discovery in Databases
Fast Algorithms for Mining Association Rules in Large Databases
Discovery of Multiple-Level Association Rules from Large Databases
On the Computation of Multidimensional Aggregates
A Case-Based Reasoning Approach for Associative Query Answering
--CTR
Klaus Julisch, Clustering intrusion detection alarms to support root cause analysis, ACM Transactions on Information and System Security (TISSEC), v.6 n.4, p.443-471, November
Jeffrey Hsu, Critical and future trends in data mining: a review of key data mining technologies/applications, Data mining: opportunities and challenges, Idea Group Publishing, Hershey, PA, | data mining;rule-based concept hierarchy;rule-based concept generalization;inductive learning;learning and adaptive systems;attribute-oriented induction;knowledge discovery in databases |
354178 | The FERET Evaluation Methodology for Face-Recognition Algorithms. | AbstractTwo of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and | Introduction
Over the last decade, face recognition has become an active area of research in computer
vision, neuroscience, and psychology. Progress has advanced to the point that
face-recognition systems are being demonstrated in real-world settings [5]. The rapid development
of face recognition is due to a combination of factors: active development of
The work reported here is part of the Face Recognition Technology (FERET) program, which is
sponsored by the U.S. Department of Defense Counterdrug Technology Development Program. Portions
of this work were done while Jonathon Phillips was at the U.S. Army Research Laboratory (ARL). Jonathon
Phillips acknowledges the support of the National Institute of Justice.
algorithms, the availability of a large database of facial images, and a method for evaluating
the performance of face-recognition algorithms. The FERET database and evaluation
methodology address the latter two points and are de facto standards. There have been
three FERET evaluations with the most recent being the Sep96 FERET test.
The Sep96 FERET test provides a comprehensive picture of the state-of-the-art in face
recognition from still images. This was accomplished by evaluating algorithms' ability on
different scenarios, categories of images, and versions of algorithms. Performance was
computed for identification and verification scenarios. In an identification application, an
algorithm is presented with a face that it must identify; whereas, in a verification application, an algorithm is presented with a face and a claimed identity, and the algorithm must accept or reject the claim. In this paper, we describe the FERET database and the evaluation protocol, and present identification results. Verification results
are presented in Rizvi et al. [8].
To obtain a robust assessment of performance, algorithms are evaluated against different
categories of images. The categories are broken out by lighting changes, people
wearing glasses, and the time between the acquisition date of the database image and the
image presented to the algorithm. By breaking out performance into these categories, a
better understanding of the face recognition field in general, as well as of the strengths and weaknesses of individual algorithms, is obtained. This detailed analysis helps to assess which
applications can be successfully addressed.
All face recognition algorithms known to the authors consist of two parts: (1) face
detection and normalization and (2) face identification. Algorithms that consist of both
parts are referred to as fully automatic algorithms, and those that consist of only the
second part are partially automatic algorithms. The Sep96 test evaluated both fully and
partially automatic algorithms. Partially automatic algorithms are given a facial image
and the coordinates of the center of the eyes. Fully automatic algorithms are only given
facial images.
The availability of the FERET database and evaluation methodology has made a
significant difference in the progress of development of face-recognition algorithms. Before
the FERET database was created, a large number of papers reported outstanding
recognition results (usually > 95% correct recognition) on limited-size databases. (In fact, this is still true.) Only a few of these algorithms reported
results on images utilizing a common database, let alone met the desirable goal of being
evaluated on a standard testing protocol that included separate training and testing sets.
As a consequence, there was no method to make informed comparisons among various
algorithms.
The FERET database has made it possible for researchers to develop algorithms on
a common database and to report results in the literature using this database. Results
reported in the literature do not provide a direct comparison among algorithms because
each researcher reported results using different assumptions, scoring methods, and images.
The independently administered FERET test allows for a direct quantitative assessment
of the relative strengths and weaknesses of different approaches.
More importantly, the FERET database and tests clarify the current state of the art
in face recognition and point out general directions for future research. The FERET
tests allow the computer vision community to assess overall strengths and weaknesses
in the field, not only on the basis of the performance of an individual algorithm, but
in addition on the aggregate performance of all algorithms tested. Through this type
of assessment, the community learns in an unbiased and open manner of the important
technical problems to be addressed, and how the community is progressing toward solving
these problems.
Background
The first FERET tests took place in August 1994 and March 1995 (for details of these tests and of the FERET database and program, see Phillips et al. [5, 6] and Rauss et al. [7]). The FERET database collection began in September 1993 along with the FERET program.
The August 1994 test established, for the first time, a performance baseline for face-recognition
algorithms. This test was designed to measure the performance of algorithms
that could automatically locate, normalize, and identify faces from a database. The test
consisted of three subtests, each with a different gallery and probe set. The gallery contains
the set of known individuals. An image of an unknown face presented to the algorithm
is called a probe, and the collection of probes is called the probe set. The first subtest
examined the ability of algorithms to recognize faces from a gallery of 316 individuals.
The second was the false-alarm test, which measured how well an algorithm rejects faces
not in the gallery. The third baselined the effects of pose changes on performance.
The second FERET test, that took place in March 1995, measured progress since
August 1994 and evaluated algorithms on larger galleries. The March 1995 evaluation
consisted of a single test with a gallery of 817 known individuals. One emphasis of the
test was on probe sets that contained duplicate images. A duplicate is defined as an image
of a person whose corresponding gallery image was taken on a different date.
The FERET database is designed to advance the state of the art in face recognition,
with the images collected directly supporting both algorithm development and the FERET
evaluation tests. The database is divided into a development set, provided to researchers,
and a set of sequestered images for testing. The images in the development set are
representative of the sequestered images.
The facial images were collected in 15 sessions between August 1993 and July 1996.
Collection sessions lasted one or two days. In an effort to maintain a degree of consistency
throughout the database, the same physical setup and location were used in each photography
session. However, because the equipment had to be reassembled for each session,
there was variation from session to session (figure 1).
Images of an individual were acquired in sets of 5 to 11 images, collected under relatively
unconstrained conditions. Two frontal views were taken (fa and fb); a different
facial expression was requested for the second frontal image. For 200 sets of images, a
third frontal image was taken with a different camera and different lighting (this is referred
to as the fc image). The remaining images were collected at various aspects between right
and left profile. To add simple variations to the database, photographers sometimes took
a second set of images, for which the subjects were asked to put on their glasses and/or
pull their hair back. Sometimes a second set of images of a person was taken on a later
date; such a set of images is referred to as a duplicate set. Such duplicate sets result in
variations in scale, pose, expression, and illumination of the face.
By July 1996, 1564 sets of images were in the database, consisting of 14,126 total
images. The database contains 1199 individuals and 365 duplicate sets of images.
Figure 1: Examples of different categories of probes (fa, fc, duplicate I, duplicate II).
The duplicate I image was taken within one year of the fa image; the duplicate II and fa
images were taken at least one year apart.
For
some people, over two years elapsed between their first and most recent sittings, with some
subjects being photographed multiple times (figure 1). The development portion of the
database consisted of 503 sets of images, and was released to researchers. The remaining
images were sequestered by the Government.
3 Test Design
3.1 Test Design Principles
The FERET Sep96 evaluation protocol was designed to assess the state of the art, advance
the state of the art, and point to future directions of research. To succeed at this, the test
design must solve the three bears problem. The test must be neither too hard nor too
easy. If the test is too easy, the testing process becomes an exercise in "tuning" existing
algorithms. If the test is too hard, the test is beyond the ability of existing algorithmic
techniques. The results from the test are poor and do not allow for an accurate assessment
of algorithmic capabilities.
The solution to the three bears problem is through the selection of images in the
test set and the testing protocol. Tests are administered using a testing protocol that
states the mechanics of the tests and the manner in which the test will be scored. In face
recognition, the protocol states the number of images of each person in the test, how the
output from the algorithm is recorded, and how the performance results are reported.
The characteristics and quality of the images are major factors in determining the
difficulty of the problem being evaluated. For example, if faces are in a predetermined
position in the images, the problem is different from that for images in which the faces can
be located anywhere in the image. In the FERET database, variability was introduced
by the inclusion of images taken at different dates and locations (see section 2). This
resulted in changes in lighting, scale, and background.
The testing protocol is based on a set of design principles. Stating the design principle
allows one to assess how appropriate the FERET test is for a particular face recognition
algorithm. Also, design principles assist in determining if an evaluation methodology
for testing algorithm(s) for a particular application is appropriate. Before discussing the
design principles, we state the evaluation protocol.
In the testing protocol, an algorithm is given two sets of images: the target set and the
query set. We introduce this terminology to distinguish these sets from the gallery and
probe sets that are used in computing performance statistics. The target set is given to
the algorithm as the set of known facial images. The images in the query set consist of
unknown facial images to be identified. For each image q i in the query set Q, an algorithm
reports a similarity s i (k) between q i and each image t k in the target set T . The testing
protocol is designed so that each algorithm can use a different similarity measure and we
do not compare similarity measures from different algorithms. The key property of the
new protocol, which allows for greater flexibility in scoring, is that for any two images q i
and t k , we know s i (k).
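To make the mechanics concrete, the sketch below computes the full matrix of similarity
scores between every query and target image. It is only an illustration of the protocol:
the feature extractor and similarity function are hypothetical placeholders standing in
for whatever representation a tested algorithm actually uses.

import numpy as np

def score_all_pairs(query_images, target_images, extract_features, similarity):
    """Return S with S[i, k] = s_i(k), the score an algorithm reports for
    query image q_i against target image t_k (smaller = closer match here)."""
    q_feats = [extract_features(q) for q in query_images]
    t_feats = [extract_features(t) for t in target_images]
    S = np.empty((len(q_feats), len(t_feats)))
    for i, qf in enumerate(q_feats):
        for k, tf in enumerate(t_feats):
            S[i, k] = similarity(qf, tf)
    return S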
This flexibility allows the evaluation methodology to be robust and comprehensive;
it is achieved by computing scores for virtual galleries and probe sets. A gallery G is a
virtual gallery if G is a subset of the target set, i.e., G ⊂ T. Similarly, P is a virtual probe
set if P ⊂ Q. For a given gallery G and probe set P, the performance scores are computed
by examination of the similarity measures s_i(k) such that q_i ∈ P and t_k ∈ G.
The virtual gallery and probe set technique allows us to characterize algorithm performance
by different categories of images. The different categories include (1) rotated
images, (2) duplicates taken within a week of the gallery image, (3) duplicates where the
time between the images is at least one year, (4) galleries containing one image per person,
and (5) galleries containing duplicate images of the same person. We can create a gallery of
100 people and estimate an algorithm's performance by recognizing people in this gallery.
Using this as a starting point, we can then create virtual galleries of 200 or more
people and determine how performance changes as the size of the gallery increases. Another
avenue of investigation is to create n different galleries of size 100, and calculate the
variation in algorithm performance with the different galleries.
To take full advantage of virtual galleries and probe sets, we selected multiple images
of the same person and placed them into the target and query sets. If such images were
marked as the same person, the algorithms being tested could use the information in the
evaluation process. To prevent this from happening, we require that each image in the
target set be treated as a unique face. (In practice, this condition is enforced by giving
every image in the target and query set a unique random identification.) This is the first
design principle.
The second design principle is that training is completed prior to the start of the test.
This forces each algorithm to have a general representation for faces, not a representation
tuned to a specific gallery. Without this condition, virtual galleries would not be possible.
For algorithms to have a general representation for faces, they must be gallery (class)
insensitive. Examples are algorithms based on normalized correlation or principal component
analysis (PCA). An algorithm is class sensitive if the representation is tuned to
a specific gallery. Examples are straightforward implementations of Fisher discriminant
analysis [1, 9]. Fisher discriminant algorithms were adapted to class insensitive testing
methodologies by Zhao et al [13, 14], with performance results of these extensions being
reported in this paper.
The third design rule is that all algorithms tested compute a similarity measure between
two facial images; this similarity measure was computed for all pairs of images
between the target and query sets. Knowing the similarity score between all pairs of
images from the target and query sets allows for the construction of virtual galleries and
probe sets.
Figure 2: Schematic of the FERET testing procedure. The face recognition algorithm is run
at the testee's site on the gallery images (one image per person) and the list of probes;
the resulting output file of similarity scores is processed by Government-run scoring code
to produce the results.
3.2 Test Details
In the Sep96 FERET test, the target set contained 3323 images and the query set 3816
images. All the images in the target set were frontal images. The query set consisted
of all the images in the target set plus rotated images and digitally modified images.
We designed the digitally modified images to test the effects of illumination and scale.
(Results from the rotated and digitally modified images are not reported here.) For each
query image q i , an algorithm outputs the similarity measure s i (k) for all images t k in the
target set. For a given query image q i , the target images t k are sorted by the similarity
scores s_i(·). Since the target set is a subset of the query set, the test output contains the
similarity score between all images in the target set.
There were two versions of the Sep96 test. The target and query sets were the same for
each version. The first version tested partially automatic algorithms by providing them
with a list of images in the target and query sets, and the coordinates of the center of
the eyes for images in the target and query sets. In the second version of the test, the
coordinates of the eyes were not provided. By comparing the performance between the
two versions, we estimate performance of the face-locating portion of a fully automatic
algorithm at the system level.
The test was administered at each group's site under the supervision of one of the au-
thors. Each group had three days to complete the test on less than 10 UNIX workstations
(this limit was not reached). We did not record the time or number of workstations because
execution times can vary according to the type of machines used, machine and network
configuration, and the amount of time that the developers spent optimizing their code (we
wanted to encourage algorithm development, not code optimization). (We imposed the
time limit to encourage the development of algorithms that could be incorporated into
operational, fieldable systems.)
The images contained in the gallery and probe sets consisted of images from both
the developmental and sequestered portions of the FERET database. Only images from
the FERET database were included in the test; however, algorithm developers were not
prohibited from using images outside the FERET database to develop or tune parameters
in their algorithms.
The FERET test is designed to measure laboratory performance. The test is not
concerned with speed of the implementation, real-time implementation issues, and speed
and accuracy trade-offs. These issues, and others that need to be addressed in an operational,
fielded system, were beyond the scope of the Sep96 FERET test.
Figure 2 presents a schematic of the testing procedure. To ensure that matching was
not done by file name, we gave the images random names. The nominal pose of each face
was provided to the testee.
4 Decision Theory and Performance Evaluation
The basic models for evaluating the performance of an algorithm are the closed and open
universes. In the closed universe, every probe is in the gallery. In an open universe,
some probes are not in the gallery. Both models reflect different and important aspects of
face-recognition algorithms and report different performance statistics. The open universe
models verification applications. The FERET scoring procedure for verification is given
in Rizvi et al. [8].
The closed-universe model allows one to ask how good an algorithm is at identifying
a probe image; the question is not always "is the top match correct?" but "is the correct
answer in the top n matches?" This lets one know how many images have to be examined
to get a desired level of performance. The performance statistics are reported as cumulative
match scores. The rank is plotted along the horizontal axis, and the vertical axis is
the percentage of correct matches. The cumulative match score can be calculated for any
subset of the probe set. We calculated this score to evaluate an algorithm's performance
on different categories of probes, i.e., rotated or scaled probes.
The computation of an identification score is quite simple. Let P be a probe set and
|P| the size of P. We score probe set P against gallery G by comparing the similarity
scores s_i(·) such that p_i ∈ P and g_k ∈ G. For each probe image p_i ∈ P, the scores
s_i(·) are sorted over all gallery images g_k ∈ G. We assume that a smaller similarity
score implies a closer match, and that if g_k and p_i are the same image, then s_i(k) is
the smallest of the scores s_i(·). The function id(i) gives the index of the gallery image
of the person in probe p_i; that is, p_i is an image of the person in g_id(i). A probe p_i
is correctly identified if s_i(id(i)) is the smallest score for G. A probe p_i is in the
top k if s_i(id(i)) is one of the k smallest scores s_i(·) for gallery G. Let R_k denote
the number of probes in the top k. We reported R_k/|P|, the fraction of probes in the top
k. As an example, for k = 5 and |P| = 100, the performance score is R_5/100, the number of
probes in the top 5 divided by 100.
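A minimal sketch of this scoring rule is given below. It assumes the full similarity
matrix S (with S[i, k] = s_i(k) and smaller meaning closer) has already been computed, and
that probe_ids[i] and gallery_ids[k] give the identities of p_i and g_k; these argument
names are our own illustration, not part of the FERET distribution.

import numpy as np

def cumulative_match_scores(S, probe_ids, gallery_ids, max_rank):
    """Fraction of probes whose correct gallery image is within the k smallest
    scores, for k = 1 .. max_rank (the cumulative match score R_k / |P|)."""
    gallery_ids = np.asarray(gallery_ids)
    top_counts = np.zeros(max_rank)
    for i, pid in enumerate(probe_ids):
        order = np.argsort(S[i])                               # gallery indices, closest first
        rank = int(np.where(gallery_ids[order] == pid)[0][0])  # 0-based rank of correct match
        if rank < max_rank:
            top_counts[rank:] += 1                             # in the top k for every k > rank
    return (top_counts / len(probe_ids)).tolist()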
In reporting identification performance results, we state the size of the gallery and the
number of probes scored. The size of the gallery is the number of different faces (people)
contained in the images that are in the gallery. For all results that we report, there is one
image per person in the gallery, thus, the size of the gallery is also the number of images
in the gallery. The number of probes scored (also, the size of the probe set) is |P|. The probe
set may contain more than one image of a person and the probe set may not contain an
image of everyone in the gallery. Every image in the probe set has a corresponding image
in the gallery.
5 Latest Test Results
The Sep96 FERET test was designed to measure algorithm performance for identification
and verification tasks. Both tasks are evaluated on the same sets of images. We report
the results for 12 algorithms that include 10 partially automatic algorithms and 2 fully
automatic algorithms. The test was administered in September 1996 and March 1997
(see table 1 for details of when the test was administered to which groups and which
version of the test was taken). Two of these algorithms were developed at the MIT
Media Laboratory. The first was the same algorithm that was tested in March 1995.
This algorithm was retested so that improvement since March 1995 could be measured.
The second algorithm was based on more recent work [2, 3]. Algorithms were also tested
from Excalibur Corp. (Carlsbad, CA), Michigan State University (MSU) [9, 14], Rutgers
University [11], University of Southern California (USC) [12], and two from University of
Maryland (UMD) [1, 13, 14]. The first algorithm from UMD was tested in September 1996
and a second version of the algorithm was tested in March 1997. For the fully automatic
version of the test, algorithms from MIT and USC were evaluated.
The final two algorithms were our implementation of normalized correlation and a
principal components analysis (PCA) based algorithm [4, 10]. These algorithms provide
a performance baseline. In our implementation of the PCA-based algorithm, all images
were (1) translated, rotated, and scaled so that the centers of the eyes were placed on
specific pixels, (2) masked to remove background and hair, and (3) processed by a histogram
equalization algorithm applied to the nonmasked facial pixels. The training
set consisted of 500 faces. Faces were represented by their projection onto the first 200
eigenvectors and were identified by a nearest neighbor classifier using the L 1 metric. For
normalized correlation, the images were (1) translated, rotated, and scaled so that the
centers of the eyes were placed on specific pixels and (2) faces were masked to remove
background and hair.
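The sketch below illustrates the flavor of this PCA baseline: preprocessed images are
projected onto a fixed set of eigenvectors and matched with a nearest neighbor classifier
under the L1 metric. The eigenvector matrix, mean face, and preprocessing are assumed to
be given; this is a sketch under those assumptions, not the actual baseline implementation.

import numpy as np

def pca_identify(probe_vec, gallery_vecs, eigenvectors, mean_face):
    """Project preprocessed, flattened images onto the leading eigenvectors and
    return the index of the gallery image with the smallest L1 distance."""
    def project(v):
        return eigenvectors.T @ (v - mean_face)          # coefficients on the eigenfaces
    probe_coeffs = project(probe_vec)
    gallery_coeffs = [project(g) for g in gallery_vecs]
    dists = [np.sum(np.abs(probe_coeffs - gc)) for gc in gallery_coeffs]  # L1 metric
    return int(np.argmin(dists))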
5.1 Partially automatic algorithms
We report identification scores for four categories of probes. The first probe category was
the FB probes (fig 3). For each set of images, there were two frontal images. One of the
images was randomly placed in the gallery, and the other image was placed in the FB
probe set. (This category is denoted by FB to differentiate it from the fb images in the
FERET database.) The second probe category contained all duplicate frontal images in
the FERET database for the gallery images. We refer to this category as the duplicate I
probes. The third category was the fc (images taken the same day, but with a different
camera and lighting). The fourth consisted of duplicates where there is at least one year
between the acquisition of the probe image and corresponding gallery image. We refer
to this category as the duplicate II probes. For this category, the gallery images were
acquired before January 1995 and the probe images were acquired after January 1996.
Table 1: List of groups that took the Sep96 test, broken out by version taken and dates
administered. (The 2 by MIT indicates that two algorithms were tested.)

Version of test          Group                              Sep 1996   Mar 1997
Fully automatic          MIT Media Lab [2, 3]                  •
                         U. of So. California (USC) [12]                  •
Eye coordinates given    Baseline PCA [4, 10]                  •
                         Baseline correlation                  •
                         Excalibur Corp.                       •
                         Michigan State U. [9, 14]             •
                         Rutgers U. [11]                       •
                         U. Maryland [1, 13, 14]               •          •
The gallery for the FB, duplicate I, and fc probes was the same and consisted of 1196
frontal images with one image per person in the gallery (thus the gallery contained 1196
individuals). Also, none of the faces in the gallery images wore glasses. The gallery for
duplicate II probes was a subset of 864 images from the gallery for the other categories.
The results for identification are reported as cumulative match scores. Table 2 shows
the categories corresponding to the figures presenting the results, type of results, and size
of the gallery and probe sets (figs 3 to 6).
In figures 7 and 8, we compare the difficulty of different probe sets. Whereas figure 4
reports identification performance for each algorithm, figure 7 shows a single curve that
is an average of the identification performance of all algorithms for each probe category.
For example, the first ranked score for duplicate I probe sets is computed from an average
of the first ranked score for all algorithms in figure 4. In figure 8, we present the
current upper bound on performance of partially automatic algorithms for each probe category.
For each category of probe, figure 8 plots the algorithm with the highest top rank score
(R 1 ).
Figures 7 and 8 report performance for four categories of probes: FB, duplicate I, fc,
and duplicate II.
Table 2: Figures reporting results for partially automatic algorithms. Performance is
broken out by probe category.

Figure no.   Probe category   Gallery size   Probe set size
4            duplicate I      1196           722
6            duplicate II     864            234
Figure 3: Identification performance against FB probes (cumulative match score vs. rank).
(a) Partially automatic algorithms: MSU, UMD 96, MIT 95, Baseline, Baseline EF, Excalibur,
Rutgers. (b) UMD 97, USC, UMD 96, Baseline, Baseline EF.
Figure 4: Identification performance against all duplicate I probes (cumulative match score
vs. rank). (a) Partially automatic algorithms: Excalibur, Baseline EF, Baseline, MIT 95,
MSU, UMD 96, Rutgers. (b) USC, UMD 97, Baseline EF, Baseline, UMD 96.
Figure 5: Identification performance against fc probes (cumulative match score vs. rank).
(a) Partially automatic algorithms: UMD 96, MSU, Excalibur, Baseline EF, Rutgers, MIT 95,
Baseline. (b) USC, UMD 97, UMD 96, Baseline EF, Baseline.
Figure 6: Identification performance against duplicate II probes (cumulative match score
vs. rank). (a) Partially automatic algorithms: Baseline EF, Excalibur, Rutgers, MIT 95,
Baseline, UMD 96, MSU. (b) USC, Baseline EF, UMD 97, Baseline, UMD 96.
Figure 7: Average identification performance of partially automatic algorithms on each
probe category (FB, duplicate I, fc, and duplicate II probes).
Figure 8: Current upper bound identification performance of partially automatic algorithms
for each probe category (FB, fc, duplicate I, and duplicate II probes).
Figure 9: Identification performance of fully automatic algorithms against partially
automatic algorithms for FB probes (USC and MIT, partially and fully automatic).
Figure 10: Identification performance of fully automatic algorithms against partially
automatic algorithms for duplicate I probes (USC and MIT, partially and fully automatic).
5.2 Fully Automatic Performance
In this subsection, we report performance for the fully automatic algorithms of the MIT
Media Lab and USC. To allow for a comparison between the partially and fully automatic
algorithms, we plot the results for the partially and fully automatic algorithms. Figure 9
shows performance for FB probes and figure 10 shows performance for duplicate I probes.
(The gallery and probe sets are the same as in subsection 5.1.)
5.3 Variation in Performance
From a statistical point of view, a face-recognition algorithm estimates the identity of
a face. Consistent with this view, we can ask about the variance in performance of
an algorithm: "For a given category of images, how does performance change if the
algorithm is given a different gallery and probe set?" In tables 3 and 4, we show how
algorithm performance varies if the people in the galleries change. For this experiment, we
constructed six galleries of approximately 200 individuals, in which an individual was in
only one gallery (the number of people contained within each gallery versus the number of
probes scored is given in tables 3 and 4). Results are reported for the partially automatic
algorithms. For the results in this section, we order algorithms by their top rank score
on each gallery; for example, in table 3, the UMD Mar97 algorithm scored highest on
gallery 1 and the baseline PCA and correlation tied for 9th place. Also included in this
table is average performance for all algorithms. Table 3 reports results for FB probes.
Table
4 is organized in the same manner as table 3, except that duplicate I probes are
scored. Tables 3 and 4 report results for the same gallery. The galleries were constructed
by placing images within the galleries by chronological order in which the images were
collected (the first gallery contains the first images collected and the 6th gallery contains
the most recent images collected). In table 4, mean age refers to the average time between
collection of images contained in the gallery and the corresponding duplicate probes. No
scores are reported in table 4 for gallery 6 because there are no duplicates for this gallery.
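One way to set up such an experiment, sketched below under the same assumptions as the
earlier scoring sketch (a precomputed similarity matrix and identity labels, with names of
our own choosing), is to partition the gallery columns into disjoint galleries and compute
the top-rank score separately for each:

import numpy as np

def rank1_by_gallery(S, probe_ids, gallery_ids, gallery_partition):
    """gallery_partition maps a gallery label to the column indices of the gallery
    images in that (disjoint) gallery; the rank-1 score for each gallery is
    computed over the probes whose person appears in that gallery."""
    gallery_ids = np.asarray(gallery_ids)
    scores = {}
    for label, cols in gallery_partition.items():
        cols = np.asarray(cols)
        people = set(gallery_ids[cols].tolist())
        probes = [i for i, pid in enumerate(probe_ids) if pid in people]
        correct = sum(
            gallery_ids[cols[np.argmin(S[i, cols])]] == probe_ids[i]   # closest image in this gallery
            for i in probes
        )
        scores[label] = correct / len(probes) if probes else float("nan")
    return scores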
6 Discussion and Conclusion
In this paper we presented the Sep96 FERET evaluation protocol for face recognition
algorithms. The protocol makes it possible to independently evaluate algorithms. The
protocol was designed to evaluate algorithms on different galleries and probe sets for different
scenarios. Using this protocol, we computed performance on identification and
verification tasks. The verification results are presented in Rizvi et al. [8], and all verification
results mentioned in this section are from that paper. In this paper we presented
detailed identification results. Because of the Sep96 FERET evaluation protocol's ability
to test algorithm performance on different tasks for multiple galleries and probe sets, it
is the de facto standard for measuring performance of face recognition algorithms. These
results show that factors affecting performance include scenario, date tested, and probe
category.
The Sep96 test was the latest FERET test (the others were the Aug94 and Mar95
tests [6]). One of the main goals of the FERET tests has been to improve the performance
of face recognition algorithms, and this improvement is seen in the Sep96 FERET test. The first case is
the improvement in performance of the MIT Media Lab September 1996 algorithm over
Table 3: Variations in identification performance on six different galleries on FB probes.
Images in each gallery do not overlap. Ranks range from 1-10. Entries give algorithm ranking
by top match; column headings give gallery size / scored probes.

Algorithm               gallery 1   gallery 2   gallery 3   gallery 4   gallery 5   gallery 6
                        (200/200)   (200/200)   (200/200)   (200/200)   (200/199)   (196/196)
Baseline
Baseline correlation        9           9           9
Excalibur
Michigan State Univ.        3           4           5           8           4           4
Rutgers Univ.               7           8           9           6           7           9
Average score             0.935       0.857       0.904       0.918       0.843       0.804
Table 4: Variations in identification performance on five different galleries on duplicate
probes. Images in each gallery do not overlap. Ranks range from 1-10. Entries give algorithm
ranking by top match; column headings give gallery size / scored probes.

Algorithm                       gallery 1   gallery 2   gallery 3   gallery 4   gallery 5
                                (200/143)   (200/64)    (200/194)   (200/277)   (200/44)
Mean age of probes (months)       9.87        3.56        5.40       10.70        3.45
Baseline
Baseline correlation
Excalibur
Michigan State Univ.                9
Rutgers Univ.
Average score                     0.238       0.620       0.645       0.523       0.687
the March 1995 algorithm; the second is the improvement of the UMD algorithm between
September 1996 and March 1997.
By looking at progress over the series of FERET tests, one sees that substantial
progress has been made in face recognition. The most direct method is to compare the
performance of fully automatic algorithms on fb probes (the two earlier FERET tests
only evaluated fully automatic algorithms. The best top rank score for fb probes on the
Aug94 test was 78% on a gallery of 317 individuals, and for Mar95, the top score was
93% on a gallery of 831 individuals [6]. This compares to 87% in September 1996 and
95% in March 1997 (gallery of 1196 individuals). This method shows that over the course
of the FERET tests, the absolute scores increased as the size of the database increased.
The March 1995 score was from one of the MIT Media Lab algorithms, and represents an
increase from 76% in March 1995.
On duplicate I probes, MIT Media Lab improved from 39% (March 1995) to 51%
(September 1996); USC's performance remained approximately the same at 57-58% between
March 1995 and March 1997. This improvement in performance was achieved while
the gallery size increased and the number of duplicate I probes increased from 463 to 722.
While increasing the number of probes does not necessarily increase the difficulty of identification
tasks, we argue that the Sep96 duplicate I probe set was more difficult to process
than the Mar95 set. The Sep96 duplicate I probe set contained the duplicate II probes
and the Mar95 duplicate I probe set did not contain a similar class of probes. Overall,
the duplicate II probe set was the most difficult probe set.
Another goal of the FERET tests is to identify areas of strengths and weaknesses
in the field of face recognition. We addressed this issue by computing algorithm performance
for multiple galleries and probe sets. From this evaluation, we concluded that
algorithm performance is dependent on the gallery and probe sets. We observed variation
in performance due to changing the gallery and probe set within a probe category, and
by changing probe categories. The effect of changing the gallery while keeping the probe
category constant is shown in tables 3 and 4. For fb probes, the range for performance is
80% to 94%; for duplicate I probes, the range is 24% to 69%. Equally important, tables 3
and 4 show the variability in relative performance levels. For example, in table 4, UMD
duplicate performance varies between ranks three and nine. Similar results were
found in Moon and Phillips [4] in their study of principal component analysis-based face
recognition algorithms. This shows that areas of future research include measuring the
effect of changing galleries and probe sets, and developing statistical measures that
characterize these variations.
Figures 7 and 8 show probe categories characterized by difficulty. These figures show
that fb probes are the easiest and duplicate II probes are the most difficult. On average,
duplicate I probes are easier to identify than fc probes. However, the best performance on
fc probes is significantly better than the best performance on duplicate I and II probes.
This comparative analysis shows that future areas of research could address processing of
duplicate II probes and developing methods to compensate for changes in illumination.
The scenario being tested contributes to algorithm performance. For identification,
the MIT Media Lab algorithm was clearly the best algorithm tested in September 1996.
However, for verification, there was not an algorithm that was a top performer for all probe
categories. Also, for the algorithms tested in March 1997, the USC algorithm performed
overall better than the UMD algorithm for identification; however, for verification, UMD
overall performed better. This shows that performance on one task is not predictive of
performance on another task.
The September 1996 FERET test shows that definite progress is being made in face
recognition, and that the upper bound in performance has not been reached. The improvement
in performance documented in this paper shows directly that the FERET series
of tests has made a significant contribution to face recognition. This conclusion is indirectly
supported by (1) the improvement in performance between the algorithms tested
in September 1996 and March 1997, (2) the number of papers that use FERET images
and report experimental results using FERET images, and (3) the number of groups that
participated in the Sep96 test.
--R
Discriminant analysis for recognition of human face images.
Bayesian face recognition using deformable intensity surfaces.
Probabilistic visual learning for object detection.
Analysis of PCA-based face recognition algorithms
The face recognition technology (FERET) program.
The FERET database and evaluation procedure for face-recognition algorithms
The FERET (Face Recognition Technology) program.
The FERET verification testing protocol for face recognition algorithms.
Using discriminant eigenfeatures for image retrieval.
Eigenfaces for recognition.
Face recognition using transform coding of gray scale projections and the neural tree network.
Discriminant analysis of principal components for face recognition.
Discriminant analysis of principal components for face recognition.
Seong G. Kong , Jingu Heo , Besma R. Abidi , Joonki Paik , Mongi A. Abidi, Recent advances in visual and infrared face recognition: a review, Computer Vision and Image Understanding, v.97 n.1, p.103-135, January 2005
Ming-Hsuan Yang , David J. Kriegman , Narendra Ahuja, Detecting Faces in Images: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.1, p.34-58, January 2002
W. Zhao , R. Chellappa , P. J. Phillips , A. Rosenfeld, Face recognition: A literature survey, ACM Computing Surveys (CSUR), v.35 n.4, p.399-458, December
Axel Pinz, Object categorization, Foundations and Trends in Computer Graphics and Vision, v.1 n.4, p.255-353, December 2005 | face recognition;algorithm evaluation;FERET database |
354183 | Completion Energies and Scale. | AbstractThe detection of smooth curves in images and their completion over gaps are two important problems in perceptual grouping. In this study, we examine the notion of completion energy of curve elements, showing, and exploiting its intrinsic dependence on length and width scales. We introduce a fast method for computing the most likely completion between two elements, by developing novel analytic approximations and a fast numerical procedure for computing the curve of least energy. We then use our newly developed energies to find the most likely completions in images through a generalized summation of induction fields. This is done through multiscale procedures, i.e., separate processing at different scales with some interscale interactions. Such procedures allow the summation of all induction fields to be done in a total of only $O(N \log N)$ operations, where $N$ is the number of pixels in the image. More important, such procedures yield a more realistic dependence of the induction field on the length and width scales: The field of a long element is very different from the sum of the fields of its composing short segments. | Introduction
The smooth completion of fragmented curve segments
is a skill of the human visual system that has been demonstrated
through many compelling examples. Due to this skill
people often are able to perceive the boundaries of objects
even in the lack of sufficient contrast or in the presence of oc-
clusions. A number of computational studies have addressed
the problem of curve completion in an attempt to both provide
a computational theory of the problem and as part of a
process of extracting the smooth curves from images. These
studies commonly obtain two or more edge elements (also
referred to as edgels) and find either the most likely completions
that connect the elements or the smoothest curves trav-
Research supported in part by Israel Ministry of Science Grant 4135-
1-93 and by the Gauss Minerva Center for Scientific Computation.
Research supported in part by the United States-Israel Binational Science
Foundation, Grant No. 94-00100.
eling through them. The methods proposed for this problem
generally require massive computations, and their results
strongly depend on the energy function used to evaluate the
curves in the image. It is therefore important to develop
methods which simplify the computation involved in these
methods while providing results competitive with the existing
approaches. Below we present such a method that directly
relates to a number of recent studies of completion and
curve salience [9, 18, 5, 19, 12, 7] (see also [2, 6, 8, 14, 15]).
Along with simplifying the computations proposed in these
studies our method also takes into account the size of edge
elements, allowing for a proper computation of completion
and saliency at different scales.
A number of studies have addressed the problem of determining
the smoothest completion between pairs of edge
elements. These studies seek to define
a functional that, given two edge elements defined by their
location and orientation in the image, selects the smoothest
curve that connects the two as its minimizing curve. The
most common functional is based on the notion of elas-
tica, that is, minimizing the total squared curvature of the
curve [9]. Scale invariant variations of this functional were
introduced in [18, 5]. While the definition of scale-invariant
elastica is intuitive, there exists no simple analytic expression
to calculate its shape or its energy, and existing numerical
computations are orders-of-magnitude too expensive, as
will be shown below.
In the first part of this paper we revisit the problem
of determining the smoothest completion between pairs
of edges and introduce two new analytic approximations
to the curve of least energy. The first approximation is
obtained by assuming that the deviation of the two input
edgels from the straight line connecting them is relatively
small. This assumption is valid in most of the examples
used to demonstrate perceptual completions in humans and
monkeys [10, 11]. We show that under this simplifying
assumption the Hermite spline (see, e.g., [13]) provides a
good approximation to the curve of least energy and a very
good approximation to the least energy itself. We further
develop a second expression, which directly involves the
angles formed by the edgels and the straight line connecting
them. The second expression is shown to give extremely
accurate approximations to the curve of least energy even
when the input edgels deviate significantly from the line
connecting them. We then introduce a new, fast numerical
method to compute the curve of least energy and show that
our analytic approximations are obtained at early stages of
this numerical computation.
Several recent studies view the problems of curve completion
and salience as follows. Given M edge elements,
the space of all curves connecting pairs of elements is examined
in an attempt to determine which of these completions
is most likely using smoothness and length considerations.
For this purpose [7, 19] define an affinity measure between
two edge elements that grows with the likelihood of these
elements being connected by a curve. By fixing one of the
elements and allowing the other element to vary over the
entire image an induction field representing the affinity values
induced by the fixed element on the rest of the image
is obtained. The system finds the most likely completions
for the M elements by applying a process that includes a
summation of the induction fields for all M elements.
In the second part of this paper we use our newly developed
completion energies to define an affinity measure that
encourages smoothness and penalizes for gap length. We
then use the induction fields defined by this affinity measure
to solve the problem of finding the most likely completions
for M elements. Since in practice edge elements are never
dimensionless, because they are usually obtained by applying
filters of a certain width and length to the image, we
adjust our affinity measure to take these parameters into ac-
count. We do so by relating the scale of these filters to the
range of curvatures which can be detected by them and to
the orientational resolution needed. Finally, we show that
our affinity measure is asymptotically smooth, and so can be
implemented using multigrid methods and run efficiently in
time complexity O(nm) (where n is the number of pixels
and m is the number of discrete orientations at every pixel).
The paper is divided as follows. In Section 2 we review
the notion of elastica and its scale invariant variation. In
Section 3 we introduce the two analytic approximations to
the curve of least energy. Then, in Section 4 we develop a
fast numerical method to compute the curve of least energy
and compare it to our analytic approximations. Finally,
in Section 5 we construct an affinity measure taking into
account the length and width of the edge filters applied to
the image. We then discuss a multiscale (multigrid) method
for fast summation of induction fields.
2. Elastica
Consider two edge elements e_1 and e_2 positioned at points P_1, P_2 in
R^2 with directed orientations Y_1 and Y_2, respectively, measured
from the right-hand side of the line passing through
P_1 and P_2. Below we shall confine ourselves to the case
that the two elements face each other, so that we
may conveniently assume that the angles lie in a restricted range.
This is illustrated in Fig. 1(a). Let C_12 denote the set of
curves through e_1 and e_2. Denote such a curve by its
orientation representation Y(s), where 0 <= s <= L is the
arclength along the curve. That is,
x(s) = x_1 + \int_0^s \cos Y(t)\,dt and
y(s) = y_1 + \int_0^s \sin Y(t)\,dt.
Also denote the curvature of
the curve at s by \kappa(s) = dY(s)/ds.
Figure 1. (a) The planar relation between two edge elements, e_1 = (P_1, Y_1) and e_2 = (P_2, Y_2). This relation is governed by the distance r between P_1 and P_2 and the angles F_1 and F_2, which are measured from the line P_1 P_2. (b) The more general relation between F_i and Y_i.
The most common functional used to determine the
smoothest curve traveling through P_1 and P_2 with respective
orientations Y_1 and Y_2 is the elastica functional. Namely,
the smoothest curve through e_1 and e_2 is the curve Y(s)
which minimizes the functional G_el(Y) := \int_0^L \kappa^2(s)\,ds.
Elastica was already introduced by Euler. It was first applied
to completion by Ullman [17], and its properties were
further investigated by Horn [9].
One of the problems with the classical elastica model is
that it changes its behavior with a uniform scaling of the
image. In fact, according to this model if we increase r,
the distance between the two input elements, the energy
of the curve connecting them proportionately decreases,
as can be easily seen by rescaling s (cf. [1]). This is
somewhat counter-intuitive since psychophysical and neurobiological
evidence suggests that the affinity between a
pair of straight elements drops rapidly with the distance
between them [11]. Also, the classical elastica does not
yield circular arcs to complete cocircular elements. To solve
these problems Weiss [18, 5] proposed to modify the elastica
model to make it scale invariant. His functional is defined
as G_inv(Y) := L \int_0^L \kappa^2(s)\,ds.
We believe that a proper
adjustment of the completion energy to scale must take into
account not only the length of the curve (or equivalently the
distance between the input elements), but also the dimensions
of the input edge elements. Both the elastica functional
and its scale invariant version assume that the input elements
have no dimensions. In practice, however, edge elements
are frequently obtained by convolving the image with filters
of some specified width and length. A proper adjustment
of the completion energy as a result of scaling the distance
between the elements should also consider whether a corresponding
scaling in the width and length of the elements has
taken place. Below we first develop useful approximations
to the scale invariant functional. (These approximations can
also readily be used with slight modifications to the classical
elastica measure.) Later, in Section 5, we develop an affinity
measure between elements that also takes into account both
the distance between the elements and their dimensions.
3. Analytic simplification of G inv
Although the definition of both the classical and the scale
invariant elastica functionals is fairly intuitive, there is no
simple closed-form expression that specifies the energy or
the curve shape obtained with these functionals. In this
section we introduce two simple, closed-form approximations
to these functionals. Our first approximation is valid
when the sum of angles |F_1| + |F_2| is relatively small. This
assumption represents the intuition that in most psychophysical
demonstrations gap completion is perceived when the
orientations of the curve portions to be completed are nearly
collinear. With this assumption we may also restrict for now
the range of applicable orientations accordingly.
The second approximation will only assume that |F_1 - F_2|
is small, i.e., that the curve portions to be completed are
nearly cocircular.
Since the curve of least energy is supposed to be very
smooth, it is reasonable to assume that within the chosen
range of Y the smoothest curve will not wind much.
Consequently, it can be described as a function y(x) in the coordinate
system shown in Fig. 1(a). Expressing the curvature in terms of x and y we
obtain that G_inv(Y) = L \int_0^r \frac{y''^2}{(1 + y'^2)^{5/2}}\,dx.
For small |F_1| + |F_2| we get that L -> r, and that the variation
of (1 + y'^2) is unimportant for the comparison of
G_inv(Y) over different curves Y in C_12, so that
G_inv(Y) ~= r \int_0^r y''(x)^2\,dx.
Hence
G_inv(Y) ~= r \min_y \int_0^r y''(x)^2\,dx.
The minimizing curve is the appropriate cubic Hermite
spline (see [1]), namely the unique cubic satisfying y(0) = y(r) = 0,
y'(0) = tan F_1, and y'(r) = -tan F_2, so that the minimal value is
4 (tan^2 F_1 - tan F_1 tan F_2 + tan^2 F_2).
Evidently, this simple approximation to E_inv is scale-
independent. This leads us to define the scale-invariant
spline completion energy as: E_spln := 4 (tan^2 F_1 - tan F_1 tan F_2 + tan^2 F_2).
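As a concrete check of this approximation, the short Python sketch below (ours, not from the paper) builds the cubic Hermite spline with y(0) = y(r) = 0, y'(0) = tan F_1, y'(r) = -tan F_2 -- the endpoint-slope signs are the convention we assume from Fig. 1(a) -- integrates r \int y''^2 dx numerically, and compares it with the closed-form value of E_spln stated above; the result is independent of r, illustrating the scale invariance.

import numpy as np

def spline_energy_numeric(f1, f2, r=1.0, n=2000):
    m0, m1 = np.tan(f1), -np.tan(f2)             # endpoint slopes (assumed convention)
    u = np.linspace(0.0, 1.0, n)                 # normalized coordinate, x = r * u
    ypp = (m0 * (6*u - 4) + m1 * (6*u - 2)) / r  # second derivative of the Hermite cubic
    y2 = ypp**2
    # trapezoid rule for r * int_0^r y''(x)^2 dx
    return r * float(np.sum((y2[:-1] + y2[1:]) * np.diff(u * r)) / 2.0)

def spline_energy_closed_form(f1, f2):
    t1, t2 = np.tan(f1), np.tan(f2)
    return 4.0 * (t1**2 - t1 * t2 + t2**2)       # E_spln as stated above

f1, f2 = np.deg2rad(20.0), np.deg2rad(35.0)
print(spline_energy_numeric(f1, f2), spline_energy_closed_form(f1, f2))  # should agree closely

Changing r in the call leaves the numeric value unchanged, which is the scale independence claimed above.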
Although the spline energy provides a good
approximation to the scale invariant elastica measure for
small values of |F_1| + |F_2|, the measure diverges for large
values. An alternative approximation to E_inv can be constructed
by noticing that for such small values tan F_1 ~= F_1
and tan F_2 ~= F_2. Thus, we may define:
E_ang := 4 (F_1^2 - F_1 F_2 + F_2^2).
We refer to this functional as the scale-invariant angular
completion energy. This measure does not diverge for large
values of |F_1| + |F_2|.
In Section 4 below we show
that this angular energy is obtained in an early stage of the
numeric computation of E inv , and that it provides extremely
accurate approximations to the scale invariant least energy
functional even for relatively large values of |F_1| + |F_2|, especially
for small |F_1 - F_2|, i.e., for the range of nearly
cocircular elements. Using the numeric computation we
can also derive the smoothest curve according to
The angular completion energy can be generalized as
follows:
E(a, b) := a (F_1 - F_2)^2 + b (F_1^2 + F_2^2),   (6)
where Eq. (4) is identical to Eq. (6) with a = b = 2. That
is, the angular completion energy is made of an equal sum
of two penalties. One is for the squared difference between
F_1 and F_2, and the other is for the growth in each of them.
This suggests a possible generalization of E_ang to other
weights a >= 0 and b >= 0. One can also think of using
energies such as E_circ; a
more elaborate study of these types of energies and their
properties is presented in [1].
Finally, we note that the new approximations at small
angles can also be used to approximate the classical elastica
energy, since G_el(Y) = G_inv(Y)/L, so that for small angles
G_el(Y) ~= (1/r) E_spln (or, equivalently, (1/r) E_ang).
4. Computation of E inv
We use the scale-invariance property of G_inv in order to
reformulate the minimization problem as minimizing over
all C_12 curves of unit length. Applying the
Euler-Lagrange equations (see, e.g., [13]) we get that a necessary
condition for Y(s) to be an extremal curve is that it
should satisfy, for some multiplier lambda, a second-order ODE in
Y(s), together with the boundary-orientation and closure constraints (8).
Considering the very nature of the original minimization
problem, and also by repeatedly differentiating both sides
of the ODE equation, it can be shown that its solution must be
very smooth. Hence, we can well approximate the solution
by a polynomial of the form Y(s) ~= \sum_{k=0}^{n} a_k s^k,
where n is small. (By comparison, the discretization of the
same problem presented in [5] is far less efficient, since it
does not exploit the infinite smoothness of the solution on
the full interval (0,1). As a result the accuracy in [5] is
only second order, while here it is "infinity-order", i.e., the error
decreases exponentially in the number of discrete variables.)
We determine a_0, ..., a_n from Eq. (8) and the constraints. Fixing
n, as well as two other integers n' and p, we build a
system of n+2 equations for the n+2 unknowns
a_0, ..., a_n and lambda, obtained by
collocating the ODE at n' points and
by approximating the constraint integrals with the weights of a p-order numerical
integration. Generally, we increase n gradually
and increase n' and p as functions of n in such a way that
the discretization error will not be governed by the discretization
error of the integration. The nonlinear system of
equations is solved by Newton iterations (also called
Newton-Raphson; see, e.g. [13].) We start the Newton iterations
from a solution previously obtained for a system with
a lower n. Actually, only one Newton iteration is needed
for each value of n if n is not incremented too fast. In
this way convergence is extremely fast. At each step, in
just several dozen computer operations, the error in solving
the differential equation can be squared. In fact, due to the
smoothness of the solution for the ODE, already for the simple
(2, 2)-system and the Simpson integration rule, a
very good approximation to the accurate solution Y(s),
and also to E_inv, is obtained, as can be seen in
[1]. The good approximations obtained already for small
values of (n, n') suggest that E_inv can be well approximated
by simple analytic expressions, as indeed we show in [1] by
comparing between several simple approximations to E_inv.
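To illustrate the idea of a low-order polynomial ansatz, the following sketch (ours; a simplified alternative to the collocation/Newton scheme described above) minimizes the scale-invariant energy \int_0^1 Y'(s)^2 ds over unit-length curves whose orientation is a polynomial correction of the linear interpolation of the boundary angles. The boundary convention assumed is Y(0) = F_1 and Y(1) = -F_2 (angles measured from the chord), and the closure constraint \int_0^1 \sin Y ds = 0 keeps the endpoint on the chord; the function names are ours.

import numpy as np
from scipy.optimize import minimize

def trapz(y, x):
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def least_energy_curve(f1, f2, n_coef=4, n_s=400):
    s = np.linspace(0.0, 1.0, n_s)

    def Y(a):
        base = f1 + (-f2 - f1) * s                        # satisfies both boundary angles
        bump = sum(a[k] * s**(k + 1) * (1.0 - s) for k in range(len(a)))
        return base + bump                                # correction vanishes at s = 0 and s = 1

    energy = lambda a: trapz(np.gradient(Y(a), s)**2, s)  # G_inv after arclength normalization
    closure = {"type": "eq", "fun": lambda a: trapz(np.sin(Y(a)), s)}
    res = minimize(energy, np.zeros(n_coef), constraints=[closure], method="SLSQP")
    return res.fun, Y(res.x)

E, Ycurve = least_energy_curve(np.deg2rad(20.0), np.deg2rad(35.0))
print(E)   # compare with E_spln and E_ang evaluated at the same angles

Already with a handful of coefficients the minimum agrees closely with the analytic approximations, consistent with the observation above that the simple analytic expressions are reached at an early stage of the numerical computation.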
Fig. 2 illustrates some of the completions obtained using
inv and the two analytic approximations E ang and E spln .
It can be seen that the differences between the three curves
are barely noticeable, except at large angles where E_spln
diverges. Notice especially the close agreement between
the curve obtained with the angular energy (Eq. (5)) and that
obtained with the scale-invariant elastica measure even in
large angles and when the angles deviate significantly from
cocircularity.
Note that although the spline curve does not approximate
the scale invariant elastica curve for large angles jF 1 j
and jF 2 j it still produces a reasonable completion for the
elements. In fact, when the two elements deviate from co-
circularity the elastica accumulates high curvature at one of
its ends, whereas the spline curve continues to roughly
follow the tangent to the two elements at both ends (see, e.g.,
Fig. 2(b)). This behavior is desirable especially when the
elements represent extended segments (see Section 5.2).
5. Completion field summation
Until now we have considered the problem of finding the
smoothest completion between pairs of edge elements. A
natural generalization of this problem is, given an image
from which M edge elements are extracted, find the most
likely completions connecting pairs of elements in the image
and rank them according to their likelihoods. This problem
has recently been investigated in [7, 19]. In these studies
affinity measures relating pairs of elements were defined.
The measures encourage proximity and smoothness of com-
pletion. Using the affinity measures the affinities induced
by an element over all other elements in the image (referred
to as the induction field of the element) are derived. The
likelihoods of all possible completions are then computed
simultaneously by a process which includes summation of
the induction fields for all M elements.
An important issue that was overlooked in previous ap-
proaches, however, is the issue of size of the edge elements.
Most studies of curve completion assume that the edge elements
are dimensionless. In practice, however, edge elements
are usually obtained by convolving the image with
filters of certain width and length. A proper handling of
scale must take these parameters into account. Thus, for
example, one may expect that scaling the distance between
two elements would not result in a change in the affinity of
the two elements if the elements themselves are scaled by the
same proportion. Below we first present the general type of
non-scaled induction underlying previous works. We then
modify that induction to properly account for the width and
length of the edge elements.
Finally, the process of summing the induction fields may
be computationally intensive. Nevertheless, in the third part
of this section we show that the summation kernel obtained
with our method is very smooth. Thus, the summation of
our induction fields can be speeded up considerably using a
multigrid algorithm. This result also applies to the summation
kernels in [19, 16, 7], and so an efficient implementation
of these methods can be obtained with a similar multigrid
algorithm.
5.1. Non-scaled induction
Figure 2. Completion curves: elastica in solid line, the curve minimizing the angular energy (Eq. (5)) in dotted line, and the cubic Hermite spline (Eq. (2)) in dashed line.
In [12, 19] a model for computing the likelihoods of curve
completions, referred to as Stochastic Completion Fields,
was proposed.
in the image emit particles which follow the trajectories
of a Brownian motion. It was shown that the most likely
path that a particle may take between a source element and
a sink element is the curve of least energy according to
the Elastica energy function 1 . To compute the stochastic
completion fields a process of summing the affinity measures
representing the source and sink fields was used. In [1]
we show, by further analyzing the results in [16], that the
affinity measure used for the induction in [19, 16] is of
the general type A(e_1, e_2), expressed in terms of the quantities
defined in Fig. 1(b), where r_0 and sigma_0 are strictly positive a-priori set
parameters. These parameters need to be adjusted properly
according to the scale involved (see Sec. 5.2). Note that
for small values of (|F_1|, |F_2|) the completion energy in the exponent approaches
G_el; hence the affinity behaves like exp(-G_el / sigma_0).
Another method which uses summation of induction
fields to compute the salience of curves was presented in [7].
In their method the affinity between two edge elements
which are cocircular has the form: e \Gammafl r e \Gammaffi- , where fl and ffi
are strictly positive constants, - is the curvature of the circle
connecting e 1 and e 2 , and r is the distance between e 1 and
e_2. A reasonable and straightforward definition in that spirit
is A-bar(e_1, e_2) = e^{-gamma r} e^{-delta E_spln}, where E_spln (Eq. (3)) serves as an approximation
to the completion energy. Fig. 3 shows
an example of computing the "stochastic completion field,"
suggested by Williams and Jacobs in [19], while replacing
their affinity measure with the simple expression A-bar.
It can be verified by comparing the fields obtained with our
affinity measure with the fields presented in [19] that the
results are very similar although a much simpler affinity
measure was employed.
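The following hypothetical sketch illustrates how such an induction field can be computed: it evaluates the affinity that one fixed element induces on every (position, orientation) cell of a grid. The kernel form exp(-gamma*r) * exp(-delta*E_ang) and the constants gamma, delta are our assumptions (Fig. 3 uses e^{-2r} e^{-20 E_spln}; the angular energy is used here because it stays bounded), and the function names are ours.

import numpy as np

def wrap(a):                                  # wrap an angle to (-pi, pi]
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def angular_energy(f1, f2):                   # scale-invariant angular completion energy
    return 4.0 * (f1**2 - f1 * f2 + f2**2)

def induction_field(p1, theta1, grid=64, n_orient=16, gamma=2.0, delta=20.0):
    field = np.zeros((grid, grid, n_orient))
    ys, xs = np.mgrid[0:grid, 0:grid]
    dx, dy = xs - p1[0], ys - p1[1]
    r = np.hypot(dx, dy) / grid               # chord length, normalized by the image size
    psi = np.arctan2(dy, dx)                  # orientation of the chord P1 -> P2
    f1 = wrap(theta1 - psi)                   # angle of the inducing element to the chord
    for k, theta2 in enumerate(np.linspace(0.0, np.pi, n_orient, endpoint=False)):
        f2 = wrap(psi - theta2)               # angle of the receiving element to the chord
        field[:, :, k] = np.exp(-gamma * r) * np.exp(-delta * angular_energy(f1, f2))
    return field

F = induction_field(p1=(32, 32), theta1=0.0)  # field induced by one horizontal element
print(F.shape)

Summing such fields over all inducing elements, in the spirit of [19], yields the completion field; the summation itself is the subject of Section 5.3.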
5.2. Induction and scale
Given an image, an edge element is produced by selecting
a filter of a certain length l and width w (e.g., rectangular
filters) and convolving the filter with the image at a certain
position and orientation. The result of this convolution is
a scalar value, referred to as the response of the filter. An
edge filter may, for example, measure the contrast along
its primary axis, in which case its response represents the
"edgeness level", or the likelihood of the relevant subarea
of the image to contain an edge of (l; w) scale. Similarly,
1 Actually, the path minimizes the energy functional \int_0^L (\kappa^2(s) + lambda)\,ds
for some predetermined constant lambda.
Figure 3. Stochastic completion fields (128 x 128 pixels, 36
orientations) with the induction e^{-2r} e^{-20 E_spln}, shown for two
configurations (a) and (b) of the inducing elements. The results closely
resemble those obtained in [19].
a filter may indicate the existence of fiber-like shapes in the
image, in which case its response represents the "fiberness
level" of the relevant subarea of the image. Below we use the
term "straight responses" to refer to the responses obtained
by convolving the image with an edge or a fiber filter.
Consider now the edge elements obtained by convolving
the image with a filter of some fixed length l and width w.
Every edge element now is positioned at a certain pixel P
and is oriented in two opposite directed orientations Y and
Y + pi. The number of edge elements required to faithfully
represent the image at this scale depends on l and w.
Thus, long and thin elements require finer resolution
in orientation than square elements. In fact, the orientational
resolution required to sample significantly different
orientations increases linearly with l/w (see [4]). Similarly,
elements of larger size require less spatial resolution than
elements of smaller size. Brandt and Dym ([4]) use these
observations in order to introduce a very efficient computation
(O(N log N), where N is the number of pixels in the
image) of all significantly different edge elements.
Given a particular scale determined by the length l and
width w of edge elements, we would like to compute a completion
field for this scale. Note that only curves within
a relevant range of curvature radii can arouse significant
responses for our l x w elements. Denote the smallest curvature
radius that will arouse a still significant response by
rho(l, w). (Larger curvature radii will arouse significant
responses also in larger l/w scales, implying therefore
a farther-reaching and more orientation-specific continuation.)
By Fig. 4(a) we see that w ~= rho (1 - cos(l / 2 rho)).
Consequently, we have (l / 2 rho)^2 ~= 2 w / rho,
implying that rho ~= l^2 / 8w.
Next, consider a pair of straight responses. Assuming
these elements are roughly cocircular, then, using the relations
defined in Fig. 4(b), the differential relation Y'(s) = 1/rho(s)
can be approximated by (Y_2 - Y_1)/r ~= 1/rho, so that W ~= r/rho.
Hence, for completion at a particular scale (l, w), it is reasonable
to define for every pair of points P_1 and P_2 a scale for
the turning angle W given by r/rho(l, w). That is, in the scale
(l, w) we define the completion energy between the pair of
straight responses so as to depend on the scaled turning angle
W rho / r. Since W = Y_2 - Y_1, it is straightforward to show
that 0.5 W is bounded by the element angles. A reasonable definition
for the scaled angular energy, therefore, is a monotonically
decreasing function of (rho/r).
Figure 4. (a) The relation between l, w, and the curvature radius
rho. (b) The turn W that a moving particle takes on its way between
two straight responses, each characterized by a planar location and
an orientation.
Obviously, in any given scale of straight responses (l, w),
for every F_1 and F_2, the induction of P_1 upon P_2 should
decrease with an increase of r/rho. Hence, we define the field
induced by an element e_1 of length l and width w at location
P_1 with directed orientation Y_1 on a similar element e_2 as
G^{(l,w)}(e_1, e_2) = u'_1 F_d(r/rho) F_t(W rho / r),
where u_1 denotes the strength of response at e_1, u'_1 is
some appropriate function of this response, and
F_d and F_t (the distance and turning attenuation functions,
respectively) are smoothly decreasing dimensionless functions
that should be determined by further considerations
and experience. Thus, our summation kernel is a product
of the orientational and the spatial components involved in
completing a curve between e 1 and e 2 . As we shall see
below, this definition has many computational advantages.
Let {u_i} denote the set of straight responses for a given
scale (l, w), where each u_i is associated with two directed
edges e_i = (P_i, Y_i) and e'_i = (P_i, Y_i + pi). The total
field induced at any element e_j = (P_j, Y_j)
is expressed by
v_j = \sum_i u'_i G^{(l,w)}(e_i, e_j),   (12)
and the total field induced at e'_j = (P_j, Y_j + pi)
is given by
v'_j = \sum_i u'_i G^{(l,w)}(e'_i, e'_j).   (13)
Since in general the responses obtained by convolving the
image with edge filters are bi-directional we may want to
combine these two fields into one. This can be done in
various ways. The simplest way is to take the sum {v_j + v'_j}
as the completion field. Another possibility, in the spirit of
[19], is to take the product {v_j v'_j}
as the completion field.
Note that the field of a long straight response should
be very different (farther-reaching and more orientation-
specific) than the sum of the fields of shorter elements
composing it, and should strongly depend on its width (see
Fig. 5). This suggests that for a comprehensive completion
process one must practice a multiscale process, performing
a separate completion within each scale. The scaled induction
field (10)-(11), avoids a fundamental difficulty of
non-scaled fields like [7, 19, 16]. The latter exhibit so weak
a completion for far elements, that it would be completely
masked out by local noise and foreign local features.
Figure 5. Induction fields (200 x 200 pixels) at different scales
using the attenuation functions F_d and F_t. (a) The induction field of one
long element. (b) The sum of the induction fields of the three shorter
elements composing this long element.
5.3. Fast multigrid summation of induction-fields
Let n be the number of sites (P), and m
the number of orientations (Y) at each site,
that are required in order to describe all the l x w straight
responses that are significantly different from each other.
It can be shown (see [4]) that if l and w are measured in
pixel units then, for any N-pixel picture, n = O(N / (l w)) and m = O(l / w),
so the total number of l x w elements
is O(N/w). Hence, for any geometric sequence of scales
(e.g., l=1,2,4,., and w=1,3,9,.) the total number of straight
elements is O(N log N). It has been shown (in [4]) that all
the responses at all these elements can be calculated in only
O(N log N) operations, using a multiscale algorithm that
constructs longer-element responses from shorter ones.
At any given scale l x w, it seems that the summations
(Eqs. (12) and (13)), summing over all i
for each value of j, would require a total
of O((nm)^2) operations (even though some of them
can be performed in parallel to each other, as in [20]).
However, using the smoothness properties of the particular
kernel (11), the summation can be reorganized in a
multiscale algorithm that totals only O(nm) operations
(and the number of unparallelizable steps grows only logarithmically
in nm). Indeed, the functions in (11) would
usually take on the typical form of decaying exponentials in their
arguments. For such choices of the functions,
and practically for any other reasonable choice, the kernel
G has the property
of "asymptotic smoothness." By this we mean that any q-
order derivative of G with respect to any of its six arguments
decays fast with the distance r_ij between the elements,
and the
higher q is the faster the decay is. Also, for any fixed r_ij
(even the smallest, i.e., r_ij = O(l)), G is a very smooth
function of Y_i and of Y_j.
Due to the asymptotic smoothness, the total contribution
to v_j (and v'_j) from
elements far from P_j is a smooth function
of (P_j, Y_j); it need not be computed separately for
each j, but can be interpolated (q-order interpolation, with as
small an error as desired by using sufficiently high q) from
its values at a few representative points. For this and similar
reasons, multiscale algorithms, which split the summations
into various scales of farness (see details in [3]) can perform
all the summations in merely O(nm) operations.
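For reference, the brute-force version of this summation is shown below: every straight response induces on every other one, giving the O((nm)^2) cost quoted above, which the multiscale algorithm of [3] reorganizes into O(nm) operations. The kernel form and the constants are our assumptions (reusing the non-scaled affinity sketch of Section 5.1), and the function names are ours.

import numpy as np

def wrap(a):
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def kernel(p_i, th_i, p_j, th_j, gamma=2.0, delta=20.0):
    d = np.asarray(p_j, float) - np.asarray(p_i, float)
    r = np.hypot(d[0], d[1]) + 1e-12
    psi = np.arctan2(d[1], d[0])
    f1, f2 = wrap(th_i - psi), wrap(psi - th_j)
    e_ang = 4.0 * (f1**2 - f1 * f2 + f2**2)
    return np.exp(-gamma * r) * np.exp(-delta * e_ang)

def completion_field(elements, responses):
    """elements: list of (position, orientation) pairs; responses: strengths u_i."""
    v = np.zeros(len(elements))
    for j, (pj, thj) in enumerate(elements):
        v[j] = sum(responses[i] * kernel(pi, thi, pj, thj)
                   for i, (pi, thi) in enumerate(elements) if i != j)
    return v

Because the kernel is asymptotically smooth, the inner sum over far-away elements varies slowly with (P_j, Y_j), which is exactly the property the multigrid summation exploits.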
6. Conclusion
Important problems in perceptual grouping are the detection
of smooth curves in images and their completion
over gaps. In this paper we have simplified the computation
involved in the process of completion, exploiting the
smoothness of the solution to the problem, and have defined
affinity measures for completion that take into a proper account
the scale of edge elements. In particular, we have
introduced new, closed-form approximations for the elas-
tica energy functional and presented a fast numeric method
to compute the curve of least energy. In this method the
error decreases exponentially with the number of discrete
elements. We then have used our approximations to define
an affinity measure which takes into account the width
and length of the edge elements by considering the range
of curvatures that can be detected with corresponding filters
of the same scale. Finally, we have shown that solutions
to the problem of finding the most likely completions in an
image can be implemented using a multigrid algorithm in
time that is linear in the number of discrete edge elements
in the image. This last observation applies also to recent
methods for completion and salience [7, 19]. In the future
we intend to use the multigrid algorithm to simultaneously
detect completions at different scales in order to combine
these completions into a single saliency map.
--R
"Completion energies and scale,"
"Shape Encoding and Subjective Contours,"
"Multilevel computations of integral transforms and particle interactions with oscillatory kernels,"
"Fast computation of multiple line integrals,"
"On Minimal Energy Trajec- tories,"
"The Role of Illusory Contours in Visual Segmentation,"
"Inferring Global Perceptual Contours from Local Features,"
"A Computational Model of Neural Contour Processing: Figure-Ground Segregation and Illusory Contours,"
"The Curve of Least Energy,"
"Organization in Vision,"
"Improve- ment in visual sensitivity by changes in local context: Parallel studies in human observers and in V1 of alert monkeys,"
"Elastica and Computer Vision,"
"Handbook of applied mathematics - Second Edition,"
"Shape Completion,"
"Structural Saliency: The Detection of Globally Salient Structures Using a Locally Connected Network,"
"Analytic Solution of Stochastic Completion Fields,"
"Filling-In the Gaps: The Shape of Subjective Contours and a Model for Their Generation,"
"3D Shape Representation by Contours,"
"Stochastic Completion Fields: A Neural Model of Illusory Contour Shape and Salience,"
"Local Parallel Computation of Stochastic Completion Fields,"
--TR
--CTR
Washington Mio , Anuj Srivastava , Xiuwen Liu, Contour Inferences for Image Understanding, International Journal of Computer Vision, v.69 n.1, p.137-144, August 2006
Sylvain Fischer , Pierre Bayerl , Heiko Neumann , Rafael Redondo , Gabriel Cristbal, Iterated tensor voting and curvature improvement, Signal Processing, v.87 n.11, p.2503-2515, November, 2007
Marie Rochery , Ian H. Jermyn , Josiane Zerubia, Higher-Order Active Contour Energies for Gap Closure, Journal of Mathematical Imaging and Vision, v.29 n.1, p.1-20, September 2007
Dan Kushnir , Meirav Galun , Achi Brandt, Fast multiscale clustering and manifold identification, Pattern Recognition, v.39 n.10, p.1876-1891, October, 2006
Fragment-based image completion, ACM Transactions on Graphics (TOG), v.22 n.3, July
Benjamin B. Kimia , Ilana Frankel , Ana-Maria Popescu, Euler Spiral for Shape Completion, International Journal of Computer Vision, v.54 n.1-3, p.157-180, August-September
Song Wang , Joachim S. Stahl , Adam Bailey , Michael Dropps, Global Detection of Salient Convex Boundaries, International Journal of Computer Vision, v.71 n.3, p.337-359, March 2007
Jonas August , Steven W. Zucker, Sketches with Curvature: The Curve Indicator Random Field and Markov Processes, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.387-400, April
Xiaofeng Ren , Charless C. Fowlkes , Jitendra Malik, Learning Probabilistic Models for Contour Completion in Natural Images, International Journal of Computer Vision, v.77 n.1-3, p.47-63, May 2008
Song Wang , Toshiro Kubota , Jeffrey Mark Siskind , Jun Wang, Salient Closed Boundary Extraction with Ratio Contour, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.4, p.546-561, April 2005 | elastica curve;multiscale;induction field;completion field;perceptual grouping;fast summation;scale;least-energy curve;curve completion;curve saliency |
354191 | Self-Calibration of a 1D Projective Camera and Its Application to the Self-Calibration of a 2D Projective Camera. | AbstractWe introduce the concept of self-calibration of a 1D projective camera from point correspondences, and describe a method for uniquely determining the two internal parameters of a 1D camera, based on the trifocal tensor of three 1D images. The method requires the estimation of the trifocal tensor which can be achieved linearly with no approximation unlike the trifocal tensor of 2D images and solving for the roots of a cubic polynomial in one variable. Interestingly enough, we prove that a 2D camera undergoing planar motion reduces to a 1D camera. From this observation, we deduce a new method for self-calibrating a 2D camera using planar motions. Both the self-calibration method for a 1D camera and its applications for 2D camera calibration are demonstrated on real image sequences. | Introduction
A CCD camera is commonly modeled as a 2D projective device that projects a point in P 3 (the
projective space of dimension 3) to a point in P 2 . By analogy, we can consider what we call a 1D
projective camera which projects a point in P 2 to a point in P 1 . This 1D projective camera may seem
very abstract, but many imaging systems using laser beams, infra-red or ultra-sound acting only on a
source plane can be modeled this way. What is less obvious, but more interesting for our purpose, is
that in some situations, the usual 2D camera model is also closely related to this 1D camera model.
One first example might be the case of the 2D affine camera model operating on line segments: The
direction vectors of lines in 3D space and in the image correspond to each other via this 1D projective
camera model [21]. Other cases will be discussed later.
In this paper, we first introduce the concept of self-calibration of a 1D projective camera by analogy
to that of a 2D projective camera which is a very active topic [17, 12, 7, 13, 1, 29, 20] since the
pioneering work of [18]. It turns out that the theory of self-calibration of 1D camera is considerably
simpler than the corresponding one in 2D. It is essentially determined in a unique way by a linear
algorithm using the trifocal tensor of 1D cameras. After establishing this result, we further investigate
the relationship between the usual 2D camera and the 1D camera. It turns out that a 2D camera
undergoing a planar motion can be reduced to a 1D camera on the trifocal line of the 2D cameras.
This remarkable relationship allows us to calibrate a real 2D projective camera using the theory of
self-calibration of a 1D camera. The advantage of doing so is evident. Instead of solving complicated
Kruppa equations for 2D camera self-calibration, exact linear algorithm can be used for 1D camera
self-calibration. The only constraint is that the motion of the 2D camera should be restricted to planar
motions. The other applications, including 2D affine camera calibration, are also briefly discussed.
Part of this work was also presented in [10].
The paper is organised as follows. In Section 2, we review the 1D projective camera and its
trifocal tensor. Then, an efficient estimation of the trifocal tensor is discussed in Section 3. The
theory of self-calibration of a 1D camera is introduced and developed in Section 4. After pointing
out some direct applications of the theory in Section 5, we develop in Section 6 a new method of 2D
camera self-calibration by converting a 2D camera undergoing planar motions into a 1D camera. The
experimental results on both simulated and real image sequences are presented in Section 7. Finally,
some concluding remarks and future directions are given in Section 8.
Throughout the paper, vectors are denoted in lower case boldface, matrices and tensors in upper
case boldface. Some basic tensor notation is used: Covariant indices as subscripts, contravariant
indices as superscripts and the implicit summation convention.
projective camera and its trifocal tensor
We will first review the one-dimensional camera which was abstracted from the study of the geometry
of lines under affine cameras [21]. We can also introduce it directly by analogy to a 2D projective
camera.
A 1D projective camera projects a point x = (x_1, x_2, x_3)^T of P^2 (the projective plane) to a point
u = (u_1, u_2)^T of P^1 (the projective line). This projection may be described by a 2 x 3 homogeneous
matrix M as
lambda u = M_{2x3} x.
We now examine the geometric constraints available for points seen in multiple views similar to
the 2D camera case [23, 24, 13, 28, 9]. There is a constraint only in the case of 3 views, as there is no
any constraint for 2 views (two projective lines always intersect in a point in a projective plane).
Let the three views of the same point x be given as
lambda u = M x,   lambda' u' = M' x,   lambda'' u'' = M'' x.   (1)
These can be rewritten in matrix form as
[ M  u  0  0 ; M'  0  u'  0 ; M''  0  0  u'' ] (x, -lambda, -lambda', -lambda'')^T = 0.
The vector (x, -lambda, -lambda', -lambda'')^T cannot be zero, so
det [ M  u  0  0 ; M'  0  u'  0 ; M''  0  0  u'' ] = 0.   (2)
The expansion of this determinant produces a trifocal constraint for the three views:
T_{ijk} u_i u'_j u''_k = 0,   (3)
where T_{ijk} is a 2 x 2 x 2 homogeneous tensor whose components T_{ijk} are 3 x 3 minors (involving
all three views) of the 6 x 3 joint projection matrix [ M ; M' ; M'' ].
The components of the tensor can be made explicit as the minors formed by the
i-bar-th, j-bar'-th and k-bar''-th row vectors of the above joint projection
matrix, where the bar in i-bar, j-bar and k-bar denotes the mapping (1, 2) -> (2, -1).
It can be easily seen that any constraint obtained by adding further views reduces to a trilinearity.
This proves the uniqueness of the trilinear constraint. Moreover, the 2 \Theta 2 \Theta 2 homogeneous tensor
has so it is a minimal parametrization of three views in the uncalibrated
setting since three views have exactly 3 \Theta (2 \Theta 3 \Gamma to a projective
transformation in P 2 .
This result for the one-dimensional projective camera is very interesting. The trifocal tensor
encapsulates exactly the information needed for projective reconstruction in P 2 . Namely, it is the
unique matching constraint, it minimally parametrizes the three views and it can be estimated linearly.
Contrast this to the 2D image case in which the multilinear constraints are algebraically redundant
and the linear estimation is only an approximation based on over-parametrization.
3 Estimation of the trifocal tensor of a 1D camera
Each point correspondence u <-> u' <-> u'' in 3 views yields one homogeneous linear equation for the
8 tensor components T_{ijk}, i, j, k = 1, 2:
T_{ijk} u_i u'_j u''_k = 0.
With at least 7 point correspondences, we can solve for the tensor components linearly.
A careful normalisation of the measurement matrix is nevertheless necessary, just like that stressed
in [11] for the linear estimation of the fundamental matrix. The points in each image are first translated
so that the centroid of the points is the origin of the image coordinates, then scaled so that the average
distance of the points from the origin is 1. This is achieved by an affine transformation of the image
coordinates in each image.
With these normalised image coordinates, the normalised tensor components T-bar_{ijk} are linearly
estimated by SVD from
T-bar_{ijk} u-bar_i u-bar'_j u-bar''_k = 0.
The original tensor components T_{ijk} are recovered by de-scaling the normalised tensor T-bar_{ijk},
i.e. by applying the inverse of the normalising affine transformations of the three views.
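A minimal sketch of this linear estimation is given below (ours, not from the paper): each view's inhomogeneous 1D coordinates are normalized by an affine transformation, one equation per triple is stacked, the normalized tensor is taken as the smallest right singular vector, and the result is de-normalized. Function names and the exact normalization constant are our choices.

import numpy as np

def normalize(u):
    """Affine normalization u_bar = (u - c)/s; returns normalized coords and the 2x2 matrix
    acting on homogeneous (u, 1)."""
    c = u.mean()
    s = np.abs(u - c).mean() + 1e-12
    N = np.array([[1.0 / s, -c / s], [0.0, 1.0]])
    return (u - c) / s, N

def estimate_trifocal_1d(u1, u2, u3):
    """u1, u2, u3: 1D arrays (length >= 7) of inhomogeneous point coordinates in the 3 views."""
    (v1, N1), (v2, N2), (v3, N3) = normalize(u1), normalize(u2), normalize(u3)
    A = []
    for a, b, c in zip(v1, v2, v3):
        ha, hb, hc = np.array([a, 1.0]), np.array([b, 1.0]), np.array([c, 1.0])
        A.append(np.kron(ha, np.kron(hb, hc)))          # one linear equation per triple
    _, _, Vt = np.linalg.svd(np.asarray(A))
    Tbar = Vt[-1].reshape(2, 2, 2)                      # normalized tensor, unit Frobenius norm
    # de-normalize: T_ijk = sum_abc Tbar_abc (N1)_ai (N2)_bj (N3)_ck
    T = np.einsum('abc,ai,bj,ck->ijk', Tbar, N1, N2, N3)
    return T / np.linalg.norm(T)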
4 Self-calibration of a 1D camera from 3 views
The concept of camera self-calibration using only point correspondences became popular in computer
vision community following Maybank and Faugeras [18] by solving the so-called Kruppa equations.
The basic assumption is that the internal parameters of the camera remain invariant. In the case of
the 2D projective camera, the internal calibration (the determination of the 5 internal parameters) is
equivalent to the determination of the image ! of the absolute conic in P 3 .
4.1 The internal parameters of a 1D camera and the circular points
For a 1D camera represented by a 2 x 3 projection matrix M_{2x3}, this projection matrix can always be
decomposed into
M_{2x3} = K_{2x2} ( R_{2x2}  t_{2x1} ),
where K_{2x2} = ( alpha  u_0 ; 0  1 )
represents the two internal parameters: alpha the focal length in pixels and u_0
the position of the principal point; the external parameters are represented by a 2 x 2 rotation matrix
R_{2x2} = ( cos theta  -sin theta ; sin theta  cos theta )
and the translation vector t_{2x1}.
The object space for a 1D camera is a projective plane, and any rigid motion of the plane leaves
invariant the two circular points I and J of the plane (a pair of complex conjugate points on its line at
infinity). Similar to the 2D camera case, where the knowledge of the internal parameters is equivalent
to that of the image of the absolute conic, the knowledge of the internal parameters of a 1D camera is
equivalent to that of the image points i and j of the circular points in P 2 .
The relationship between the image of the circular points and the internal parameters of the 1D
camera follows directly by projecting one of the circular points, I = (i, 1, 0)^T with i = sqrt(-1), by
the camera M_{2x3} = K_{2x2} ( R_{2x2}  t_{2x1} ), which gives the image point
i ~ K_{2x2} R_{2x2} (i, 1)^T ~ (u_0 + i alpha, 1)^T.
It clearly appears that the real part of the ratio of the projective coordinates of the image of the
circular point i is the position of the principal point u_0,
and the imaginary part is the focal length alpha.
4.2 Determination of the images of the circular points
Our next task is to locate the circular points in the images. Let us consider one of the circular points,
say I . This circular point is projected onto i, i 0 and i 00 in the three views. As they should be invariant
because of our assumption that the internal parameters of the camera are constant, we have
i = i' = i'' =: (z, 1)^T,
where z is an unknown complex number.
The triplet of corresponding points (i, i', i'') satisfies the trilinear constraint (3) as all corresponding
points do; therefore, T_{ijk} i_i i'_j i''_k = 0. This yields the following cubic
equation in the unknown z:
T_{111} z^3 + (T_{112} + T_{121} + T_{211}) z^2 + (T_{122} + T_{212} + T_{221}) z + T_{222} = 0.   (4)
A cubic polynomial in one unknown with real coefficients has in general either three real roots or one
real root and a pair of complex conjugate roots. The latter case of one real and a pair of complex
conjugates is obviously the case of interest here. In fact, Equation (4) characterizes all the points of
the projective plane which have the same coordinates in three views. This is reminiscent of the 3D
case where one is interested in the locus of all points in space that project onto the same point in
two views (see Section 6). The result that we have just obtained is that in the case where the internal
parameters of the camera are constant, there are in general three such points: the two circular points
which are complex conjugate, and a real point with the following geometric interpretation.
Consider first the case of two views and let us ask the question, what is the set of points such
that their images in the two views are the same? This set of points can be called the 2D horopter
(h) of the set of two 1D views. Since the two cameras have the same internal parameters we can
ignore them and assume that we work with the calibrated pixel coordinates. In that case, a camera
can be identified to an orthonormal system of coordinates centered at the optical center, one axis is
parallel to the retina, the other one is the optical axis. The two views correspond to each other via a
rotation followed by a translation. This can always be described in general as a pure rotation around a
point A whose coordinates can easily be computed from the cameras' projection matrices. A simple
computation then shows that the horopter (h) is the circle going through the two optical centers and
A, as illustrated in Figure 1.a. In fact it is the circle minus the two optical centers. Note that since all
circles go through the circular points (hence their name), they also belong to the horopter curve, as
expected.
In the case of three views, the real point, when it exists, must be at the intersection of the horopter
of the first two views and the horopter (h 23 ) of the last two views. The first one is a circle going
through the optical centers C 1
and C 2
, the second one is a circle going through the optical centers C 2
and C 3
. Those two circles intersect in general at a second point C which is the real point we were
discussing, and the third circle (h 13
corresponding to the first and third views must also go through
the real point C, see Figure 1.b.
a.
First camera
Second camera
center of rotation
b.C
Second camera
C2First camera
Third camera
Figure
1: a. The two dimensional horopter which is the set of points having the same coordinates in
the 2 views (see text). b. The geometric interpretation of the real point C which has the same images
in all three views (see text).
We have therefore established the interesting result that the internal parameters of a 1D camera can
be uniquely determined through at least 7 point correspondences in 3 views: the seven points yield
the trifocal tensor and Equation (4) yields the internal parameters.
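The last step can be summarized in a few lines of code (ours): form the cubic of Eq. (4) from the tensor, take one of the complex conjugate roots as the image of a circular point, and read off the internal parameters. The indexing assumes the tensor layout of the estimation sketch above (index 0 multiplies z), which is our convention.

import numpy as np

def self_calibrate_1d(T):
    c3 = T[0, 0, 0]
    c2 = T[0, 0, 1] + T[0, 1, 0] + T[1, 0, 0]
    c1 = T[0, 1, 1] + T[1, 0, 1] + T[1, 1, 0]
    c0 = T[1, 1, 1]
    roots = np.roots([c3, c2, c1, c0])
    complex_roots = roots[np.abs(roots.imag) > 1e-8]   # images of the two circular points
    if len(complex_roots) == 0:
        raise ValueError("no complex conjugate roots: degenerate configuration")
    z = complex_roots[0]
    u0, alpha = z.real, abs(z.imag)                    # principal point and focal length
    return u0, alpha

The remaining real root, if present, corresponds to the real fixed point C discussed above.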
Applications
The theory of self-calibration of 1D camera is considerably simpler than the corresponding one in
2D [18] and can be directly used whenever a 1D projective camera model occurs, for instance: self-calibration
of some active systems using laser beams, infra-red [3] or ultra-sound whose imaging
system is basically reduced to a 1D camera on the source plane; and partial/full self-calibration of 2D
projective camera using planar motions.
The first type of applications is straightforward. The interesting observation is that the 1D calibration
procedure can also be used for self-calibrating a real 2D projective camera if the camera motion
is restricted to planar motions. This is discussed in detail in the remaining of this paper.
6 Calibrating a 2D projective camera using planar motions
A planar motion consists of a translation in a plane and a rotation about an axis perpendicular to that
plane. Planar motion is often performed by a vehicle moving on the ground, and has been used for
camera self-calibration by Beardsley and Zisserman [4] and by Armstrong et al. [1].
Recall that the self-calibration of a 2D projective camera [8, 18] consists of determining the 5
unchanging internal parameters of a 2D camera, represented by a 3 x 3 upper triangular matrix K.
This is mathematically equivalent to the determination of the image of the absolute conic
omega, which is a plane conic described by x^T C x = 0 with C = (K K^T)^{-1}. Given the image
of the absolute conic, the calibration matrix K can be found from C using the Choleski
decomposition.
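This last recovery step is short enough to show explicitly; the sketch below (ours) assumes the estimated conic matrix C has been symmetrized and scaled so that it is positive definite.

import numpy as np

def calibration_from_absolute_conic(C):
    C = 0.5 * (C + C.T)          # symmetrize the fitted conic matrix
    L = np.linalg.cholesky(C)    # C = L L^T with L lower triangular (C assumed pos. definite)
    K = np.linalg.inv(L).T       # then K K^T = C^{-1}, with K upper triangular
    return K / K[2, 2]           # normalize so that K[2, 2] = 1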
Converting 2D images into 1D images
For a given planar motion, the trifocal plane-the plane through the camera centers-of the camera
is coincident with the motion plane as the camera is moving on it. Therefore the image location
of the motion plane is the same as the trifocal line which could be determined from fundamental
matrices. The determination of the image location of the motion plane has been reported in [1, 4].
Obviously, if we restrict the working space to the trifocal plane, we have a perfect 1D projective camera
model which projects the points of the trifocal plane onto the trifocal line in the 2D image plane,
as the trifocal line is the image of the trifocal plane. In practice, very few or no points at all actually
lie on the trifocal plane. However, we may virtually project any 3D point onto the trifocal plane;
from this comes the central idea of our method: the 2D images of a camera undergoing any planar
motion reduce to 1D images by projecting the 2D image points onto the trifocal line. This can be
achieved in at least two ways.
First, if the vanishing point v of the rotation axis is well-defined. This vanishing point of the
rotation axis being the direction perpendicular to the common plane of motion can be determined
from fundamental matrices by noticing that the image of the horopter for planar motion degenerates
to two lines [1], one of which goes through the vanishing point of the rotation axis, we may refer to
[1] for more details.
Given a 3D point M with image m, we mentally project it to M-bar in the plane of motion, the
projection being parallel to the direction of rotation. The image m-bar
of this virtual point can be obtained
in the image as the intersection of the line v x m with the trifocal line t, i.e.
m-bar ~ t x (v x m). Since
the vanishing point v of the rotation axis and the trifocal line t are well defined, this construction
illustrated in Figure 2 is a well-defined geometric operation.
Note this is also a projective projection from P^2 (the image plane) to P^1 (the trifocal line): m -> m-bar, as
is illustrated in Figure 3.
Alternatively, if the vanishing point is not available, we can nonetheless create the virtual points
in the trifocal plane. Given two points M and M' with images m and m', the line (M, M') intersects
the plane of motion in M-bar. The image m-bar of this virtual point can be obtained in the image as the
intersection of the line (m, m') with the trifocal line t, see Figure 4.
Another important consequence of this construction is that 2D image line segments can also be
converted into 1D image points! The construction is even simpler, as the resulting 1D image point is
just the intersection of the line segment with the trifocal line.
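The point construction described above amounts to a few cross products in homogeneous coordinates; the sketch below (ours) also shows the even simpler reduction of a line segment. The choice of 1D coordinate frame on the trifocal line (origin and unit direction) is our own and is not prescribed by the method.

import numpy as np

def to_1d_coordinate(m, v, t, origin, direction):
    """m, v: homogeneous 2D image point and vanishing point; t: homogeneous trifocal line."""
    m_bar = np.cross(t, np.cross(v, m))          # intersect the line v x m with t
    p = m_bar[:2] / m_bar[2]                     # inhomogeneous point on the trifocal line
    return float(np.dot(p - origin, direction))  # signed coordinate along the line

def segment_to_1d_point(line, t):
    """A 2D image line segment (given by its homogeneous line) reduces to the intersection with t."""
    return np.cross(line, t)                     # homogeneous 1D image point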
Figure 2: Creating a 1D image from a 2D image using the vanishing point of the rotation axis and the
trifocal line (see text).
Figure 3: Converting 2D image points into 1D image points in the image plane is equivalent to a
projective projection from the image plane to the trifocal line with the vanishing point of the rotation
axis as the projection center.
Figure
4: Creating a 1D image from any pairs of points or any line segments (see text).
1D Self-calibration
At this point, we have obtained the interesting result that a 1D projective camera model is obtained
by considering only the re-projected points on the trifocal line for a planar motion. The 1D self-calibration
method just described in Section 4 will allow us to locate the image of the circular points
common to all planes parallel to the motion plane.
Estimation of the image of the absolute conic for the 2D camera
Each planar motion generally gives us two points on the absolute conic, together with the vanishing
point of the rotation axes as the pole of the trifocal line w.r.t. the absolute conic. The pole/polar
relation between the vanishing point of the rotation axes and the trifocal line was introduced in [1].
As a whole, this provides 4 constraints on the absolute conic. Since a conic has 5 d.o.f., at least two
different planar motions, yielding 8 linear constraints on the absolute conic, will be sufficient to
determine the full set of 5 internal parameters of a general 2D camera by fitting a general conic of
the form x^T C x = 0 to these constraints.
If we assume a 4-parameter model for camera calibration with no image skew, one
planar motion yielding 4 constraints is generally sufficient to determine the 4 internal parameters of
the 2D camera. However, this is not true for the very common planar motions such as purely horizontal
or vertical motions with the image plane perpendicular to the motion plane. It can be easily proven that
there are only 3 instead of 4 independent constraints on the absolute conic in these configurations. We
need at least 2 different planar motions for determining the 4 internal parameters.
This also suggests that even if the planar motion is not purely horizontal or vertical, but close, the
vanishing point of the rotation axes only constrains loosely the absolute conic. Using only the circular
points located on the absolute conic is preferable and numerically stable, but we may need at least
3 planar motions to determine the 5 internal parameters of the 2D camera. Note that the numerical
instability of the vanishing point for nearly horizontal trifocal line was already reported by Armstrong
in [2]. Obviously, if we work with a 3-parameter model with known aspect ratio and without skew,
one planar motion is sufficient [1].
As we have mentioned at the beginning of this section the method described in this section is
related to the work of Armstrong et al. [1], but there are some important differences which we explain
now.
ffl First, our approach gives an elegant insight of the intricate relationship between 2D and 1D
cameras for a special kind of motion called planar motion.
ffl Second, it allows us to use only the fundamental matrices of the 2D images and the trifocal
tensor of 1D images to self-calibrate the camera instead of the trifocal tensor of 2D images.
It is now well known that fundamental matrices can be very efficiently and robustly estimated
[31, 27]. The same is true of the estimation of the 1D trifocal tensor [21] which is a linear
process. Armstrong et al., on the other hand, use the trifocal tensor of 2D images which so
far has been hard to estimate due to complicated algebraic constraints to our knowledge. Also,
the trifocal tensor of 2D images takes a special form in the planar motion case [1] and the new
constraints have to be included in the estimation process.
It may be worth mentioning that in the case of interest here, planar motion of the cameras,
the Kruppa equations become degenerate [30] and recovering the internal parameters is impossible
from the Kruppa equations. Since it is known that the trifocal tensor of 2D images is
algebraically equivalent to the three fundamental matrices plus the restriction of the trifocal tensor
to the trifocal plane [14, 15, 9], our method can be seen as an inexpensive way of estimating
the full trifocal tensor of 2D images: first estimate the three fundamental matrices (nonlinear
but simple and well understood), then estimate the trifocal tensor in the trifocal plane (linear).
Although it may look superficially as if both the 1D and 2D trifocal tensors can be estimated linearly
with at least 7 image correspondences, this is misleading since the estimation of the 1D trifocal
tensor is exactly linear for its 7 d.o.f. whereas the linear estimation of the 2D trifocal tensor is only
a rough approximation based on a set of 26 auxiliary parameters for its 18 d.o.f. and obtained
by neglecting 8 complicated algebraic constraints.
ffl Third, but this is a minor point, our method may not require the estimation of the vanishing
point of the rotation axes.
7 Experimental results
The theoretical results for 1D camera self-calibration and its applications to 2D camera calibration
have been implemented and experimented on synthetic and real images. Due to space limitation,
we are not to present the results on synthetic data, the algorithms generally perform very well. We
only show some real examples. Here we consider a scenario of a real camera mounted on a robot's
arm. Two sequences of images are acquired by the camera moving in two different planes. The first
sequence contains 7 (indexed from 16 to 22) images (cf. Figure 5) and the second contains 8 (indexed
from 8 to 15).
A calibration grid was used to obtain the ground truth for the internal camera parameters, which
have been measured beforehand using the standard
calibration method [6].
Figure
5: Three images of the first planar motion.
We take triplets of images from the first sequence and for each triplet we estimate the trifocal
line and the vanishing point of the rotation axes by the 3 fundamental matrices of the triplet. The
1D self-calibration is applied for estimating the images of the circular points along the trifocal lines.
To evaluate the accuracy of the estimation, the images of the circular points of the trifocal plane are
re-computed in the image plane from the known internal parameters by intersecting the image of the
absolute conic with the trifocal line. Table 1 shows the results for different triplets of images of the
first sequence.
Image triplet Fixed point Circular points by self-calibration Circular points by calibration
Table
1: Table of the estimated positions of the images of the circular points by self-calibration with
different triplets of images of the first sequence. The quantities are expressed in the first image pixel
coordinate system. The location of circular points by calibration vary as the trifocal line location
varies.
Since we have more than 3 images for the same planar motion of the camera, we could also
estimate the trifocal line and the vanishing point of the rotation axes by using all available fundamental
matrices of the 7 images of the sequence. The results using redundant images are presented for
different triplets in Table 2. We note the slight improvement of the results compared with those
presented in Table 1.
Image triplet Circular points Fixed point
known position by calibration 262.1 +- i 2590.6
Table
2: Table of the estimated positions of the image of circular points with different triplets of
images. These quantities vary because the 1D trifocal tensor varies. The trifocal line and the vanishing
point of the rotation axes are estimated using 7 images of the sequence instead of the minimum of 3
images.
The same experiment was carried out for the other sequence of images where the camera underwent
a different planar motion. Similar results to those of the first image sequence were obtained; we give only
the result for one triplet of images in Table 3 for this sequence.
Image triplet Fixed point Circular points by self-calibration Circular points by calibration
Table
3: Table of the estimated position of the image of circular points with one triplet of second
image sequence.
Now two sequences of images each corresponding to a different planar motion yield four distinct
imaginary points on the image plane which must be on the image ! of the absolute conic. Assuming
that there is no camera skew, we could fit to those four points an imaginary ellipse using standard
techniques and compute the resulting internal parameters. Note that we did not use the pole/polar
constraint of the vanishing point of the rotation axes on the absolute conic, as it was discussed in
Section 6 that this constraint is not numerically reliable.
To have an intuitive idea of the planar motions, the two trifocal lines together with one image are
shown in Figure 6.
Figure 6: The image of the motion planes of the two planar motions.
The ultimate goal of self-calibration is to get 3D metric reconstruction. 3D reconstruction from
two images of the sequence is performed by using the estimated internal parameters as illustrated in
Figure
7. To evaluate the reconstruction quality, we did the same reconstruction using the known
internal parameters. Two such reconstructions differ merely by a 3D similarity transformation which
could be easily estimated. The resulting relative error for normalised 3D coordinates by similarity
between the reconstruction from self-calibration and off-line calibration is 3:4 percent.
Figure 7: Two views of the resulting 3D reconstruction by self-calibration.
8 Conclusions and other applications
We have first established that the 2 internal parameters of a 1D camera can be uniquely determined
through the trifocal tensor of three 1D images. Since the trifocal tensor can be estimated linearly
from at least 7 points in three 1D images, the 1D self-calibration method is a genuinely linear method
(modulo the fact that we have to find the roots of a third-degree polynomial in one variable); no
over-parameterisation is introduced.
Secondly, we have proven that if a 2D camera undergoes a planar motion, the 2D camera reduces
to a 1D camera in the plane of motion. The reduction of a 2D image to a 1D image can be efficiently
performed by using only the fundamental matrices of 2D images. Based on this relation between 2D
and 1D images, the self-calibration of 1D camera can be applied for self-calibrating a 2D camera.
Our experimental results based on real image sequences show the very large stability of the solutions
yielded by the 1D self-calibration method and the accurate 3D metric reconstruction that can be
obtained from the internal parameters of the 2D camera estimated by the 1D self-calibration method.
The camera motions that may defeat the self-calibration method developed in Section 4 are described
in [26].
--R
Invariancy Methods for Points
Affine calibration of mobile vehicles.
The twisted cubic and camera calibration.
Camera Calibration for 3D Computer Vision
Stratification of three-dimensional vision: Projective
Motion from point matches: Multiplicity of solutions.
About the correspondences of points between n images.
In defence of the 8-point algorithm
Euclidean reconstruction from uncalibrated views.
A linear method for reconstruction from lines and points.
Geometry and Algebra of Multiple Projective Transformations.
Algebraic Properties of Multilinear Constraints.
A Common Framework for Multiple-View Tensors
A theory of self calibration of a moving camera.
Affine structure from line correspondences with uncalibrated affine cameras.
Algebraic Projective Geometry.
Algebraic functions for recognition.
A unified theory of structure from motion.
Critical Motion Sequences for Monocular Self-Calibration and Uncalibrated Euclidean Recon- struction
Vision 3D non calibr-ee : contributions - a la reconstruction projective et - etude des mouvements critiques pour l'auto-calibrage
Performance characterization of fundamental matrix estimation under image degradation.
Matching constraints and the joint image.
Autocalibration and the Absolute Quadric.
Camera self-calibration from video sequences: the Kruppa equations revisited
A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry.
--TR
--CTR
Mandun Zhang , Jian Yao , Bin Ding , Yangsheng Wang, Fast individual face modeling and animation, Proceedings of the second Australasian conference on Interactive entertainment, p.235-239, November 23-25, 2005, Sydney, Australia
A. C. Murillo , C. Sags , J. J. Guerrero , T. Goedem , T. Tuytelaars , L. Van Gool, From omnidirectional images to hierarchical localization, Robotics and Autonomous Systems, v.55 n.5, p.372-382, May, 2007 | self-calibration;camera model;vision geometry;planar motion;1D camera |
354357 | Look-Ahead Procedures for Lanczos-Type Product Methods Based on Three-Term Lanczos Recurrences. | Lanczos-type product methods for the solution of large sparse non-Hermitian linear systems either square the Lanczos process or combine it with a local minimization of the residual. They inherit from the underlying Lanczos process the danger of breakdown. For various Lanczos-type product methods that are based on the Lanczos three-term recurrence, look-ahead versions are presented which avoid such breakdowns or near-breakdowns at the cost of a small computational overhead. Different look-ahead strategies are discussed and their efficiency is demonstrated by several numerical examples. | Introduction
. Lanczos-type product methods (LTPMs) like (Bi)CGS [42],
BiCGStab [44], BiCGStab2 [22], and BiCGStab(') [38], [41] are among the most
efficient methods for solving large systems of linear equations
sparse system matrix A 2 C
N \ThetaN . Compared to the biconjugate gradient
method they have the advantage of converging roughly twice as fast and of
not requiring a routine for applying the adjoint system matrix A H to a vector. Nev-
ertheless, they inherit from BiCG the short recurrence formulas for generating the
approximations x k and the corresponding residuals r k := b \Gamma Ax k .
As for BiCG, where the convergence can be smoothed by applying the quasi-
minimal residual (QMR) method [16] or the local minimum residual process (MR
smoothing) [37], [47], product methods can be combined with the same techniques
[14], [47] to avoid the likely "erratic" convergence behavior [11], [36]; the benefit of
these smoothing techniques is disputed, however.
A well-known problem of all methods that make implicit use of the Lanczos polynomials
generated by a non-Hermitian matrix is the danger of breakdown. Although
exact breakdowns are very rare in practice, it has been observed that near-breakdowns
can slow down or even prevent convergence [15]. Look-ahead techniques for the Lanczos
process [15], [21], [23], [34], [35], [43] allow us to avoid this problem when we
use variants of the BiCG method with or without the mentioned smoothing tech-
niques. However, the general look-ahead procedures that have so far been proposed
for (Bi)CGS or other LTPMs are either limited to exact breakdowns [8], [9] or are
based on different look-ahead recursions: in fact, the look-ahead steps proposed by
Brezinski and Redivo Zaglia [5] to avoid near-breakdowns in CGS and those in [6],
which are applicable to all LTPMs, are based on the so-called BSMRZ algorithm,
which is itself based on a generalization of coupled recurrences (different from those
of the standard BiOMin and BiODir versions of the BiCG method) for implementing
the Lanczos method. For a theoretical comparison of the two approaches we refer to
[25, x 19]. Recently, a number of further "look-ahead-like" algorithms for Lanczos-
type solvers have been proposed by Ayachour [1], Brezinski et al. [7], Graves-Morris
Applied Mathematics, ETH Zurich, ETH-Zentrum HG, CH-8092 Zurich, Switzerland
(mhg@sam.math.ethz.ch). Formerly at the Swiss Center for Scientific Computing (CSCS/SCSC).
y German Aerospace Research Establishment (DLR), German Remote Sensing Data Center
(DFD), D-82234 Oberpfaffenhofen, Germany (kjr@dfd.dlr.de). Formerly also at CSCS/SCSC.
[19], and Ziegler [48]. Their discussion is beyond the scope of this paper, but it seems
that [1, 7, 48] are restricted to exact breakdowns.
In this paper, we start from the approach in [15] and [23, x 9] and derive an
alternative look-ahead procedure for LTPM algorithms that make use of the Lanczos
three-term recurrences. Compared to the standard coupled two-term recurrences they
have the advantage of being simpler to handle with regard to look-ahead, since they
are only affected by one type of breakdown. In contrast to the first version of this work
[26], we also capitalize upon an enhancement for the look-ahead Lanczos algorithm
pointed out by Hochbruck recently (see [28], [25]), which is here adapted to LTPMs.
Other improvements help to further reduce the overhead and stabilize the process.
Starting with an initial approximation x 0 and a corresponding initial residual
steps a basis of the 2n-dimensional Krylov space
in such a way that the even indexed basis vectors are of the form
where ae n is the nth Lanczos polynomial (see below) and n is another suitably chosen
polynomial of exact degree n. In the algorithms we discuss, these vectors will be either
the residual of the nth approximation xn or a scalar multiple of it. By allowing them
to be multiples of the residuals, that is, by considering so-called unnormalized [20]
or inconsistent [25] Krylov space solvers, we avoid the occurrence of pivot (or ghost)
breakdowns. (For the various types of breakdowns and their connection to the block
structure of the Pad'e table see [20, 24, 30, 31]. In the setting of [5, 6], they have been
addressed particularly in [4].)
More generally, we define a doubly indexed sequence of product vectors w l
n by
The aim is to find in the (n 1)th step of an LTPM an improved approximation
xn+1 by computing a new product vector w n+1
n+1 from previously determined ones in a
stable way. To visualize the progression of an algorithm and the recurrences it uses
we arrange the product vectors w l
n in a w-table 1 . Its n-axis points downwards and
its l-axis to the right. We describe then how an algorithm moves in this table from
the upper left corner downwards to the right.
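Since the displayed definition of the product vectors did not survive the text extraction, the following block records the standard form used in the LTPM literature; the symbol $\tau_l$ for the "left" polynomials is a placeholder assumption (the paper's own symbol was lost in extraction):

\[
  w^l_n \;:=\; \tau_l(A)\,\rho_n(A)\, r_0 \;=\; \tau_l(A)\, y_n , \qquad l, n \ge 0,
\]
so that the diagonal entries $w^n_n$ (or $w^{n+1}_{n+1}$) of the w-table play the role of (scaled) residuals, and one step of an LTPM advances from $w^n_n$ towards $w^{n+1}_{n+1}$ by combining a vertical (Lanczos) recurrence with a horizontal (polynomial) recurrence.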
This paper is organized as follows. In Section 2 we review the look-ahead Lanczos
process. Versions based on the Lanczos three-term recurrences of various LTPMs are
introduced in Section 3. In Section 4 we present for these LTPMs look-ahead procedures
and analyze their computational overhead, and in Section 5 several look-ahead
strategies are discussed. Our preferred way of applying the look-ahead procedures
to obtain the solution of a linear system is presented in Section 6. In Section 7 the
efficiency of the proposed algorithms is demonstrated by numerical examples, and in
Section 8 we draw some conclusions.
2. The Look-Ahead Lanczos Process. The primary aim of the Lanczos process
[32] is the construction of a pair of biorthogonal bases for two nested sequences of
Krylov spaces. Given A 2 C
N \ThetaN and a pair of starting vectors, (ey
Biorthogonalization (BiO) Algorithm generates a pair of finite sequences, fey n g
1 The w-table is different from the scheme introduced by Sleijpen and Fokkema [38] for
BiCGstab(').
fyng
n=0 , of left and right Lanczos vectors, such that
e
and
ae
or, equivalently,
Here, $\langle\cdot,\cdot\rangle$ denotes the inner product in $\mathbb{C}^N$, which we choose to be linear in the second argument, and $\perp$ indicates the corresponding orthogonality.
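The displayed conditions (2.1)-(2.3) are garbled above; as a hedged reconstruction (normalization conventions assumed, not necessarily the paper's exact statement), the standard requirements of the BiO algorithm read:

\[
  \tilde y_n \in \widetilde{\mathcal K}_{n+1} := \mathcal K_{n+1}(A^{H},\tilde y_0), \qquad
  y_n \in \mathcal K_{n+1} := \mathcal K_{n+1}(A, y_0),
\]
\[
  \langle \tilde y_m , y_n \rangle = 0 \quad (m \neq n),
  \qquad\text{or, equivalently,}\qquad
  \tilde y_n \perp \mathcal K_n , \quad y_n \perp \widetilde{\mathcal K}_n .
\]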
The sequence of pairs of Lanczos vectors can be constructed by the three-term
recursions
with coefficients $\alpha_n$ and $\beta_n$ that are determined from the orthogonality condition (2.2) and nonvanishing scale factors $\gamma_n$ that can be chosen arbitrarily. Choosing the $\gamma_n$ so that $\rho_{n+1}(0)=1$ would allow us to consider the right Lanczos vectors $y_n$ as residuals and to update the iterates $x_n$ particularly simply. But this choice also introduces the possibility of a breakdown due to $\alpha_n + \beta_n = 0$. Therefore, we suggest in Section 6 a different way of defining iterates.
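The displayed recursion (2.4) is missing in this extraction; a common way of writing the pair of three-term recurrences with scale factors $\gamma_n$ is the following sketch (sign placement and conjugation conventions are assumptions, not necessarily the paper's exact form):

\[
  \gamma_n\, y_{n+1} = A\,y_n - \alpha_n y_n - \beta_n y_{n-1}, \qquad
  \bar\gamma_n\, \tilde y_{n+1} = A^{H}\tilde y_n - \bar\alpha_n \tilde y_n - \bar\beta_n \tilde y_{n-1},
\]
with $y_{-1} = \tilde y_{-1} = 0$ and $\beta_0 = 0$.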
From (2.4) it follows directly that the Lanczos vectors can be written in the form
e yn=ae n
where ae n denotes the n-th Lanczos polynomial. Since we aim here at LTPMs, we
consider for the Krylov spaces e
Kn more general basis vectors of the form
e
with arbitrary, but suitably chosen polynomials n of exact degree n and e z 0 := e y 0 . In
general, e z n+1 ? Kn+1 will no longer hold, but e
Kn+1 ? yn+1 can still be attained by
enforcing e
z means choosing the coefficients ff n and
in (2.4) in the following way: if n ? 0 we need
but since hez
When Similarly, from the orthogonality condition hez
we obtain
Equations (2.7)-(2.9) and the first recurrence of (2.4) specify what one might call the
one-sided Lanczos process, formulated in [36] to derive LTPMs.
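For concreteness, here is a minimal Python/NumPy sketch of the plain (unnormalized) two-sided Lanczos biorthogonalization with three-term recurrences and no look-ahead, for a real matrix; the scaling, the stopping rule and the breakdown tolerance are deliberately naive choices of this sketch, not the paper's:

    import numpy as np

    def two_sided_lanczos(A, y0, yt0, m, tol=1e-13):
        """Naive BiO process: builds y_0..y_m and yt_0..yt_m with
        <yt_i, y_j> = 0 for i != j.  Raises on a (near-)breakdown."""
        n = A.shape[0]
        Y = np.zeros((n, m + 1))
        Yt = np.zeros((n, m + 1))
        Y[:, 0], Yt[:, 0] = y0, yt0
        delta_prev = 1.0                      # dummy value, unused at step 0
        for k in range(m):
            y, yt = Y[:, k], Yt[:, k]
            delta = yt @ y                    # delta_k = <yt_k, y_k>
            if abs(delta) < tol * np.linalg.norm(y) * np.linalg.norm(yt):
                raise RuntimeError(f"(near-)breakdown at step {k}: delta ~ 0")
            Ay = A @ y
            alpha = (yt @ Ay) / delta         # enforces <yt_k, y_{k+1}> = 0
            beta = delta / delta_prev if k > 0 else 0.0   # enforces <yt_{k-1}, y_{k+1}> = 0
            Y[:, k + 1] = Ay - alpha * y - (beta * Y[:, k - 1] if k > 0 else 0)
            Yt[:, k + 1] = A.T @ yt - alpha * yt - (beta * Yt[:, k - 1] if k > 0 else 0)
            delta_prev = delta
        return Y, Yt

Without normalization the basis vectors may grow or shrink rapidly; in practice one would rescale them, which is exactly the role of the free factors $\gamma_n$ above.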
Clearly, this recursive process terminates with $y_{n+1} = 0$, or it breaks down with $\delta_{n+1} := \langle \tilde z_{n+1}, y_{n+1}\rangle = 0$ although $y_{n+1} \neq 0$. The look-ahead Lanczos process [23, § 9], [15] overcomes such a breakdown if it is curable, i.e., if $\langle \tilde z_{n+1}, A^k y_{n+1}\rangle \neq 0$ for some $k$. However, its role is not restricted to treating such exact breakdowns with $\delta_{n+1} = 0$: it allows us
to continue the biorthogonalization process whenever for stability reasons we choose
to enforce the orthogonality condition (2.2) only partially for a couple of steps. We
will come back later to the conditions that make us start such a look-ahead phase.
In the look-ahead Lanczos process the price we have to pay is that the Gramian
matrix D := (hez m ; yn i) becomes block-lower triangular instead of triangular (as in
the generic one-sided Lanczos algorithm) or diagonal (as in the generic two-sided
Lanczos algorithm). In other words, we replace the conditions e
by
e
and
nonsingular
is the subsequence of indices of (well-condi-
tioned) regular Lanczos vectors yn j
; for the other indices the vectors are called inner
vectors. Likewise, we refer to n as a regular index if while n is
called an inner index otherwise. Note that there is considerable freedom in choosing the subsequence $\{n_j\}_{j=0}^{J}$; for example, we might request that the smallest singular value $\sigma_{\min}(D_j)$ is sufficiently large.
The sequence fyn g is determined by the condition e
hence it does not depend directly on the sequence fez n g but only on the spaces e
that are spanned by the first n j elements of fez n g. On the other hand, the smallest
singular value of each D j depends on the basis chosen, a fact that one should take
into account. Forming the blocks
\Theta
\Theta
e
z
we can express relation (2.10) as
e
ae
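The block definitions and relation (2.12) above are garbled; in the standard look-ahead notation they take the following form (a reconstruction, with the column ranges of the blocks assumed):

\[
  \widetilde Z_j := \bigl[\,\tilde z_{n_j}\ \cdots\ \tilde z_{n_{j+1}-1}\bigr],\qquad
  Y_j := \bigl[\, y_{n_j}\ \cdots\ y_{n_{j+1}-1}\bigr],\qquad
  D_j := \widetilde Z_j^{H} Y_j \ \text{nonsingular},
\]
\[
  \langle \tilde z_m , y_n\rangle = 0 \quad\text{whenever } m < n_j \le n
  \quad\text{(block biorthogonality, cf. (2.10)-(2.12))}.
\]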
The look-ahead step size is in the following denoted by $h_j := n_{j+1} - n_j$.
In the look-ahead case the Lanczos vectors fyng can be generated by the recursion
[23, x 9], [15]:
\Theta
denotes the not yet fully completed j-block if
is an inner index, while b
regular, that is, if . The
coefficient vector fi n is determined by the condition
thus,
e
The coefficient vector ff n can be chosen arbitrarily if n+1 is an inner index. Obviously,
the choice ff yields the cheapest recursion, but we may gain numerical stability
by choosing ff n 6= 0. On the other hand, if results from
the condition
that is
e
Recently, it was pointed out by Hochbruck [28] that the recursion (2.13) can
be simplified since the contribution of the older block Y j \Gamma1 can be represented by
multiples of a single auxiliary vector y 0
. This is due to the fact (noticed in [23])
that the matrix made up of the coefficient vectors fi n j
is of
rank one; see also [25, x 19]. This simplification has also been capitalized upon in
the look-ahead Hankel solver of Freund and Zha [17], which is closely related to the
look-ahead Lanczos algorithm.
In particular, due to (2.10) or (2.12) we have
Therefore, (2.14) simplifies to
e
Introducing the auxiliary vector
we have
e z H
so that the recurrence (2.13) becomes
and (2.15) changes to
e
(2.
3. Lanczos-Type Product Methods (LTPMs). LTPMs are based on two
ideas. The first one is to derive for a certain sequence of product vectors w l+1
recursion formulas that involve only previously computed product vectors, so that
there is no need of explicitly computing the vectors e z l and yn . By multiplying the
three-term recursion (2.4) for the right Lanczos vectors yn with l (A) we obtain
Aw l
This recursion can be applied to move forward in the vertical direction in the w-table.
To obtain a formula for proceeding in the horizontal direction, the recursion for the
chosen polynomials l (i) is capitalized upon in an analogous way.
The second basic idea is to rewrite the inner products that appear in the Lanczos
process in terms of product vectors:
For the coefficients ff n and fi n of (2.8) and (2.9) this results in
LTPMs still have short recurrence formulas if the polynomials l have a short
one. In addition, they have two advantages over BiCG: first, multiplications with
the adjoint system matrix A H are avoided; second, for an appropriate choice of the
polynomials l (i) smaller new residuals r l := w l
(A)y l can be expected because of
a further reduction of y l by the operator l (A). Different choices of the polynomials
l (i) lead to different LTPMs. In the following we briefly review some possible choices
for these polynomials. We start with the general class where they satisfy a three-term
recursion. Since then both the "left" polynomials and the "right" polynomials $\rho_n$ fulfill a three-term recursion, we say that this is the class of (3,3)-type LTPMs.
In the following we briefly review some possible choices for these polynomials.
For a more detailed discussion and a pseudocode for the resulting algorithms we refer
to [36].
3.1. LTPMs based on a three-term recursion for f l g: BiOxCheb and
BiOxMR2. Assume the polynomials l (i) are generated by a three-term recursion
of the form [22]
Note that, by induction, the $l$th polynomial has exact degree $l$ and takes the value 1 at the origin. Hence, the polynomials qualify as residual polynomials of a Krylov space method.
Multiplying yn by l+1 (A) from the left and applying (3.3) yields the horizontal
recurrence
By applying (3.1) and (3.4) the following loop produces a new product vector w n+1
from previously calculated ones, namely w l
1), and the product Aw
also evaluated before. At the end of the loop, besides
n+1 , which will be needed in the next run through the
loop, are available.
Loop 3.1. (General (3; 3)-type LTPM)
1. Compute
n and determine fi n and ff n .
2. Use (3.1) to compute w
n+1 .
3. Compute
n+1 and determine n and jn .
4. Use (3.4) to compute w n+1
n and w n+1
n+1 .
Next we discuss two ways of choosing the polynomials l , that is, of specifying
the recurrence coefficients l and j l .
One possibility is to combine the Lanczos process with the Chebyshev method
[13], [45] by choosing l as a suitably shifted and scaled Chebyshev polynomial. This
combination was suggested in [3, 22, 44]. Let us call the resulting (3; 3)-type LTPM
BiOxCheb. It is well known that these polynomials satisfy a recurrence of the form
(3.3). After acquiring some information about the spectrum of the matrix A, for
example, by performing a few iterations with another Krylov space method, one can
determine the recursion for the scaled and shifted Chebyshev polynomials that correspond
to an ellipse surrounding the estimated spectrum [33]. However, the necessity
to provide spectral information is often seen as a drawback of this method.
Another idea is to use the coefficients l and j l of (3.3) for locally minimizing
the norm of the new residual w l+1
l+1 . This idea is borrowed from the BiCGStab and
methods [44], [22] reviewed below: l and j l are determined by solving
the two-dimensional minimization problem
min
Introducing the N \Theta 2 matrix B l+1 :=
\Theta
(w l
we can write (3.5)
as the least-squares problem
min
Therefore, l and j l can be computed as the solution of the normal equations
l
In view of the two-dimensional local residual norm minimization performed at
every step (except the first one) we call this the BiOxMR2 method 2 . A version based
on coupled two-term Lanczos recurrences of this method was introduced by the first
author in a talk in Oberwolfach (April 1994). Independently it was as well proposed by
Cao [10] and by Zhang [46], whose Technical Report is dated April 1993. Zhang also
considered two-term formulas for l and presented very favorable numerical results.
2 The letter 'x' in the name BiOxMR2 reflects the fact that the residual polynomials of this method
are the products of the Lanczos polynomials generated by the BiO process with polynomials obtained
from a successive two-dimensional minimization of the residual (MR2). Similarly, BiOxCheb means
a combination of the BiO process with a Chebyshev process.
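The displayed least-squares formulation (3.5)-(3.7) is partially garbled above. The following NumPy sketch shows the generic two-column least-squares problem that underlies the BiOxMR2 coefficient choice; the identification of the vectors with the specific product vectors forming the matrix $B_{l+1}$, and the names of the returned coefficients, are left open here (placeholders, not the paper's symbols):

    import numpy as np

    def local_min_2d(v, d1, d2):
        """Coefficients (c1, c2) minimizing || v - (c1*d1 + c2*d2) ||_2.
        Generic 2-column least-squares problem behind the BiOxMR2
        recurrence-coefficient choice; the caller supplies the vectors."""
        B = np.column_stack([d1, d2])          # N x 2 matrix
        c, *_ = np.linalg.lstsq(B, v, rcond=None)
        return c[0], c[1]

Solving the least-squares problem with a QR-based routine such as lstsq avoids explicitly forming the 2x2 normal equations (3.7), which is slightly more robust when the two columns are nearly parallel; forming the normal equations directly is cheaper and is what the text above describes.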
3.2. LTPMs based on a two-term recursion for f l g: BiOStab. In Van
der Vorst's BiCGStab [44] the polynomials l (i) are built up successively as products
of polynomials of degree 1:
Inserting here for i the system matrix A and multiplying by yn from the right yields
The coefficient l is determined by minimizing the norm of w l+1
that is by solving the one-dimensional minimization problem
min 2C
which leads to
hAw l
hAw l
Recurrences (3.1) and (3.8) can be used to compute a new product vector w n+1
from w n
n and w n
as described in Loop 3.1 with
is calculated from (3.10). However, since (3.8) is only a two-term recurrence, there is
no need now to compute w
n+1 in substep 2 of the loop.
We call this algorithm BiOStab since it is a version of BiCGStab that is based on
the three-term recurrences of the Lanczos biorthogonalization process 3 .
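The displayed formulas (3.9)-(3.10) are garbled above. The coefficient (whose symbol did not survive extraction; called `chi` below as a placeholder) is the usual one-dimensional residual-minimizing coefficient familiar from BiCGStab:

    import numpy as np

    def min_residual_coeff(w, Aw):
        """chi minimizing || w - chi * Aw ||_2, cf. (3.9)-(3.10).
        'chi' is a placeholder name for the paper's coefficient."""
        denom = Aw @ Aw
        if denom == 0.0:
            raise ZeroDivisionError("Aw vanishes; the Krylov space cannot be extended")
        return (Aw @ w) / denom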
3.3. BiOStab2. BiOStab2 is the version of BiCGStab2 [22] based on the three-term
Lanczos recurrences. The polynomials l (i) satisfy the recursions
l is even;
l is odd:
Here l may be obtained by solving the one-dimensional minimization problem (3.9),
so l is given by (3.10). However, if j l j is small, this choice is dangerous since the
vector component needed to enlarge the Krylov space becomes negligible [38]. Then
some other value of l should be chosen. Except for roundoff, the choice has no
effect on later steps, because l and j l are determined by solving the two-dimensional
minimization problem (3.5).
Multiplying yn by l+1 (A) and applying (3.11) leads to
ae
l is even
Aw l
l is odd:
3.1 applies with is even, while n and jn are chosen
as indicated above if n is odd. If n is even, there is no need to compute w
n+1 in
substep 2 of the loop.
3 Eijkhout [12] also proposed such a variant of BiCGStab; however, his way of computing the
Lanczos coefficients is much too complicated.
3.4. BiO-Squared (BiOS). BiOS is obtained by "squaring" the three-term
Lanczos process: among the basis vectors generated are those Krylov space vectors
that correspond to the squared Lanczos polynomials. By complementing BiOS with
a recursion for Galerkin iterates we will obtain BiOResS, a (3; 3)-type version of
Sonneveld's (2; 2)-type conjugate gradient squared (CGS ) method [42]. The method
fits into the framework of LTPMs if we identify
l
The vectors e
yn are then exactly the left Lanczos
vectors so that now yn ? e
Kn as well as e z n ? Kn is fulfilled. Thus, the coefficient ff n
in (3.2) simplifies to
and the w-table becomes symmetric since
Consequently, we have in analogy to (3.1)
Aw l
These two recursions lead to the following loop:
Loop 3.2. (BiOS)
1. Compute
n and determine fi n and ff n .
2. Use (3.1) to compute w
n+1 .
3. Compute
n+1 .
4. Use (3.16) to compute w n+1
n+1 .
Note that we exploit in substep 2 the symmetry of the w-table: the product vector
which is needed for the calculation of w n
n+1 by (3.1), is equal to w
n and need
not be stored. We also point out that (3.16) is not of the form (3.4); in particular,
the coefficients of w
n and w l
n need not sum up to 1.
4. Look-Ahead Procedures for LTPMs. Look-ahead steps in an LTPM serve
to stabilize the Lanczos process, the vertical movement in the w-table. Except for
BiOS, the recursion formulas for the horizontal movement remain the same. The
vertical movement is now in general based on the recurrence formula (2.21) for the
Lanczos vectors, but we need to replace these vectors by product vectors w l
We introduce the blocks of product vectors
c
l
\Theta
\Theta
and the auxiliary product vector w 0 l
defined by
Again c
l
is a regular index, while c W l
denotes the not yet
completed jth block if n+ 1 is an inner index. Now, multiplying (2.21) by l (A) from
the left, we obtain
Aw l
As in Section 3, the coefficient vectors fi 0
n and, in the regular case, ff n can be
expressed in terms of the product vectors by rewriting all inner products in such a
way that the part l
z l on the left side is transferred to the right side of the
inner product. For the diagonal blocks D j of the Gramian this yields
\Theta
and fi 0
n of (2.18) becomes
Likewise, ff n of (2.22) turns into
For advancing in the horizontal direction of the w-table we will still use (3.4), (3.8),
the combination (3.12), or (3.16). Substituting Aw
n in (4.4) according to these
formulas and letting
i be the (n n)-element of D (4.4) simplifies to
We remark that by using (4.6) instead of hez
we avoid to compute and
store the complete
block. Now only few elements of this block are needed, see
Figure 4.1 below. In this figure we display those entries $w^l_n$ and products $Aw^l_n$ in the w-table that are needed to compute by (4.2) a new vector $w^l_{n+1}$, marked by '*'.
Fig. 4.1. Entries in the w-table needed for the construction of a new inner or regular vector, marked by '*', by recurrence formula (4.2). Here, 'v'' indicates entries needed for the auxiliary vector $w'^{\,l}_{j-1}$, 'V' those for $\widehat W^l_j$, 'd' those for $D_{j-1}$, and 'D' those additionally required for $D_j$. Moreover, '$\alpha'$' marks auxiliary vectors in the formula (4.5) for $\alpha_n$, and 'A' stands for a matrix-vector product (MV) with A in this formula. One such product also appears explicitly in (4.2).
Note that the horizontal recursions are valid for the auxiliary product vectors
example, (3.4) and (4.1) imply
while (3.8) and (4.1) provide
Thus, the auxiliary vector can be updated in the horizontal direction at the cost of
one matrix-vector product (MV) per step.
Combining the recursions for the vertical movement with those for the horizontal
movement leads to the look-ahead version of an LTPM. (Actually, we will also need
to compute approximations to the solution A \Gamma1 b of the linear system; but we defer
this till section 6). Because of the additional vectors involved in the above recurrence
formulas, it seems that such a look-ahead LTPM requires much more computational
work and storage than its unstable, standard version. However, the number of look-ahead
steps as well as the size of their blocks is small in practice, so that the overhead
is moderate. Moreover, we describe in the following for various choices of polynomials
how MVs that are needed can be computed indirectly by applying the recurrence
formulas. In the same way also the values of inner products can be obtained indirectly
at nearly no cost.
4.1. Look-ahead LTPMs based on a three-term recursion for f l g: LA-
BiOxCheb and LA-BiOxMR2. For methods incorporating a horizontal three-term
recurrence, such as BiOxCheb and BiOxMR2, applying the above principles for
look-ahead LTPMs leads to the general Loop 4.1. Note that the first four substeps are
identical in both cases. We will see in Section 5 that the decision between a regular
and an inner loop is made during substep 5.
In
Figure
4.2 we display the action of this loop in the w-table, but now we use a
different format than before, which, on the one hand specifies what has been known
before the current sweep through the loop and what is being computed in this sweep.
In particular, the following symbols and indicators are used:
ffl 'V' indicates that the corresponding product vector is already known;
ffl 'A' as `denominator' indicates that the product of the vector represented by
'V' with the matrix A was needed;
ffl a solid box around a 'fraction' means that this product by A required (or
requires) an MV;
ffl no box around a 'fraction' means that this product can be obtained by applying
a recurrence formula;
ffl a number as entry specifies in which substep of the current sweep this entry
is calculated;
ffl a prime indicates that the vector is an auxiliary one, as defined in (4.1)
(these vectors are displayed in the last row of the corresponding block of the
indicates that the product of the vector represented by
with the matrix A was needed;
ffl double primes will indicate that the vector is an auxiliary one of the type
defined in (4.11) used in LA-BiCGS (these vectors are displayed in the lower
right corner of the corresponding block of the w-table);
ffl 'S' will indicate that the vector or the MV is obtained for free due to the
symmetry of the w-table of LA-BiCGS.
Loop 4.1. (Look-ahead for (3; 3)-type LTPM.) Let := min fn
Inner Loop: (n
1. Compute
n .
2. If n ? use (4.2) to compute
indirectly Aw n
.
3. If n
n .
4. If n n j +3, use (3.4) to compute
indirectly Aw +1
n .
5. Use (4.2) to compute w n
n+1 .
6. Compute
n+1 and n ; jn .
7. Use (3.4) to compute w n+1
n+1 .
8. Compute
use (4.7) to
compute w 0 n+1
Regular Loop: (n
1. Compute
n .
2. If n ? use (4.2) to compute
indirectly Aw n
.
3. If n
n .
4. If n n j +3, use (3.4) to compute
indirectly Aw +1
n .
5. Use (4.2) to compute w n
n+1 and
n+1 .
6. Compute
n+1 and n ; jn .
7. Use (3.4) to compute w n+1
n+1 .
8. Compute
according
to definition (4.1).
Fig. 4.2. Action of Loop 4.1 (look-ahead for (3,3)-type LTPM) in the w-table. For blocks of different sizes an inner step (at left) and the following regular step (at right) are shown.
In the first two w-tables of Figure 4.2 we display what happens in a first inner
step (at left) and in a regular step (at right) that follows directly a regular step (so
that 1). In the second pair of w-tables of Figure 4.2 we consider an inner and a
regular step in case of a large block size. Note that the substeps 2-4 of the loop do
not appear in the first pair, and that substep 4 would still not be active in the regular
loop at the end of a look-ahead step of length h
active.
For such a look-ahead step of length $h_j$, the cost in terms of MVs is $4h_j - 3$ MVs if $h_j > 1$: in the first substep of the $h_j - 1$ inner and the one regular loop, $h_j$ MVs are consumed, and the remaining $3h_j - 3$ MVs are needed in substeps 3 and 6 of these loops. If no look-ahead is needed, that is, if $h_j = 1$, only 2 MVs are required, both in our procedure and in the standard one that does not allow for look-ahead.
Hence, in a step of length 2, we have 25% overhead, in a step of length 3 there is 50%
overhead, and for even longer steps, which are very rare in practice, the overhead
grows gradually towards 100%.
4.2. Look-ahead for BiOStab and BiOStab2. Since the horizontal recurrence
(3.8) for BiOStab is only a two-term one, there is no need to compute elements
of the second subdiagonal of the w-table as long as we are not in a look-ahead step.
In case of a look-ahead step, this remains true for those of these elements that lie in
a subdiagonal block, but, of course, not for those in a diagonal block. For Loop 4.1
this means that simplifies to := n j and that in substep 5 of a regular loop there
is no need to compute w
. All the other changes refer to equation numbers or the
coefficients n and jn . In summary, we obtain Loop 4.2. Again the first four substeps
are the same in both cases, and the choice between them will be made in substep 5.
In
Figure
4.3 we display for this loop the two sections of w-tables that correspond to
the second pair in Figure 4.2.
Since those product vectors from Loop 4.1 that are no longer needed in Loop 4.2
were found without an extra MV before, the overhead in terms of MVs remains the
same here.
Look-ahead for BiOStab2 could be defined along the same lines, by alternating
between steps of Loop 4.1 and Loop 4.2. At this point it also becomes clear how to
obtain a look-ahead version of BiOStab('), an algorithm analogous to BiCGStab(')
of Sleijpen and Fokkema [38], but based on the three-term Lanczos process instead of
coupled two-term BiCG formulas.
4.3. Look-ahead (bi)conjugate gradient squared: LA-BiOS. For BiOS,
which will be the underlying process for our BiOResS version of BiCGS, the horizontal
recurrence (3.4) has to be substituted by the Lanczos recurrence given in (3.16), which
may need to be replaced by the look-ahead formula that is analogous to (4.2) with
l and n exchanged and with suitably defined auxiliary vectors and blocks of vectors.
But since the w-table is symmetric, we can build it up by vertical recursions only and
reflections at the diagonal.
We only formulate the recursion for w 0 l
as a horizontal one that replaces (4.7):
where now
\Theta
Recurrence (4.9) follows from the horizontal Lanczos look-ahead recurrences for computing
derived from (2.13) instead of (2.21), which can be gathered
Loop 4.2. (Look-ahead for BiOStab.)
Inner Loop: (n
1. Compute
n .
2. If n ? use (4.2) to compute
indirectly Aw n
.
3. If n
n .
4. If n n j +3, use (3.8) to compute
indirectly Aw n j +1
n .
5. Use (4.2) to compute w n
n+1 .
6. Compute
n+1 and n .
7. Use (3.8) to compute w n+1
n+1 .
8. Compute
use (4.8) to
compute w 0 n+1
Regular Loop: (n
1. Compute
n .
2. If n ? use (4.2) to compute
indirectly Aw n
.
3. If n
n .
4. If n n j +3, use (3.8) to compute
indirectly Aw n j +1
n .
5. Use (4.2) to compute w n
n+1 .
6. Compute
n+1 and n .
7. Use (3.8) to compute w n+1
n+1 .
8. Compute
according
to definition (4.1).
Fig. 4.3. Action of Loop 4.2 (look-ahead for BiOStab) in the w-table. An inner step (at left) and the following regular step (at right) are shown.
into one recurrence for these columns of W l+1
post-multiplied by
as in (4.1).
Again, (4.9) can be simplified: if we define for block and the needed values
of j the auxiliary vector
and for each l with n k l ! n k+1 the same coefficient fi 0
l := ffi nk
l nk \Gamma1 as in (4.6),
then in view of (2.17) and (4.6),
Loop 4.3. (Look-ahead for BiOS.)
Inner Loop: (n
1. Compute
n .
2. If n n j +2, use (4.2) to compute
indirectly Aw n
n\Gamma2 .
3. Use (4.2) to compute w n
n+1 .
4. Compute
n+1 and Aw 0 n
5. Use (4.13) to compute w 0 n+1
6. Use (4.2) to compute w n+1
n+1 .
Regular Loop: (n
1. Compute
n .
2. If n n j +2, use (4.2) to compute
indirectly Aw n
n\Gamma2 .
3. Use (4.2) to compute w n
n+1 .
4. Compute
n+1 and Aw 0 n
5. Use (4.13) to compute w 0 n+1
6. Use definition (4.1) to compute
j and use definition
(4.11) to compute w 00 j
.
7. Use (4.2) to compute w n+1
n+1 .
8. Use definition (4.1) to compute
.
Fig. 4.4. Action of Loop 4.3 (look-ahead for BiOS) in the w-table. An inner loop (at the top) and the following regular loop (at the bottom) are shown.
Consequently, (4.9) simplifies to
l
A step of the resulting LA-BiOS algorithm is summarized as Loop 4.3.
Here, the decision between a regular and an inner loop is made in substep 3 (which
corresponds to substep 5 in Loops 4.1 and 4.2). Now even the first five substeps of
the two versions of the loop are identical. In Figure 4.4 we display for LA-BiOS the
same pair of loops as we did in Figure 4.3 LA-BiOStab.
For a look-ahead step of length h j , the cost in terms of MVs is now 3h j MVs if
while it is only 3h As the standard, non-look-
ahead algorithm requires 2 MVs per step, this means that the overhead is at most
50%. Here, in the first substep of the h inner and the one regular loop h j MVs
are consumed, and another 2h j MVs are needed in substep 4.
4.4. Overhead for Look-Ahead. In this subsection we further discuss the
overhead of the look-ahead process in terms of MVs, inner products (IPs) and the
necessary storage of N-vectors.
First we note that all IPs required in the look-ahead algorithms have the fixed
initial vector e z 0 as their first argument. Therefore, if the second argument of an IP
was computed by a recurrence formula, then also the IP of this vector with e z 0 can be
computed indirectly by applying the same recurrence formula. Thus, only IPs of the
n i, for which Aw l
n is computed directly, need to be computed explicitly.
This means that in all algorithms the number of required IPs is equal to the number
of required MVs. We must admit, however, that such recursively computed inner
products, as well as the recursively computed matrix-vector products, may be the
source of additional roundoff, which may cause instability.
Actually, for understanding how to compute all the inner products needed, the
reader may want to introduce a ffi-table and a oe-table with entries ffi l
and oe l
Our loops and figures about generating the w-
table then hold as well for these two tables. Note that the ffi-table just contains the
transposed of the matrix D.
Table 4.1: Cost and overhead of look-ahead LTPMs when constructing a look-ahead step of length $h_j > 1$.
The construction of iterates is not yet included. By further capitalizing upon storage locations that
become available during a look-ahead step, the storage overhead could be reduced by roughly 50 %.
method | total cost (MVs, IPs) | cost overhead (MVs, IPs) | relative overhead | storage overhead (N-vectors)
In terms of MVs, the cost of a look-ahead step of length h j ? 1 has been specified
in the previous subsection. By subtracting the cost for h j non-look-ahead steps, that
is 2h j MVs, we obtain the overhead summarized in Table 4.1. We stress that when
our algorithm has no overhead except for the necessary test of regularity,
which, if it fails, would reveal an upcoming instability and initiate a look-ahead step.
The table lists additionally the overhead in storage of N-vectors in a straightforward
implementation that is not optimized with respect to memory usage.
For comparison, we cite from page 60 of [5] or page 180 of [6] that the CGS look-ahead
procedure of Brezinski and Redivo Zaglia requires 6h
(the typical case) and 5h j +n (which means that a relatively
large look-ahead step is needed in one of the first few iterations). Therefore, compared
to our numbers, the overhead in terms of MVs is about four times (if h j is large, but
five times (if h larger than in our LA-BiOS (assuming as the basis
a standard CGS implementation requiring 2MVs per ordinary iteration). However,
we also note that, according to the above numbers, for a step without look-ahead
1), the methods of [5] and [6] need 3MVs instead of 2MVs.
5. Look-Ahead Strategies. In this section we address the delicate issue of
when to perform a look-ahead step, which means in an LTPM to decide whether the
new vertical index n+1 is a regular index or an inner index, i.e., whether the required
product vectors w l
n+1 in the (n 1)th row of the w-table should be computed as
regular or as inner vectors. Therefore, the look-ahead procedure in LTPMs serves
to stabilize the underlying Lanczos method, the vertical movement of the product
method in the w-table. Consequently, the criterion, when to carry out a look-ahead
step in an LTPM can be based on the criterion given in [15] for the Lanczos algorithm.
However, since the Lanczos vectors e yn and yn are not computed explicitly in a product
method, we need to rewrite the conditions of this criterion in terms of the product
vectors w l
. Let us first motivate these conditions.
In the case of an exact breakdown, where
e
z n 6= 0; yn 6= 0, a division by zero would occur in the next Lanczos step. The first
task of the look-ahead process is to circumvent these exact breakdowns without the
necessity of restarting the Lanczos process and loosing its superlinear convergence.
In finite precision arithmetic, exact breakdowns are very unlikely. However, near-breakdowns, where $|\delta_n|$ is very small, may occur and cause large relative roundoff errors in the Lanczos coefficients $\alpha_n$ and $\beta_n$ given by (3.2). To be more precise, we recall that the relative roundoff error in the computation of the inner product $\delta_n$ is bounded by [18, p. 64] a quantity that is proportional to the roundoff unit $\varepsilon$ and inversely proportional to $|\delta_n|$. Thus, a small value of $|\delta_n|$ leads in finite precision arithmetic to a big relative roundoff error in the computation of the inner product $\delta_n$, which also causes a perturbation of the Lanczos coefficients $\alpha_n$ and $\beta_n$, since they depend on $\delta_n$ and $\delta_{n-1}$, respectively. The second task of the look-ahead process is therefore to avoid a convergence deterioration due to perturbed Lanczos coefficients.
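The displayed bound from [18, p. 64] did not survive extraction; the standard result it refers to has the following form (the constant and the exact formulation are assumptions of this reconstruction):

\[
  \frac{\bigl| \mathrm{fl}\bigl(\langle \tilde z, y\rangle\bigr) - \langle \tilde z, y\rangle \bigr|}
       {\bigl|\langle \tilde z, y\rangle\bigr|}
  \;\le\; \frac{c\,N\,\varepsilon \,\langle |\tilde z|, |y|\rangle}{\bigl|\langle \tilde z, y\rangle\bigr|},
  \qquad c \text{ a small constant,}
\]
so the relative error can be large precisely when $|\langle\tilde z, y\rangle| = |\delta_n|$ is small compared with $\langle|\tilde z|,|y|\rangle$.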
Of course, similar roundoff effects may come up in the numerators of the formulas
(3.2) for ff n and fi n , but large relative errors in those will only be harmful if the
denominators are small too.
We would like to point out that in an LTPM the inner products ffi
n can be enlarged
to a certain extent by an appropriate adaptive choice of the polynomials n [39, 40],
as long as h
example, considering the BiOStab case, where
we obtain, since hez
Thus, minimizing the relative roundoff error in the calculation of ffi
n is equivalent
to choosing n\Gamma1 such that it minimizes
to OR
n i, which corresponds to the use of orthogonal
residual polynomials of degree 1 instead of minimal residual polynomials of degree 1
in the recursive definition of l (i). Therefore, minimizing the relative roundoff error in
the inner product ffi
conflicts often with the objective of avoiding large intermediate
residuals in order to prevent the recursive residual to drift apart from the true residual
[40]. Performing a look-ahead step is then the only possible remedy.
We need now to find a criterion for deciding when a look-ahead step should be
performed so that both the above objectives can be attained. In view of the recursion
(4.2) for the product vectors in the case of look-ahead, it follows that a block can
be closed and a new product vector can be computed as regular vector only if the
diagonal blocks D j \Gamma1 and D j of the Gramian are numerically nonsingular. Thus, the
first condition that needs to be fulfilled in order to compute a new product vector
n+1 as regular vector (n
oe min (D j ) ";
where oe min (D j ) denotes the minimal singular value of the block D j . Note that this
does not mean that D j is well conditioned; it just guarantees numerical nonsingu-
larity. In practice, (5.1) can be replaced by some other condition that implies this
nonsingularity, for example by one implemented in a linear solver used for computing
in (2.19) and ff n in (2.22).
Freund et al. [15] use a second condition to guarantee that the Krylov space is
stably extended in the next Lanczos step in the sense that the basis of Lanczos vectors
is sufficiently well conditioned. In terms of product vectors, this second condition
amounts to computing, for any l, the new product vector w l
n+1 as regular vector if in
addition to (5.1) the following conditions for the coefficients ff n and fi n are fulfilled:
Here k\Deltak 1 denotes the ' 1 -norm and n(A) is an estimate for kAk which is updated
dynamically to ensure that the blocks W n
j do not become larger than a user specified
maximal size [15]. The motivation for (5.2) was to ensure that in the new regular
vector
(obtained from (4.2) with c
since n+ is regular) the component in
the new direction Aw n
n is sufficiently large, which will be the case if
n and tol 2 is a chosen tolerance.
Compared to (5.2), condition (5.3) costs additional two inner products and the
calculation of w t , which only in the regular case can be reused for the computation
of the new product vector w n
n+1 . Since we have
we could replace (5.3) by the less expensive condition
However, in an LTPM it is not possible to normalize all product vectors w l
(see
Section 6); so C is not equal to 1. Moreover, (5.5) is less strict than (5.3) and (5.2).
Since a look-ahead step is more expensive than regular steps providing the same
increase of the Krylov space dimension, a tight look-ahead criterion can save overall
computational cost. Therefore, it is reasonable to spend extra effort for it. For this
reason we favor criterion (5.3).
A drawback of (5.3) is that it does not take the angle between Aw n
n and w t into
account. If C c
should be
chosen larger than in the case where it is nearly 1. This motivates the choice
with suitably chosen constants C depending on the roundoff
unit ". This criterion requires an extra inner product and an appropriate choice for
. For many small problems C worked well, but
for larger problems we observed that C c decays dramatically with the block size.
Therefore, the probability that (5.3) with tol 2 as defined in (5.6) will be fulfilled
decreases with the block length and leads very often (especially in BiOS) to situations
where the maximal user specified block size was reached. Further investigations are
needed to see if this problem can be solved by a better choice of C 1 and C 2 or by
a more appropriate selection for ff n in the inner case, instead of using, as in [15],
2.
6. Obtaining the solution of b. So far we have only introduced various
algorithms for constructing a sequence of product vectors w l
n that provide a basis for
Km . However, our goal is to solve the linear system b. We now describe how
to accomplish this with these algorithms. There are several basic approaches to constructing
approximate solutions of linear systems from a Krylov space basis. Ours is
related to the Galerkin method, but avoids the difficulty that arises when the Galerkin
solution does not exist. (This difficulty causes, for example, the so-called pivot break-down
of the biconjugate gradient method.) Our approach is a natural generalization
of the one that lead to "unnormalized BiORes" introduced in [20] and renamed "in-
consistent BiORes" in [25]. In contrast to the BiOMin, BiODir, and BiORes versions
of the BiCG method, the inconsistent BiORes variant is not endangered by pivot
breakdowns. An alternative would be to construct approximate solutions based on
the quasi-minimal residual (QMR) approach [16]. For a combination of this approach
with LTPMs we refer the reader to [36].
Let the doubly indexed sequence of scalars
ae l
n be given by
ae l
We define a doubly indexed sequence of product iterates x l
n as follows. Starting with
an arbitrary x 0
N , we choose the initial product vector w 0
0 so that
ae 0
Here,
ae 0
1. For product iterates are now
implicitly defined by
ae l
ae l n
ae l n
if
ae l
Of course, x l
n will be constructed only when w l
is. If
ae l
it follows from (6.3) that
x l
ae l
n can be considered as an approximate solution of corresponding
residual is w l
ae l
n . In order to derive recursions for the scalars
ae l
n and the product
iterates x l
n , we introduce the blocks
\Theta
x l
\Theta
x l
\Theta
ae l
ae l
\Theta
ae l
ae l
as well as the auxiliary product iterates x 0l
auxiliary scalars
ae 0l
defined by
x 0l
ae 0l
Again, b
Then, by (6.3), (4.1), and (6.4),
ae 0l
Next, using (4.2) we conclude that
ae l
Aw l
l
Aw l
ae 0l
ae 0l
This shows that
x l
ae l
ae 0l
If we arrange the product iterates x l
n and the scalars
ae l
n in two tables analogous to
the w-table (with the n-axis pointing downwards and the l-axis to the right), these
two recursions can be used to proceed in vertical direction.
To obtain recursions for a horizontal movement, we assume first that the polynomials
are given by the normalized three-term recurrence (3.3). This covers all
algorithms described in this paper except look-ahead BiOS. Using (3.4) and (6.3) we
see that the product iterates satisfy
For the scalars
ae l
n the recursion
ae l+1
ae l
ae
are valid, but since the
polynomials l are normalized (that is, l all l), the scalars
ae l
n do not change
with the index l, and we have simply
ae l
ae 0
In look-ahead BiOS only one horizontal movement is explicitly computed per
step, namely in substep 5 of Loop 4.3 based on the recurrence (4.13). If we define in
analogy to (4.10) and (4.11)
\Theta
x 0nk
\Theta
x 0n
\Theta
ae 0n k
ae 0l
\Theta
ae 0n
ae
and
x
ae
recurrences for the auxiliary iterates and the corresponding scalars are given by
l
ae 0l+1
ae
l
Using for the scaling parameter fl n in (4.2) the special choice
n\Gamman
with 1m := [1
also the Lanczos polynomials ae n could be normalized (ae n
1), so that
ae l
However, as we mentioned before, some fl n might
turn out to be zero, which would lead to a so-called pivot breakdown. Moreover, to
avoid overflow or underflow, in the Lanczos process the scaling parameter fl n is often
used to normalize the Lanczos vectors yn . But since the Lanczos vectors yn are not
explicitly computed in an LTPM, we cannot base the choice of fl n here on their norm.
However, independent of the size of the blocks generated by the look-ahead process,
it is always necessary to compute the product vectors w n
n+1 in an LTPM. Therefore,
we chose here fl n to normalize w n
n+1 , that is,
7. Numerical Examples. In this section we demonstrate the practical performance
of our look-ahead versions of LTPMs in numerical examples. The tests are
restricted to BiOStab, BiOxMR2 and BiOS. The look-ahead versions of these LTPMs
are denoted by LABiOStab, LABiOxMR2, and LABiOS, respectively. For all tests
the initial iterate $x_0 = 0$ is used, and the iteration is terminated when the norm of the recursive residual is less than $\sqrt{\varepsilon}$, the square root of the roundoff unit. The test
programs were written in FORTRAN90/95 and run on workstations with 64-bit IEEE
arithmetic. We start with small, artificially constructed model problems and move
gradually to large real-world problems.
Example 1. The following small test example was proposed by Joubert [29] and
also used by Brezinski and Redivo-Zaglia [6]:
. The Lanczos process, and hence, BiOStab, BiOxMR2,
and BiOS without look-ahead break down at step 2. On the contrary, all look-ahead
versions avoid this breakdown and converge after 4 iterations as shown in our plots
of the true residual norms kb \Gamma Ax l
ae l n
k in
Figure
7.1.
iteration number
true residual LABiOSstab1e-101e+10
iteration number
true residual LABiOxMR21e-101e+10
iteration number
true residual LABiOS
Fig. 7.1. The true residual norm history (i.e. log(kb \Gamma Ax l
ae l n
vs. n) for the linear system
defined in (7.1) solved by different LTPMs with look-ahead.
Example 2. Our second example is taken from [6], Example 5.2: the matrix
of order 400 and the right-hand side imply that the solution is
the Lanczos process, and
thus the LTPMs break down in the first iteration. Our look-ahead versions perform
two inner steps at the first and second iteration. All following iterations are regular
steps. This is a typical behavior: there are only few and short look-ahead steps, and
therefore the resulting mean overhead per iteration is nearly negligible.
Fig. 7.2. The true residual norm history (i.e., $\log\|b - A\,x^l_n/\rho^l_n\|$ vs. $n$) for the linear system
defined in (7.2) solved by different LTPMs with look-ahead.
Table 7.1: Indices of regular steps in LTPMs for three problems with a p-cyclic system matrix.
Example LABiOStab LABiOxMR2 LABiOS
In the next set of examples we consider p-cyclic matrices of the form
Hochbruck [27] showed that the computational work for solving a linear system with a p-cyclic system matrix by QMR with look-ahead can be reduced by approximately a factor $1/p$ (compared to a straightforward implementation using sparse matrix-vector multiplications with A), if the initial Lanczos vectors have only one nonzero
block conforming to the block structure of A, if the inner vectors are chosen so that
the nonzero structure of is not destroyed, and if the blocks B k are used
for generating only possibly nonzero components of the Krylov space basis. Then it
can be proven that in each cycle of p steps there are at least consecutive exact
breakdowns for p ? 2. But when using directly the system matrix A to generate the
Krylov subspace, we have only in the first cycle of p steps consecutive exact
breakdowns, while in the following cycles these will, in general, no longer persist,
but must be expected to become near-breakdowns. Therefore, such problems provide
good test examples for look-ahead algorithms.
Example 3. In this example we consider a 5-cyclic matrix with
right-hand side
and as initial left Lanczos vector we choose
entries. The convergence history for the different LTPMs
applied to this problem are shown in Figure 7.3, and the indices of the regular steps
are listed in Table 7.1. Those are found exactly where predicted.
Example 4. We move now to a bigger 4-cyclic system matrix in which $B$ is a $100 \times 100$ matrix with random entries. The results for this
problem are shown in Figure 7.4, and the indices of the regular steps are depicted
also in Table 7.1. Again, they occur where predicted.
Example 5. Finally, we consider an 8-cyclic system matrix with B defined as
in Example 4. The convergence history plotted in Figure 7.5 shows oscillations in
the residual norm history of LABiOS, but overall LABiOS needs one iteration step
less than LABiOStab and LABiOxMR2 to fulfill the convergence condition. For
all methods the same look-ahead criterion ((5.3) with tol 2 defined as in (5.6) and
used. Especially for LABiOS the correct choice of the
look-ahead criterion seems to be crucial. While with the above values of C 1 and C 2
the breakdowns occurred only where expected, we discovered for this larger problem
Fig. 7.3. The true residual norm history (i.e., $\log\|b - A\,x^l_n/\rho^l_n\|$ vs. $n$) for the linear system
with the 5-cyclic system matrix defined in Example 3 solved by different LTPMs with look-ahead.
Fig. 7.4. The true residual norm history (i.e., $\log\|b - A\,x^l_n/\rho^l_n\|$ vs. $n$) for the linear system
with the 4-cyclic system matrix defined in Example 4 solved by different LTPMs with look-ahead.
that in BiOS the maximal user-specified block length of 10 was reached very often, which indicates that the constructed inner vectors become more and more linearly dependent. Therefore, further investigations are needed to figure out a better choice for the coefficient vector $\alpha_n$ in the inner case. For example, following the proposal in [21], Hochbruck used in [27] a Chebyshev iteration for the generation of the inner vectors instead of the choice of $\alpha_n$ which we adapted from [15].
Example 6. In our last example we take a real world problem from the Harwell-
Fig. 7.5. The true residual norm history (i.e., $\log\|b - A\,x^l_n/\rho^l_n\|$ vs. $n$) for the linear system
with the 8-cyclic system matrix defined in Example 5 solved by different LTPMs with look-ahead.
Boeing Sparse Matrix Collection, namely SHERMAN1, a matrix of order 1000 with
3750 nonzero entries. The right-hand side b and the initial left Lanczos vector e
z 0
were generated as different unit vectors with random entries. Without look-ahead,
all LTPMs introduced here break down (BiOStab at step 186, BiOStab2 at step 176,
BiOxMR2 at step 145 and BiOS at step 352). On the contrary, the look-ahead
versions, in combination with the look-ahead criterion (5.3) with $tol_2$ defined as in (5.6), converge as shown in Figure 7.6. Due to the real
spectrum of the system matrix A, there is only a slight difference in the convergence
of LABiOStab and LABiOxMR2. It was reported to us that BiCGStab(') with
and no look-ahead can handle this problem in about the same number of MVs as our
BiCGStab with look-ahead.
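For readers who want to reproduce the setting of this example, the SHERMAN1 matrix can be read from its Harwell-Boeing file with SciPy. The sketch below uses SciPy's built-in BiCGStab merely as a stand-in iteration (the look-ahead LTPMs of this paper are not part of SciPy), and the file name is an assumption:

    import numpy as np
    import scipy.io
    import scipy.sparse.linalg as spla

    A = scipy.io.hb_read("sherman1.rua")     # Harwell-Boeing file, order 1000
    n = A.shape[0]
    rng = np.random.default_rng(0)
    b = rng.standard_normal(n)               # random right-hand side (assumption)

    it = 0
    def count(_xk):                          # callback just counts iterations
        global it
        it += 1

    x, info = spla.bicgstab(A, b, callback=count)
    print("info =", info, " iterations =", it,
          " relative residual =", np.linalg.norm(b - A @ x) / np.linalg.norm(b))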
8. Conclusions. We have proposed look-ahead versions for various Lanczos-
type product methods that make use of the Lanczos three-term recurrences. Since
they are based on the Lanczos look-ahead version of Gutknecht [23] and Freund et al.
[15], they can handle look-ahead steps of any length and avoid steps that are longer
than needed. The algorithms proposed in this work should be easy to understand due
to the introduction of an array of product vectors, symbolically displayed in the w-
table, and the visualization of the progress in this w-table. Furthermore, the w-table
proved to be a useful tool to derive optimal variants, for which the computational
work in terms of MVs is minimized.
A variety of numerical examples demonstrate the practical performance of the
proposed algorithms. However, larger problems indicate that further work should
be directed to finding an improved look-ahead criterion that more reliably avoids
critical perturbations of the Lanczos coefficients by roundoff errors. Moreover, one
should investigate if there is a better way of constructing inner vectors than the choice
adapted from [15]. Alternatives would be to orthogonalize them within each block [2]
or to construct them by Chebyshev iteration [21]; but it is not clear if the additional
cost involved pays off.
Fig. 7.6. The true residual norm history (i.e., $\log\|b - A\,x^l_n/\rho^l_n\|$ vs. $n$) for a real-world
problem with the SHERMAN1 matrix from the Harwell-Boeing collection solved by different LTPMs
with look-ahead.
The look-ahead process in an LTPM stabilizes primarily the vertical movement
in the w-table, except in the BiOS algorithm where the w-table is symmetric. For
the horizontal movement it is also important to generate the Krylov space stably,
and both BiOStab2 and BiOxMR2 (in particular when suitably modified) do that
more reliably than BiCGStab, since the two-dimensional steps offer more flexibility.
This has also a lasting positive effect on the roundoff in the vertical movement. To
stabilize the horizontal movement further, a local minimal residual polynomial of
degree ' 1 with an adaptive choice of ', as in BiCGStab(') could be used. A
further possibility is to adapt ' to the size h j of the current Lanczos block, which
would mean to perform in each regular step an h j -dimensional local minimization of
the residual. An alternative is to trade in the local residual minimization for a more
stable Krylov space generation whenever the former causes a problem. Yet another
possibility, indicated in Section 4.2, is to combine the Lanczos process with a hybrid
Chebyshev iteration.
It is known that in finite-precision arithmetic BiORes is usually more affected
by roundoff than the standard BiOMin version of BiCG, at least with regard to the
gap between recursively and explicitly computed residuals. Therefore, we are in the
process to extend this work to look-ahead procedures for LTPMs that are based on
coupled two-term recurrences.
--R
Avoiding the look-ahead in the Lanczos method
Nonsymmetric Lanczos and finding orthogonal polynomials associated with indefinite weights
CGM: a whole class of Lanczos-type solvers for linear systems
Breakdowns in the computation of orthogonal poly- nomials
New look-ahead Lanczos-type algorithms for linear systems
Avoiding breakdown in the CGS algorithm
Avoiding breakdown in variants of the BI-CGSTAB algorithm
A quasi-minimal residual variant of the Bi-CGSTAB algorithm for nonsymmetric systems
Working Note 78: Computational variants of the CGS and BiCGstab methods
Numerical determination of fundamental modes
A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices
QMR: a quasi-minimal residual method for non-Hermitian linear systems
Matrix Computations
"look-around Lanczos"
The unsymmetric Lanczos algorithms and their relations to Padé approximation
Generalized conjugate gradient and Lanczos methods for the solution of non-symmetric systems of linear equations
An iteration method for the solution of the eigenvalue problem of linear differential and integral operators
The Tchebyshev iteration for nonsymmetric linear systems
Reduction to tridiagonal form and minimal realizations
Scientific Computing on Vector Computers
BiCGstab(l) for linear equations involving unsymmetric matrices with complex spectrum
Maintaining convergence properties of BiCGstab methods in finite precision arithmetic
BiCGstab(l) and other hybrid Bi-CG methods
Analysis of the Look Ahead Lanczos Algorithm
Accelerating the Jacobi method for solving simultaneous equations by Chebyshev extrapolation when the eigenvalues of the iteration matrix are complex
Residual smoothing techniques for iterative methods
Generalized biorthogonal bases and tridiagonalisation of matrices
--TR | lanczos-type product methods;look-ahead;sparse linear systems;non-Hermitian matrices;iterative methods |
354387 | A Block Algorithm for Matrix 1-Norm Estimation, with an Application to 1-Norm Pseudospectra. | The matrix 1-norm estimation algorithm used in LAPACK and various other software libraries and packages has proved to be a valuable tool. However, it has the limitations that it offers the user no control over the accuracy and reliability of the estimate and that it is based on level 2 BLAS operations. A block generalization of the 1-norm power method underlying the estimator is derived here and developed into a practical algorithm applicable to both real and complex matrices. The algorithm works with n t matrices, where t is a parameter. For t=1 the original algorithm is recovered, but with two improvements (one for real matrices and one for complex matrices). The accuracy and reliability of the estimates generally increase with t and the computational kernels are level 3 BLAS operations for t > 1. The last t-1 columns of the starting matrix are randomly chosen, giving the algorithm a statistical flavor. As a by-product of our investigations we identify a matrix for which the 1-norm power method takes the maximum number of iterations. As an application of the new estimator we show how it can be used to efficiently approximate 1-norm pseudospectra. | Introduction
. Research in matrix condition number estimation began in the
1970s with the problem of cheaply estimating the condition number κ(A) = ‖A‖ ‖A^{-1}‖
and an approximate null vector of a square matrix A, given some factorization of
it. The earliest algorithm is one of Gragg and Stewart [10]. It was improved by
Cline, Moler, Stewart and Wilkinson [4], leading to the 1-norm condition estimation
algorithm used in LINPACK [8] and later included in Matlab (function rcond).
During the 1980s, attention was drawn to various componentwise condition numbers
and it was recognized that most condition estimation problems can be reduced
to the estimation of kAk when matrix-vector products Ax and A T x can be cheaply
computed [2], [16, Sec. 14.1]. Hager [12] derived an algorithm for the 1-norm that is
a special case of the more general p-norm power method proposed by Boyd [3] and
later investigated by Tao [18]. Hager's algorithm was modified by Higham [14] and
incorporated in LAPACK (routine xLACON) [1] and Matlab (function condest).
The LINPACK and LAPACK estimators both produce estimates that in practice
are almost always within a factor 10 and 3, respectively, of the quantities they are
estimating [13], [14], [15]. This has been entirely adequate for applications where only
an order of magnitude estimate is required, such as the evaluation of error bounds.
However, in some applications an estimate with one or more correct digits is required
(see, for example, the pseudospectra application described in section 4), and for these
the LINPACK and LAPACK estimators have the drawback that they offer the user
no way to control or improve the accuracy of the estimate. Here, "accuracy" refers
to average case behaviour. Also of interest for a norm estimator is its worst case
behaviour, that is, its "reliability".
This work was supported by Engineering and Physical Sciences Research Council grants
GR/L76532 and GR/L94314.
y Department of Mathematics, University of Manchester, Manchester, M13 9PL, England
(higham@ma.man.ac.uk, http://www.ma.man.ac.uk/~higham/).
z Department of Mathematics, University of Manchester, Manchester, M13 9PL, England
(ftisseur@ma.man.ac.uk, http://www.ma.man.ac.uk/~ftisseur/).
Table 1.1
Empirical probabilities that min{ φ̃_i : i = 1, …, s } ≥ α‖A‖_1 for one A of the form
inv(randn(100)) and N(0,1) vectors x_j.
Since estimating ‖A‖_1 cheaply appears inevitably to admit the possibility of arbitrarily
poor estimates (although proving so is an open problem [6]), one might look
for an approach for which probabilistic statements can be made about the accuracy of
the estimate. The definition ‖A‖ = max_{x ≠ 0} ‖Ax‖/‖x‖ of a subordinate matrix norm
suggests the estimate
    φ_s = max{ ‖Ax_j‖/‖x_j‖ : j = 1, …, s },
where s is a parameter and the x_j are independently chosen random vectors. In
the case of the 2-norm and an appropriate distribution of the x_j, explicit bounds
are available on the probability of such estimates being within a given factor of
‖A‖ [7]. As is done in [11] for certain estimates of the Frobenius norm, we can
scale our estimates, φ̃_s = θ_s φ_s, where the constant θ_s is chosen so that the
expected value of φ̃_s is ‖A‖ (note that φ̃_s can therefore be greater or less than ‖A‖).
For the 1-norm, which is our interest here, we investigate this approach empirically.
For a fixed matrix A of the form, in Matlab notation, inv(randn(100)), Table 1.1
shows the observed probabilities that min{ φ̃_i : i = 1, …, s } ≥ α‖A‖_1 for various α
and s, based on 1000 separate evaluations of the φ_s with vectors x_j from the
normal N(0,1) distribution, and where θ_s is determined empirically so that the mean
of the φ̃_s is ‖A‖_1. The table shows that even for the largest value of s considered,
only 35% of the estimates were within a factor 0.9 of the true norm. The statistical
sampling technique is
clearly too crude to be useful for obtaining estimates with correct digits. One way to
exploit the information contained in the vectors Ax j is to regard them as first iterates
from the 1-norm power method with starting vectors x j and to continue to iterate.
These considerations motivate the block generalization of the 1-norm power method
that we present in this paper. Our block power method works with a matrix with t
columns instead of a vector. To give a feel for how our new estimator compares with
the sampling technique, we applied the estimator (Algorithm 2.4) to 1000 random
matrices of the form inv(randn(100)). The results are shown in Table 1.2; the
estimates for each value of t shown there are obtained at approximately the same
cost as the corresponding estimates φ̃_s. The superiority of the new estimator is clear.
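For comparison, the crude sampling estimate φ_s discussed above is trivial to compute; the sketch below omits the empirical scaling by θ_s, which would have to be calibrated separately as described in the text (the function name is ours):

    import numpy as np

    def sampling_estimate(A, s):
        """Unscaled phi_s = max_j ||A x_j||_1 / ||x_j||_1 over s random N(0,1) vectors."""
        X = np.random.randn(A.shape[1], s)
        return np.max(np.abs(A @ X).sum(axis=0) / np.abs(X).sum(axis=0))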
In section 2 we derive the block 1-norm power method and develop it into a
practical algorithm for both real and complex matrices. In section 3 we present
numerical experiments that give insight into the behaviour of the algorithm. An
application involving complex matrices is given in section 4, where we describe how
the algorithm can be used to approximate 1-norm pseudospectra. Conclusions are
presented in section 5.
Table 1.2
Empirical probabilities that est ≥ α‖A‖_1 for est from Algorithm 2.4 and A of the form
inv(randn(100)).
Finally, we note that although our work is specific to the 1-norm, the ∞-norm
can be estimated by applying our algorithm to A^T, since ‖A‖_∞ = ‖A^T‖_1.
2. Block 1-Norm Power Method. The 1-norm power method is a special case
of Boyd's p-norm power method [3] and was derived independently by Hager [12]. For
a real matrix A we denote by sign(A) the matrix with (i, j) element equal to 1 if
a_{ij} ≥ 0 and −1 if a_{ij} < 0. The jth column of the identity matrix is denoted by e_j.
Algorithm 2.1 (1-norm power method). Given A ∈ R^{n×n} this algorithm computes
γ and x such that γ ≤ ‖A‖_1 and ‖Ax‖_1 = γ‖x‖_1.
x = ones(n, 1)/n
repeat
    y = Ax
    ξ = sign(y)
    z = A^T ξ
    if ‖z‖_∞ ≤ z^T x
        γ = ‖y‖_1
        quit
    end
    x = e_j, where |z_j| = ‖z‖_∞ (smallest such j)
end
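A minimal NumPy transliteration of Algorithm 2.1 may make the steps concrete; it is only a sketch of the method as stated above, and the function name and the iteration cap maxit are our own additions:

    import numpy as np

    def onenorm_power(A, maxit=100):
        """1-norm power method: returns gamma <= ||A||_1 and the final vector x."""
        n = A.shape[0]
        x = np.ones(n) / n
        for _ in range(maxit):
            y = A @ x
            xi = np.sign(y)
            xi[xi == 0] = 1.0                    # treat sign(0) as +1
            z = A.T @ xi
            if np.max(np.abs(z)) <= z @ x:       # convergence test ||z||_inf <= z^T x
                return np.sum(np.abs(y)), x
            j = int(np.argmax(np.abs(z)))        # smallest j with |z_j| = ||z||_inf
            x = np.zeros(n)
            x[j] = 1.0
        return np.sum(np.abs(A @ x)), x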
Algorithm 2.1 was modified by Higham [14, Alg. 4.1] (see also [15], [16, Alg. 14.4])
to improve its reliability and efficiency. The modifications that improve the reliability
are, first, to force at least two iterations and, second, to take as the final estimate the
maximum of that produced by the algorithm and the extra estimate
    2‖Ab‖_1/(3n),   where   b_i = (−1)^{i+1} (1 + (i − 1)/(n − 1)).        (2.1)
The vector b is a heuristic choice intended to "pick out" any large elements of A in
those cases where such elements fail to be revealed during the course of the algorithm.
Efficiency is improved by terminating the algorithm after computing - if it is the same
as the previous -, since it can be shown that convergence would otherwise be declared
after the subsequent computation of z.
To obtain a more accurate and reliable estimate than that provided by Algorithm
2.1 we could run the algorithm t times in succession on t different starting
vectors. This idea was suggested in [12], with each starting vector being the mean
of the unit vectors e j not already visited and with the algorithm being prohibited
from visiting unit vectors previously visited. Note that the estimates obtained this
way are nondecreasing in t. This approach has two weaknesses: it allows limited
communication of information between the t different iterations and the highest level
computational kernel remains matrix-vector multiplication. We therefore develop a
block algorithm that works with an n \Theta t matrix as a whole instead of t separate
n-vectors. The block approach offers the potential of better estimates, through providing
more information on which to base decisions, and it allows the use of level
3 BLAS operations, thus promising greater efficiency. The following algorithm estimates
not just the 1-norm of A, but, as a by-product, the 1-norms of the t columns
of A having largest 1-norms.
Algorithm 2.2 (block 1-norm power method). Given A ∈ R^{n×n} and a positive
integer t, this algorithm computes vectors g and ind with g_j = ‖A(:, ind_j)‖_1,
j = 1, …, t, such that g_j is a lower bound for the 1-norm of the column of A of jth
largest 1-norm.
Choose starting matrix X ∈ R^{n×t} with columns of unit 1-norm.
repeat
    Y = AX
    g_j = ‖Y(:, j)‖_1, j = 1, …, t
    Sort g so that g_1 ≥ ⋯ ≥ g_t and re-order ind correspondingly; ind_best = ind_1
    S = sign(Y)
    Z = A^T S
    h_i = ‖Z(i, :)‖_∞, i = 1, …, n
    X(:, j) = e_{ind_j}, j = 1, …, t, where ind_1, …, ind_t are the indices of the t largest h_i
until convergence (see Algorithm 2.4 for the tests used in practice)
Like the basic 1-norm power method Algorithm 2.2 has the attractive
property that it generates increasing sequences of estimates. Denote with a superscript
"(k)" quantities from the kth iteration of the loop in Algorithm 2.2 and let a j denote
the jth column of A.
Lemma 2.3. The sorted vectors g (k) and h (k) satisfy
and
2:
Proof. First, we have
1-j-t
r
But
r
y
r
r
Thus
r
r k1 kx (k)
r k1 - max
Furthermore, if h (k)
r s (k)
To prove (2.3), assume without loss of generality that x (k\Gamma1)
t. Then
has the form
Z
ff t+1 ff t+1
where each ff i in row i is, in general, different, and ff i - ka n. The algorithm
chooses the ind values corresponding to the t rows of Z (k) with largest 1-norm, and
the g (k+1)
on the next stage are at least as large as these 1-norms. Since row j
of Z (k) has 1-norm ka
follows.
Algorithm 2.2 has three possible sources of inefficiency. First, the columns of S are
vectors of ±1s and so a pair of columns s_i and s_j may be parallel (s_i = ±s_j), in which
case the corresponding columns of Z = A^T S are parallel and a matrix–vector multiplication
is redundant. There can be up to t−1 redundant matrix–vector products per iteration, this
maximum being achieved on the second iteration with S the matrix of ones when A has
nonnegative elements. The second possible inefficiency is that a column of S may be parallel
to one from the previous iteration, in which case its contribution to Z = A^T S is again a
redundant computation. We choose to detect parallel columns and replace them by random
vectors rand{−1, 1} not already in S or the previous S, where rand{−1, 1} denotes a vector
with elements from the uniform distribution on the set {−1, 1}. The detection is done by
forming inner products between columns and looking for elements of magnitude n. The total
cost of these computations is O(nt^2) flops, which is negligible compared with the 2n^2 t
flops required for each matrix product in the algorithm, since t ≪ n in practice.
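The parallel-column test is easy to express in code: two ±1 vectors of length n are parallel exactly when their inner product has magnitude n. The following helper is our own sketch, not part of the paper:

    import numpy as np

    def resample_parallel_columns(S, S_old=None):
        """Ensure no column of S is parallel to an earlier column of S or to a column of S_old."""
        n, t = S.shape
        prev = S_old if S_old is not None else np.zeros((n, 0))
        for j in range(t):
            others = np.hstack([prev, S[:, :j]])
            # |s_i^T s_j| == n  <=>  s_i = +-s_j  for +-1 vectors
            while np.any(np.abs(others.T @ S[:, j]) == n):
                S[:, j] = np.random.choice([-1.0, 1.0], size=n)
        return S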
Our strategy could be extended to check for parallel columns between the current
S and all previous S; we return to this possibility in section 3. If all the columns
of S are parallel to columns from the previous iteration then it is easy to see that
Algorithm 2.2 is about to converge; we therefore immediately terminate the iteration
without computing Z.
Finally, on step k we can have x^(k)_j = e_i for an index i that was already visited on
an earlier iteration, so that the computation of Ax^(k)_j then repeats an earlier
computation. Repeated vectors e_j are easily avoided by keeping track of the indices of
all the previously used e_j and selecting ind_1, …, ind_t from among the indices not
previously used. If all the indices are repeats, then again we prematurely terminate the
iteration, saving a matrix product.
Note that the first and third inefficiencies are possible only for t > 1, while the
second can occur even for the original 1-norm power method.
Our strategy of detecting redundant computations and replacing them by ones
that provide new information has three benefits. First, it can reduce the amount of
computation, through premature detection of convergence. Second, it can lead to better
estimates. For the columns of S this depends on the random replacement vectors
generated, but for the e j the improvement is deterministic up to future replacements
of columns of S. The third benefit is that the dimensions of the matrix multiplications
remain constant at each iteration, as opposed to varying if we simply skip redundant
computations; this helps us to make efficient use of the computing resources.
The next algorithm incorporates these modifications. The algorithm is forced to
take at least 2 (and at most itmax) iterations so that it computes at least t columns
of A; it also explicitly identifies the approximate maximizing vector that achieves the
norm estimate.
Algorithm 2.4 (practical block 1-norm estimator). Given A ∈ R^{n×n} and positive
integers t and itmax ≥ 2, this algorithm computes a scalar est and vectors v and
w such that est ≤ ‖A‖_1, w = Av and ‖w‖_1 = est ‖v‖_1.
Choose starting matrix X 2 R n\Thetat with columns of unit 1-norm.
recording indices of used unit vectors e j .
est
est
est ? est old or
ind best
est - est old , est = est old , goto (6), end
est
(2) If every column of S is parallel to a column of S old , goto (6), end
Ensure that no column of S is parallel to another column of S
or to a column of S old by replacing columns of S by randf\Gamma1; 1g.
best , goto (6), end
re-order ind correspondingly.
(5) If ind(1: t) is contained in ind \Gamma hist, goto (6), end
Replace ind(1: t) by the first t indices in ind(1: n) that are
not in ind \Gamma hist.
hist ind(1: t)]
best
Note that this algorithm does not explicitly compute lower bounds for the 1-norms
of all t largest columns of A. If this information is required (and we are not aware of
any applications in which it is needed) then it can be obtained by keeping track of
the largest
Inequality (2.2), now expressed as est (k) - h (k)
est (k+1) , is still valid, except
that h (k)
est (k+1) is possible on the last iteration if the original ind (k)
1 is a repeat
(this event is handled by the test (1)). However, (2.3) is no longer true, because of
the avoidance of repeated indices.
How do Algorithms 2.4 and 2.2 compare? Ignoring the itmax test, Algorithm 2.4
terminates in at most n=t+1 iterations, since t vertices e j are visited on each iteration
after the first and no vertex can be visited more than once. The same cannot be said
of Algorithm 2.2 because of the possibility of repeated vertices. Algorithm 2.4 can
produce a smaller estimate than Algorithm 2.2: when a redundant computation is
avoided, the new information computed can lead to an apparently more promising
vertex (based on the relative sizes of the h i ) replacing one that actually corresponds
to a larger column of A. However it is more likely, when Algorithms 2.4 and Algorithm
2.2 produce different results, that Algorithm 2.4 produces a better estimate, as
in the following example.
Example 2.5. For a certain starting
obtained the following results. For Algorithm 2.2:
1: (0, 8.10e-001) (0, 6.13e-001)
2: (10, 1.77e+000) (4, 1.76e+000)
Underestimation ratio: 1.99e-001
The first column denotes the iteration number. The kth row gives the sorted g (k)
t, each one preceded by the corresponding index ind i . (Since X 1 does not have
columns e j , the ind i for the first iteration are shown as zero.) For Algorithm 2.4:
1: (0, 8.10e-001) (0, 6.13e-001)
2: (10, 1.77e+000) (4, 1.76e+000)
parallel column between S and S-old
3: (2, 8.87e+000) (5, 4.59e+000)
Exact estimate!
Algorithm 2.2 converges after 2 iterations and produces an estimate too small by a
factor 5. However, on the second iteration Algorithm 2.4 detects a column of S parallel
to one of S old and replaces it. The new column produces a different Z matrix and
causes the convergence test (4) to be failed. The extra iteration visits the (unique)
column of maximum 1-norm, so an exact estimate is obtained.
When t = 1, Algorithm 2.4 differs from the modified version of Algorithm 2.1
used in LAPACK 3.0 [14, Alg. 4.1], [16, Alg. 14.4] in two ways. First, Algorithm 2.4
does not use the "extra estimate" (2.1). Second, the LAPACK algorithm checks whether
the new sign vector equals the previous one, but not whether it equals its negative,
as does Algorithm 2.4. This is an oversight.
We recommend that LAPACK's xLACON be modified to include the extra test. This
change will not affect the estimates produced but will sometimes reduce the number
of iterations.
We now explain our choice of starting matrix. We take the first column of X to
be the vector of 1s, which is the starting vector used in Algorithm 2.1. This has the
advantage that for a matrix with nonnegative elements the algorithm converges with
an exact estimate on the second iteration, and such matrices arise in applications,
for example as a stochastic matrix or as the inverse of an M-matrix. The remaining
columns are chosen as randf\Gamma1; 1g, with a check for and correction of parallel columns,
exactly as for S in the body of the algorithm. We choose random vectors because it
is difficult to argue for any particular fixed vectors and because randomness lessens
the importance of counterexamples (see the comments in the next section).
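In code, the starting matrix just described can be formed as follows; this is a sketch of our own, reusing the parallel-column resampling idea sketched earlier:

    import numpy as np

    def starting_matrix(n, t):
        """First column of ones, remaining columns random +-1, all scaled to unit 1-norm."""
        X = np.random.choice([-1.0, 1.0], size=(n, t))
        X[:, 0] = 1.0
        for j in range(1, t):
            while np.any(np.abs(X[:, :j].T @ X[:, j]) == n):   # resolve parallel columns
                X[:, j] = np.random.choice([-1.0, 1.0], size=n)
        return X / n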
Next, we consider complex matrices, which arise in the pseudospectrum application
of section 4. Everything in this section remains valid for complex matrices
provided that sign(A) is redefined as the matrix (a_{ij}/|a_{ij}|) and
transposes are replaced by conjugate transposes. The matrix S is now complex with
elements of unit modulus and we are much less likely to find parallel columns of S
from one iteration to the next or within the current S. Therefore for complex matrices
we omit the tests for parallel columns. However, we take the same, real, starting
matrix. There is one further question in the complex case. In the analogue of Algorithm
2.1 for complex matrices in [14, Alg. 5.1] z is defined as z = Re(A -), based
on subgradient considerations. In our block algorithm should we take
The former can be justified from heuristic considerations and preserves
more information about A. We return to this question in the next section.
The motivation for Algorithm 2.4 is to enable more accurate and reliable estimates
to be obtained than are provided by the 1-norm power method. The question arises
of how the accuracy and reliability of the estimates varies with t. Little can be
said theoretically because, unlike the approach of [12] mentioned at the start of this
section, the estimates are not monotonic in t. If we run Algorithm 2.4 for t 1 and for
using a common set of t 1 starting vectors, we can obtain a smaller estimate
for t 2 than for t 1 , because a less promising choice of unit vector e j can turn out to
be better than a more promising choice made with more available information. Non-monotonicity
is unlikely, however, and we argue that it is a price worth paying for
the other advantages that accrue. In the next section we investigate the behaviour of
Algorithm 2.4 empirically.
3. Numerical Experiments. Our aim in this section is to answer the following
questions about Algorithm 2.4, bearing in mind that for the algorithm is an
implementation of the well understood 1-norm power method.
1. How does the accuracy and reliability of the norm estimates vary with t?
2. How good are the norm estimates in general?
3. How does the number of iterations behave for t ? 1?
Note that we are not searching for counterexamples, as was done for previous condition
estimators [4], [5], [14]. We know that for a fixed starting matrix and any t - n there
must be families of matrices whose norm is underestimated by an arbitrarily large
factor, since the algorithm samples the behaviour of the n \Theta n matrix A on fewer than
vectors. But since the algorithm uses a random starting matrix for t ? 1, each
counterexample will be valid only for particular starting matrices.
All our tests have been performed with Matlab.
Our first group of tests deals with random real matrices. Amongst the matrices
A we used were:
1. A from the normal N(0; 1) distribution (denoted randn) and its inverse, orthogonal
QR factor, upper triangular part, and inverse of the upper triangular
part.
2. A from the uniform distribution on the set f\Gamma1; 0; 1g (denoted rand(-1,0,1-)),
A \Gamma1 from the uniform distribution on the interval [0; 1].
3. A and A \Gamma1 of the form U \Sigma V T where U and V are random orthogonal matrices
and with the singular values oe i distributed exponentially,
arithmetically or with all except the smallest equal to unity, and with 2-norm
condition number ranging from 1 to 10 16 .
Note that we omit, for example, matrices from the uniform [0; 1] and uniform f\Gamma1; 1g
distributions, because for such matrices Algorithm 2.4 is easily seen to produce the
exact norm for all t.
We chose n and t in the range
For each test matrix we recorded a variety of statistics including the underestimation
ratio est/‖A‖_1 averaged and minimized over each type of A for fixed n and
t, the relative error |est − ‖A‖_1|/‖A‖_1, and the number of iterations. We declared an
estimate exact if the relative error was no larger than 10^{-14} (the unit roundoff is of
order 10^{-16}). For a given matrix A we first generated a starting matrix with t_max
columns, where t_max is the largest value of t to be used, and then ran Algorithm 2.4
using its first t columns as the starting matrix, for each t. In this way we could see the effect of
increasing t. In particular, we checked what percentage of the estimates for a given t
were at least as large as the estimates for all smaller t; we denote this "improve%".
We set itmax = 5 in Algorithm 2.4.
First, we give some general comments on the results.
1. Increasing t usually gave larger average and minimum underestimation ratios,
though there were exceptions. The quantity improve% was 100 about half
the time and never less than 78.
2. The number of iterations averaged between 2 and 3 throughout, with maxima
ranging from 2 to 5 depending on the type of matrix. Thus increasing t from
1 has little effect on the number of iterations-an important fact that could
not be predicted from the theory. Although there are specially constructed
examples for which the 1-norm power method requires many iterations (one
is described below), it is rare for the limit of 5 iterations to come into effect.
3. Throughout the tests we also computed the extra estimate (2.1) used by
the LAPACK norm estimator [14]. As expected, in none of our tests (with
random matrices) was the extra estimate larger than the estimate provided
by Algorithm 2.4 with
In
Tables
3.1 and 3.2 we show detailed results for two particular types of random
matrix from among those described above, with
rand(-1,0,1-). The columns headed "Products" show the average and maximum
total number of matrix products AX and A T S. In each case 5000 matrices were used.
For the matrices inv(randn) taking significantly improves the worst-case and
average estimates and the proportion of exact estimates over
estimate is exact almost 98 percent of the time. For the matrices rand(-1,0,1-)
the improvements as t increases are less dramatic but still useful; notice that exactly
four matrix products were required in every case. As well as recording the
number of products, we checked how convergence was achieved. For the matrices
rand(-1,0,1-) convergence was always achieved at the test (4) in Algorithm 2.4,
while for inv(randn) convergence was declared at tests (4) and (2) in approximately
96 and 4 percent of the cases, respectively (with just a few instances of convergence
at (5) for t - 2). The last two columns of Table 3.1 show the average and maximum
number of parallel columns of S that were detected. A small number of repeated e j
vectors were detected and replaced (the largest average was 0.03, occurring for
and the maximum number of 7 occurred for columns or repeated
were detected for the matrices in Table 3.2.
The strategy of replacing parallel columns of S and repeated e j vectors has little
effect on the overall performance of Algorithm 2.4 in our tests with random matrices.
Since particular examples can be found where it is beneficial (see Example 2.5) and
the cost is negligible, we feel its use is worthwhile. However, we see no advantage
to extending the strategy to compare the columns of S with those of all previous S
matrices.
Higham [15] gives a tridiagonal matrix for which the 1-norm power method (Algo-
rithm 2.1) requires n iterations to converge. We have constructed a matrix for which
Table
Results for 5000 matrices inv(randn) of dimension 100.
Underest. ratio Products Parallel cols.
t min average % exact average max improve% average max
9 0.893 1.000 99.88 4.0 4 100.00 1.22 15
Table
Results for 5000 matrices rand(f-1,0,1g) of dimension 100.
Underest. ratio Products
t min average % exact average max improve%
9 0.775 0.951 28.56 4 4 90.74
the maximum iterations is required. It is the inverse of a bidiagonal matrix:
An
1 .
. ff3
1 .
. \Gammaff3
(The minus sign in front of the matrix is necessary!) It is straightforward to show
that when Algorithm 2.1 is applied to An (ff) it produces, for
Table
Results for matrix A 100 repetitions.
Underest. ratio Products
t min average % exact average max improve%
6 1.000 1.000 100.00 4.6 11 100.00
7 1.000 1.000 100.00 4.3 11 100.00
8 1.000 1.000 100.00 4.2 8 100.00
9 1.000 1.000 100.00 4.1 8 100.00
Thus every column of An (ff) is computed, in order from first to last, and the exact
norm is obtained. If the algorithm is terminated after p iterations then it produces
the estimate (1 \Gamma ff
behaves in exactly the same way.
We applied Algorithm 2.4 1000 times to A 100 as in all
our tests). The results are shown in Table 3.3; in the 6th column "11 " denotes that
convergence was declared because the iteration limit was reached (the percentage of
such occurrences varied from 100% for t = 1 to 0.1% for t = 7). The underestimation
ratio for t = 1 agrees with the theory and is unacceptably small (note that there is
no randomness, and hence only one estimate, for t = 1). But for all t ≥ 2 the average
norm estimates are satisfactory. The extra estimate (2.1) has the value 0.561; thus it
significantly improves the estimate for t = 1 but is worse than the average estimates
for all greater t.
For complex matrices we have tried both at (3) in
Algorithm 2.4. Tables 3.4 and 3.5 compare the two choices for 5000 100 \Theta 100 random
complex matrices of the form inv(rand+i*rand), where rand is a matrix from the
uniform distribution on the interval [0; 1]. In this test, larger underestimation ratios
are obtained for the percentage of exact estimates is higher, and the
statistics on the number of matrix products are slightly better. In other tests we have
found the complex choice of Z always to perform at least as well, overall, as the real
choice (see, for example, the riffle shuffle example in section 4). We therefore keep Z
complex in Algorithm 2.4. In version 3.0 of LAPACK the norm estimator has been
modified from that in version 2.0 to keep the vector z complex.
Finally, how does Algorithm 2.4 compare with the suggestion of Hager mentioned
at the beginning of section 2 of running the 1-norm power method t times in suc-
cession? In tests with random matrices we have found Hager's approach to produce
surprisingly good norm estimates, but they are generally inferior to those from Algorithm
2.4. Since Hager's approach is based entirely on level 2 BLAS operations
Algorithm 2.4 is clearly to be preferred.
4. Computing 1-Norm Pseudospectra. In this section we apply the complex
version of Algorithm 2.4 to the computation of 1-norm pseudospectra. For ε ≥ 0 and
Table
Results for 5000 matrices inv(rand+i*rand) of dimension 100, with
Underest. ratio Products
t min average % exact average max improve%
9 0.859 1.000 99.86 4.0 4 100.00
Table
Results for 5000 matrices inv(rand+i*rand) of dimension 100, with
Underest. ratio Products
t min average % exact average max improve%
9 0.819 0.999 98.12 4.0 6 99.74
any subordinate matrix norm the ε-pseudospectrum of A ∈ C^{n×n} is defined by [22]
    Λ_ε(A) = { z ∈ C : z is an eigenvalue of A + E for some E with ‖E‖ ≤ ε },
or, equivalently, in terms of the resolvent (zI − A)^{-1},
    Λ_ε(A) = { z ∈ C : ‖(zI − A)^{-1}‖ ≥ ε^{-1} }.
Most published work on pseudospectra has dealt with the 2-norm and the utility of 2-
norm pseudospectra in revealing the effects of non-normality is well appreciated [19],
The 2-norm and any other p-norm of an n \Theta n matrix differ by a factor at most
n. For small n, pseudospectra therefore do not vary much between different p-norms.
However, J'onsson and Trefethen have shown [17] that in Markov chain applications
the choice of norm for pseudospectra can be crucial. In Markov chains representing
a random walk on an m-dimensional hypercube and a riffle shuffle of m cards, the
transition matrices are of dimension exponential in m and factorial in m, respectively.
It is known that when measured in an appropriate way, these random processes converge
to a steady state not gradually but suddenly after a certain number of steps.
As these processes involve powers of matrices of possibly huge dimension, and as the
1-norm is the natural norm for probability 1 , 1-norm pseudospectra are an important
tool for explaining the transient behaviour [17].
One of the most useful graphical representations of pseudospectra is a plot of level
curves of the resolvent. We therefore consider the standard approach of evaluating
‖(zI − A)^{-1}‖_1 on an equally spaced grid of points z in some region of interest in the
complex plane and sending the results to a contour plotter. A variety of methods for
carrying out these computations for the 2-norm are surveyed by Trefethen [21], but
most of the ideas employed are not directly applicable to the 1-norm.
Explicitly forming (zI − A)^{-1} at each grid point is computationally expensive.
A more efficient approach is to factorize P(zI − A) = LU at each point z by LU
factorization with partial pivoting and to use Algorithm 2.4 to estimate ‖(zI − A)^{-1}‖_1;
the matrix multiplications in the algorithm become triangular solves with multiple
right-hand sides. This approach can take advantage of sparsity in A. However, the
method still requires O(n 3 ) operations per grid point when A is full.
We consider instead a more efficient approach that is applicable when we can
compute a Schur factorization [9, Chap. 7]
    A = QTQ^*,
where Q is unitary and T is upper triangular. Given this factorization we have
    (zI − A)^{-1} = Q(zI − T)^{-1}Q^*,        (4.2)
so that forming a matrix product with (zI − A)^{-1} or its conjugate transpose reduces
to solving a multiple right-hand side triangular system and multiplying by Q and
Q^*. Given this initial decomposition the cost per grid point of estimating the resolvent
norm using Algorithm 2.4 is just O(n 2 t) flops-a substantial saving over the
first approach, and of the same order of magnitude as the cost of standard methods
for computing 2-norm pseudospectra (provided t is small). In place of the Schur
decomposition we could use a Hessenberg decomposition, computed either by Gauss
transformations or Householder transformations [9, Sec. 7.4]; these decompositions
are less expensive but the cost per grid point is a larger multiple of n 2 flops because
of the need to factorize a Hessenberg matrix.
In more detail our approach is as follows.
Algorithm 4.1 (1-norm pseudospectra estimation). Given A ∈ C^{n×n} and a
positive integer t, this algorithm estimates ‖(zI − A)^{-1}‖_1 on a specified grid of points
in the complex plane, using Algorithm 2.4 with parameter t.
Compute the Schur factorization A = QTQ^*.
for each grid point z
    Apply the complex version of Algorithm 2.4 to (zI − A)^{-1},
    with parameter t, using the representation (4.2).
end
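A Python sketch of Algorithm 4.1 follows. It assumes that a block 1-norm estimator is available (here scipy.sparse.linalg.onenormest is used, and we assume it accepts a complex LinearOperator); the resolvent is applied through the Schur form (4.2), so each grid point costs only triangular solves and multiplications by Q and Q^*:

    import numpy as np
    from scipy.linalg import schur, solve_triangular
    from scipy.sparse.linalg import LinearOperator, onenormest

    def resolvent_onenorms(A, grid, t=2):
        """Estimate ||(zI - A)^{-1}||_1 at each z in grid via A = Q T Q^*."""
        n = A.shape[0]
        T, Q = schur(A.astype(complex), output='complex')   # T upper triangular, Q unitary
        Qh = Q.conj().T
        ests = []
        for z in grid:
            zT = z * np.eye(n) - T
            mv = lambda v: Q @ solve_triangular(zT, Qh @ v)                       # (zI - A)^{-1} v
            rmv = lambda v: Q @ solve_triangular(zT.conj().T, Qh @ v, lower=True) # adjoint action
            R = LinearOperator((n, n), matvec=mv, rmatvec=rmv, dtype=complex)
            ests.append(onenormest(R, t=t))
        return np.array(ests)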
Our experience is that Algorithm 4.1 frequently leads to visually acceptable contour
plots even for 1. As the following example shows, however, a larger value of t
may be needed. We take an example from Markov chains: the Gilbert-Shannon-Reeds
model of a riffle shuffle on a deck of n cards. The transition matrix P is of dimension
n!. Remarkably, as J'onsson and Trefethen explain [17], the dimension of the matrix
1 Because of the use of row vectors in the Markov chain literature, it is actually the ∞-norm that
is relevant. By using ‖A‖_∞ = ‖A^T‖_1 we can continue to work with the 1-norm.
can be reduced to n by certain transformations that preserve the 1-norms of powers
and of the resolvent. For this experiment we took and, as in [17], we computed
pseudospectra of the "decay matrix" working with the reduced
form of A.
Figure
4.1 shows approximations to the 1-norm pseudospectra computed on a
100 \Theta 100 grid, with 2.4. Contours are plotted for
the dashed line marks the unit circle, and the eigenvalues are
plotted as dots. We did not exploit the fact that, since A is real, the pseudospectra
are symmetric about the real axis. The contour plot for clearly incorrect in
the outer contour, while yields an improvement and the plot for
to visual accuracy. Table 4.1 summarizes the key statistics from these computations,
showing that on average the norm estimates had about t correct significant digits for
3. The figure and the table confirm that it is better to keep Z complex in
Algorithm 2.4.
We give a further example, in which A is a spectral discretization of an integral
operator of Landau [21, Sec. 21]; like the operator, A is complex and symmetric. We
took dimension Fresnel number 8. As noted in [21] for the 2-norm,
a fine grid is needed to resolve the details for this example; we used a 200 \Theta 200
grid.
Figure
4.2 shows the computed pseudospectra for
summarizes the statistics for these values and for 2. Contours are plotted for
the dashed line marks the unit circle, and the eigenvalues
are plotted as dots. In Table 4.2, "11 " denotes that convergence was declared because
the iteration limit was reached (the percentage of such occurrences was 0.12% for
and 0.025% for one of the contour lines misses an eigenvalue in
the north-west corner of the plot. For the plot differs from the exact one only
by some tiny oscillations in two outer contours. While Algorithm 2.4 performs well
even for small t, as measured by the underestimation ratio, quite accurate values of
the 1-norm of the resolvent are needed in this example in order to produce smooth
contours.
5. Conclusions. We have derived a new matrix 1-norm estimation algorithm,
Algorithm 2.4, with a number of key features. Most importantly, the algorithm has a
parameter t that can be used to control the accuracy and reliability of the estimate
(which is actually a lower bound). While there is no guarantee that increasing t increases
the estimate (leaving aside the fact that the starting matrix is partly random),
the estimate typically does increase with t, leading quickly to one or more correct significant
digits in the estimate. A crucial property of the algorithm is that the number
of iterations and matrix products required for convergence is essentially independent
of t (for random matrices about 2 iterations are required on average, corresponding
to 4 products of n \Theta n and n \Theta t matrices). The algorithm avoids redundant computations
and keeps constant the size of the matrix multiplications. In future work we
intend to investigate how the choice of t affects the efficiency of the algorithm in a
high performance computing environment.
Unlike the statistically-based norm estimation techniques in [7], [11] which currently
apply only to real matrices, our algorithm handles both real and complex
matrices.
Since our algorithm uses a partly random starting matrix for t - 2, it is natural
to ask whether bounds, valid for all A, can be obtained on the probability of the
estimate being within a certain factor of kAk 1 . We feel that the very features of the
algorithm that make it so effective make it difficult or impossible to derive useful
bounds of this type.
For t = 1 our algorithm is very similar to the estimator in LAPACK, the differences
being that our algorithm omits the extra estimate (2.1) and for real matrices
we test for parallel rather than simply repeated sign vectors (which improves the ef-
ficiency). Unlike in the estimator in LAPACK 2.0, for complex matrices we do not
take the real part of the z vector (which improves both the quality of the estimates
and the efficiency), and this change has been incorporated into LAPACK 3.0.
The new algorithm makes an attractive replacement for the existing LAPACK
estimator. The value t = 2 would be the natural default choice (with the extra
estimate (2.1) included for extra reliability and backward compatibility) and a user
willing to pay more for more accurate and reliable 1-norm estimates would have the
option of choosing a larger t.
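Readers who simply want to use such an estimator can note that SciPy's scipy.sparse.linalg.onenormest is documented as an implementation of this block algorithm, with the block size exposed as the parameter t. A usage sketch:

    import numpy as np
    from scipy.sparse.linalg import onenormest

    A = np.linalg.inv(np.random.randn(100, 100))
    est = onenormest(A, t=2)                                   # default block size
    est4, v, w = onenormest(A, t=4, compute_v=True, compute_w=True)
    exact = np.abs(A).sum(axis=0).max()                        # true 1-norm, for comparison
    print(est, est4, exact)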
Acknowledgements
. We thank Nick Trefethen for providing M-files that compute
the riffle shuffle and Landau matrices used in section 4.
Table
Results for riffle shuffle example.
Underest. ratio Products
t min average % exact average max
Fig. 4.1. 1-norm pseudospectra for the riffle shuffle. Clockwise from top left:
Table
Results for Landau matrix example.
Underest. ratio Products
t min average % exact average max
Fig. 4.2. 1-norm pseudospectra for the Landau matrix. Clockwise from top left:
N. J. HIGHAM AND F. TISSEUR
--R
Solving sparse linear systems with sparse backward error.
The power method for
An estimate for the condition number of a matrix.
A set of counter-examples to three condition number estimators
Open problems in numerical linear algebra.
Estimating extremal eigenvalues and condition numbers of matrices.
Matrix Computations.
A stable variant of the secant method for solving nonlinear equations.
Condition estimates.
A survey of condition number estimation for triangular matrices.
FORTRAN codes for estimating the one-norm of a real or complex matrix
Experience with a matrix norm estimator.
Accuracy and Stability of Numerical Algorithms.
A numerical analyst looks at the
Convergence of a subgradient method for computing the bound norm of matrices.
Pseudospectra of matrices.
Pseudospectra of linear operators.
Computation of pseudospectra.
Spectra and Pseudospectra: The Behavior of Non-Normal Matrices and Operators
--TR
--CTR
J. R. Cash , F. Mazzia , N. Sumarti , D. Trigiante, The role of conditioning in mesh selection algorithms for first order systems of linear two point boundary value problems, Journal of Computational and Applied Mathematics, v.185 n.2, p.212-224, 15 January 2006
J. R. Cash , F. Mazzia, A new mesh selection algorithm, based on conditioning, for two-point boundary value codes, Journal of Computational and Applied Mathematics, v.184 n.2, p.362-381, 15 December 2005 | LAPACK;matrix condition number;p-norm power method;level 3 BLAS;1-norm pseudospectrum;condition number estimation;matrix 1-norm;matrix norm estimation |
354402 | Constraint Preconditioning for Indefinite Linear Systems. | The problem of finding good preconditioners for the numerical solution of indefinite linear systems is considered. Special emphasis is put on preconditioners that have a 2 2 block structure and that incorporate the (1,2) and (2,1) blocks of the original matrix. Results concerning the spectrum and form of the eigenvectors of the preconditioned matrix and its minimum polynomial are given. The consequences of these results are considered for a variety of Krylov subspace methods. Numerical experiments validate these conclusions. | Introduction
In this paper we are concerned with investigating a new class of preconditioners
for indefinite systems of linear equations of a sort which arise in constrained
optimization as well as in least-squares, saddle-point and Stokes problems. We
attempt to solve the indefinite linear system
    [ A   B^T ] [ x ]     [ c ]
    [ B   0   ] [ y ]  =  [ d ],        (1.1)
where A ∈ R^{n×n} is symmetric and B ∈ R^{m×n}. Throughout the paper we shall
assume that m - n and that A is non-singular, in which case B must be of full
rank.
Example 1. Consider the problem of minimizing a function of n variables
subject to m linear equality constraints on the variables, i.e.
    minimize   f(x) = (1/2) x^T A x − c^T x
    subject to Bx = d.        (1.2)
Any finite solution to (1.2) is a stationary point of the Lagrangian function
    L(x, λ) = (1/2) x^T A x − c^T x + λ^T (Bx − d),
where the λ_i are referred to as Lagrangian multipliers. By differentiating L with
respect to x and λ, the solution to (1.2) is readily seen to satisfy linear
equations of the form (1.1) with y = λ. For this
application these are known as the Karush-Kuhn-Tucker (KKT) conditions. 2
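For concreteness, the saddle-point matrix of (1.1) and the constraint preconditioner studied below can be assembled as in the following sketch; the choice G = diag(A) is only an illustrative assumption, not the paper's prescription:

    import numpy as np

    def kkt_matrices(A, B, G=None):
        """Return the coefficient matrix [[A, B^T], [B, 0]] and the preconditioner [[G, B^T], [B, 0]]."""
        m = B.shape[0]
        if G is None:
            G = np.diag(np.diag(A))        # illustrative approximation to A
        O = np.zeros((m, m))
        K = np.block([[A, B.T], [B, O]])
        P = np.block([[G, B.T], [B, O]])
        return K, P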
Example 2 (The Stokes Problem). The Stokes equations in compact form are
defined by
    −∇²u + ∇p = f,    div u = 0.        (1.3)
Discretising equations (1.3) together with the boundary conditions defines a linear
system of equations of the form (1.1), where the right-hand side is determined by f
and the boundary data.
Among the most important iterative methods currently available, Krylov subspace
methods apply techniques that involve orthogonal projections onto subspaces of the form
    K(A, r) = span{ r, Ar, A²r, A³r, … }.
The most common schemes that use this idea are the method of conjugate
gradients (CG) for symmetric positive definite matrices, the method of minimum
residuals (MINRES) for symmetric and possibly indefinite matrices and
the generalised minimum residual method (GMRES) for unsymmetric matrices,
although many other methods are available-see for example Greenbaum [12].
One common feature of the above methods is that the solution of the linear
system (1.1) is found within n +m iterations in exact arithmetic-see Joubert
and Manteuffel [14, p. 152]. For very large (and possibly sparse) linear systems
this upper limit on the number of iterations is often not practical. The idea
of preconditioning attempts to improve on the spectral properties, i.e. the
clustering of the eigenvalues, such that the total number of iterations required
to solve the system to within some tolerance is decreased substantially.
In this paper we are specifically concerned with non-singular preconditioners
of the form
    G = [ G   B^T ]
        [ B   0   ],        (1.4)
where G ∈ R^{n×n} approximates, but is not the same as, A. The inclusion of the
exact representation of the (1,2) and (2,1) matrix blocks in the preconditioner,
which are often associated with constraints (see Example 1), leads one to hope
for a more favourable distribution of the eigenvalues of the (left-)preconditioned
linear system
    G^{-1} A [ x ]  =  G^{-1} [ c ]
             [ y ]            [ d ].        (1.5)
Since these blocks are unchanged from the original system, we shall call G a
constraint preconditioner. A preconditioner of the form G has recently been
used by Luk-san and Vl-cek [16] in the context of constrained non-linear programming
problems-see also Coleman [4], Polyak [18] and Gould et al. [11].
Here we derive arguments that confirm and extend some of the results in [16]
and highlight the favourable features of a preconditioner of the form G. Note
that Golub and Wathen [10] recently considered a symmetric preconditioner of
the form (1.4) for problems of the form (1.1) where A is non-symmetric.
In Section 2 we determine the eigensolution distribution of the preconditioned
system and give lower and upper bounds for the eigenvalues of G \Gamma1 A
in the case when the submatrix G is positive definite. Section 3 describes the
convergence behaviour of a Krylov subspace method such as GMRES, Section 4
investigates possible implementation strategies, while in Section 5 we give numerical
results to support the theory developed in this paper.
Preconditioning A
For symmetric (and in general normal) matrix systems, the convergence of an
applicable iterative method is determined by the distribution of the eigenvalues
of the coefficient matrix. In particular it is desirable that the number of
distinct eigenvalues, or at least the number of clusters, is small, as in this case
convergence will be rapid. To be more precise, if there are only a few distinct
eigenvalues then optimal methods like CG, MINRES or GMRES will terminate
(in exact arithmetic) after a small and precisely defined number of steps.
We prove a result of this type below. For non-normal systems convergence as
opposed to termination is not so readily described-see Greenbaum [12, p. 5].
2.1 Eigenvalue Distribution
The eigenvalues of the preconditioned coefficient matrix G \Gamma1 A may be derived
by considering the general eigenvalue problem
x
y
x
y
Y Z
be an orthogonal factorisation of B T , where
n\Thetam and Z 2 IR n\Theta(n\Gammam) is a basis for
the nullspace of B. Premultiplying (2.6) by the non-singular and square matrix6 6 4
and postmultiplying by its transpose gives6 6 4
x z
x y
x z
x y
with and where we made use of the equalities
Performing a simultaneous sequence of row and column inter-
changes on both matrices in (2.7) reveals two lower block-triangular matrices
~
and thus the preconditioned coefficient matrix G \Gamma1 A is similar to
I
\Theta (Z T GZ)
Here the precise forms of \Theta, \Upsilon and \Gamma are irrelevant for the argument that
they are in general non-zero. We just proved the following theorem.
Theorem 2.1. Let A ∈ R^{(n+m)×(n+m)} be a symmetric and indefinite matrix
of the form A = [A, B^T; B, 0], where A ∈ R^{n×n} is symmetric and B ∈ R^{m×n}
is of full rank. Assume Z is an n × (n − m) basis for the nullspace of B.
Preconditioning A by a matrix of the form G = [G, B^T; B, 0], where G ∈ R^{n×n}
is symmetric, G ≠ A and B ∈ R^{m×n} is as above, implies that the matrix G^{-1}A has
(1) an eigenvalue at 1 with multiplicity 2m; and
(2) n − m eigenvalues which are defined by the generalised eigenvalue
problem Z^T A Z x_z = λ Z^T G Z x_z.
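Theorem 2.1 is straightforward to verify numerically; the following sketch (with an arbitrarily chosen symmetric positive definite G, assumed only to make Z^T G Z positive definite) compares the spectrum of G^{-1}A with the eigenvalues of the reduced generalised problem:

    import numpy as np
    from scipy.linalg import eig, eigh, null_space

    n, m = 8, 3
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    B = rng.standard_normal((m, n))
    G = np.diag(np.abs(np.diag(A)) + 1.0)                    # symmetric positive definite, G != A

    O = np.zeros((m, m))
    calA = np.block([[A, B.T], [B, O]])
    calG = np.block([[G, B.T], [B, O]])

    evals = np.sort(eig(np.linalg.solve(calG, calA))[0].real)
    Z = null_space(B)                                        # B Z = 0
    reduced = np.sort(eigh(Z.T @ A @ Z, Z.T @ G @ Z, eigvals_only=True))

    print(np.sum(np.isclose(evals, 1.0)))                    # expect (at least) 2m eigenvalues at 1
    print(reduced)                                           # the remaining n - m eigenvalues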
Note that the indefinite constrained preconditioner applied to the indefinite linear
system (1.1) yields the preconditioned matrix P which has real eigenvalues.
Remark 1. In the above argument we assumed that B has full row rank and
consequently applied an orthogonal factorisation of B T which resulted in a
upper triangular matrix R 2 IR m\Thetam . If B does not have full row rank, i.e.
rows and columns can be
deleted from both matrices in (2.7), thus giving a reduced system of dimension
This removal of the redundant information does not
impose any restriction on the proposed preconditioner, since all mathematical
arguments equivalently apply to the reduced system of equations.
2.2 Eigenvector Distribution
We mentioned above that the termination for a Krylov subspace method is
related to the location of the eigenvalues and the number of corresponding
linearly independent eigenvectors. In order to establish the association between
eigenvectors and eigenvalues we expand the general eigenvalue problem (2.7),
yielding
From (2.11) it may be deduced that either In the former case
equations (2.9) and (2.10) simplify to
which can consequently be written as
Y Z
and
z
. Since Q is orthogonal, the general
eigenvalue problem (2.12) is equivalent to considering
with w 6= 0 if and only if oe = 1. There are m linearly independent eigenvectors
corresponding to linearly
independent eigenvectors (corresponding to eigenvalues
Now suppose - 6= 1, in which case x Equations (2.9) and (2.10) yield
The general eigenvalue problem (2.14) defines
m) of these are not equal to 1 and for which two cases have to be
distinguished. If x z 6= 0, y must satisfy
from which follows that the corresponding eigenvectors are defined by
If x we deduce from (2.15) that
and hence that As
z x T
0 in this case, no
extra eigenvectors arise.
Summarising the above, it is evident that P has
We now show that, under realistic assumptions, these eigenvectors are in fact
linearly independent.
Theorem 2.2. Let A ∈ R^{(n+m)×(n+m)} be a symmetric and indefinite matrix
of the form A = [A, B^T; B, 0], where A ∈ R^{n×n} is symmetric and B ∈ R^{m×n}
is of full rank. Assume the preconditioner G is defined by a matrix of the form
G = [G, B^T; B, 0], where G ∈ R^{n×n} is symmetric, G ≠ A and B ∈ R^{m×n} is as above. Let Z
denote an n \Theta (n \Gamma m) basis for the nullspace of B and suppose that Z T GZ
is positive definite. The preconditioned matrix G \Gamma1 A has n+m eigenvalues
as defined by Theorem 2.1 and linearly independent eigenvectors.
There are
eigenvectors of the form
that correspond to the
case
eigenvectors of the form
z x T
arising from
z
linearly independent,
m) eigenvectors of the form
that
correspond to the case - 6= 1.
Proof. To prove that the m eigenvectors of P are linearly independent
we need to show
y (1)
a (1)a (1)
x z
(2)
(2)
x y
(2)
(2)
y (2)
a (2)a (2)
x z
y (3)
a (3)a (3)
implies that the vectors a are zero vectors. Multiplying
by A and G \Gamma1 , and recalling that in the previous equation the first
matrix arises from the case - m), the second matrix from
the case - the last matrix arises
from - k
y (1)
a (1)a (1)
x z
(2)
(2)
x y
(2)
(2)
y (2)
a (2)a (2)
x z
y (3)
a (3)a (3)
Subtracting Equation (2.16) from (2.17) we obtain6 6 4
x z
y (3)
a (3)a (3)
which simplifies to6 6 4
x z
y (3)
a (3)a (3)
since - k
The assumption that Z T GZ is positive definite implies that x z
in (2.18) are linearly independent and thus a (3)
Similarly, a (2)
follows from the linear independence of
x z
(2)
x y
(2)
thus (2.16) simplifies to6 6 4
y (1)
a (1)a (1)
But y (1)
are linearly independent and thus a (1)
Remark 2. Note that the result of Theorem 2.2 remains true if Z^T(γA + σG)Z
is positive definite for some scalars γ and σ; see Parlett [17, p. 343] for details.
To show that the eigenvector bounds of Theorem 2.2 can in fact be attained,
consider the following two examples.
Example 3 (Minimum bound). Consider the matrices
so that 2. The preconditioned matrix P has an eigenvalue at
1 with multiplicity 3, but only one eigenvector arising from case (1) in Theorem
2.2. This eigenvector may be taken to be
. 2
Example 4 (Maximum bound). Let A 2 IR 3\Theta3 be defined as in Example 3,
but assume A. The preconditioned matrix P has an eigenvalue at 1 with
multiplicity 3 and clearly a complete set of eigenvectors. These may be taken
to be
, and
2.3 Eigenvalue Bounds
It is apparent from the calculations in the previous section that the eigenvalue
at 1 with multiplicity 2m is independent of the choice of G in the preconditioner.
On the contrary, the eigenvalues that are defined by (2.14) are highly
sensitive to the choice of G. If G is a close approximation of A, we can expect a
more favourable distribution of eigenvalues and consequently may expect faster
convergence of an appropriate iterative method. In order to determine a good
factorisation of A it will be helpful to find intervals in which the
are located. If G is a positive definite matrix one possible approach is
provided by Cauchy's interlace theorem.
Theorem 2.3 (Cauchy's Interlace Theorem, see [17, Theorem 10.1.1]).
Suppose H ∈ R^{n×n} is symmetric and that T ∈ R^{r×r} is a principal submatrix of H.
Label the eigenvalues of T and H as θ_1 ≥ ⋯ ≥ θ_r and λ_1 ≥ ⋯ ≥ λ_n. Then
    λ_{i+n−r} ≤ θ_i ≤ λ_i,   i = 1, …, r.
Proof. See Parlett [17, p. 203]. 2
The applicability of Theorem 2.3 is verified by recalling the definitions of Q and
Z given in the previous section, and by considering the generalised eigenvalue
problems
and
Since G is positive definite so is Q T GQ, and we may therefore write
RR T . Rewriting
and (2.20) gives
and
where
Now, since the matrix M \Gamma1 Q T AQM \GammaT is similar to G
defines the same eigenvalues ff i A. We may therefore
apply Theorem 2.3 directly. The result is that the
In particular, the - i are
bounded by the extreme eigenvalues of G \Gamma1 A so that the - i will necessarily be
clustered if G is a good approximation of A. Furthermore, a good preconditioner
G for A implies that Z T GZ is at least as good a preconditioner for Z T AZ. To
show that the preconditioner Z T GZ can in fact be much better, consider the
following example, taken from the CUTE collection [3].
Example 5. Consider the convex quadratic programming problem BLOWEYC
which may be formulated as
subject to
Z
Selecting a size parameter of 500 discretisation intervals defines a set of linear
equations of the form (1.1), where Letting G be the
diagonal of A, we may deduce from the above theory that the extreme eigenvalues
of G^{-1}A give a lower and an upper bound for the eigenvalues λ_i defined by
the general eigenvalue problem (2.14). In Figure 2.1 (a) the 1002 eigenvalues
of G^{-1}A are drawn as vertical lines, whereas Figure 2.1 (b) displays the 500
eigenvalues of (Z^T G Z)^{-1} Z^T A Z.
The spectrum of Figure 2.1 (a) is equivalent to a graph of the entire spectrum
of P, but with an eigenvalue at 1 and multiplicity 502 removed. Rounded to two
decimal places the numerical values of the two extreme eigenvalues of G \Gamma1 A are
Figure 2.1: Continuous vertical lines represent the eigenvalues of (a) G^{-1}A and (b) (Z^T G Z)^{-1} Z^T A Z.
0.02 and 1.98, whereas the extreme eigenvalues of (Z^T G Z)^{-1} Z^T A Z are given
by 0:71 and 1. Note that for this example a large number of eigenvalues of
are clustered in the approximate intervals [0:02; 0:38] and [1:65; 1:97]. The
eigenvalue distribution in Figure (2.1) (b) reveals that there is one eigenvalue
near 0:71 and a group of eigenvalues near 1. It follows that any appropriate
iterative method that solves (1.5) can be expected to converge in a very small
number of steps; this is verified by the numerical results presented in Section 5.It is readily seen from Example 5 that in this case the bounds provided by
Theorem 2.3 are not descriptive in that there is significantly more clustering of
the eigenvalues than implied by the theorem.
Convergence
In the context of this paper, the convergence of an iterative method under pre-conditioning
is not only influenced by the spectral properties of the coefficient
matrix, but also by the relationship between the dimensions n and m. In par-
ticular, it follows from Theorem 2.1 that in the special case when m = n the
preconditioned linear system (1.5) has only one eigenvalue at 1, with multiplicity
2n. For m < n, Theorem 2.1 gives an eigenvalue at 1 with multiplicity 2m and
n − m further (generally distinct) eigenvalues whose value may or may not be equal to
1. Before we examine how these results determine upper bounds on the number
of iterations of an appropriate Krylov subspace method, we recall the definition
of the minimum polynomial of a matrix.
Definition 1. Let A ∈ R^{(n+m)×(n+m)}. The monic polynomial f of minimum
degree such that f(A) = 0 is called the minimum polynomial of A.
The importance of this definition becomes apparent when considering subsequent
results and by recalling that similar matrices have the same minimum
polynomial.
The Krylov subspace theory states that the iteration with any method with
an optimality property such as GMRES will terminate when the degree of the
minimum polynomial is attained-see Axelsson [1, p. 463] (To be precise, the
number may be less in special cases where b is a combination of a few eigenvectors
that affect the 'grade' of A with respect to b). In particular, the degree of
the minimum polynomial is equal to the dimension of the corresponding Krylov
subspace (for general b) and so the following theorems are relevant.
Theorem 3.1. Let A ∈ R^{(n+m)×(n+m)} be a symmetric and indefinite matrix
of the form A = [A, B^T; B, 0], where A ∈ R^{n×n} is symmetric and B ∈ R^{m×n}
is of full rank. Let m = n. If A is preconditioned by a matrix of the form
G = [G, B^T; B, 0], where G ∈ R^{n×n}, G ≠ A and B ∈ R^{m×n} is as above, then the
Krylov subspace K(P, b) is of dimension at most 2 for any b.
Proof. Writing the preconditioned system (2.8) in its explicit form we
observe that P is in fact given by
I 0
\Upsilon I
where \Upsilon is non-zero if and only if A 6= G. To show that the dimension
of the corresponding Krylov subspace is at most 2 we need to determine
the minimum polynomial of the system. It is evident from (3.23) that the
eigenvalues of P are all 1; since (P − I)² = 0 while P − I ≠ 0 whenever Υ ≠ 0, the
minimum polynomial is of order 2. 2
Remark 3. It is of course possible in the case m = n to solve the (square)
constraint equations Bx = d for x and then to obtain y by solving B^T y = c − Ax. This
gives motivation for why the result of Theorem 3.1 is independent of G.
Remark 4. The important consequence of Theorem 3.1 is that termination
of an iteration method such as GMRES will occur in at most 2 steps for any
choice of b, even though the preconditioned matrix is not diagonalisable (unless A = G).
Theorem 3.2. Let A ∈ R^{(n+m)×(n+m)} be a symmetric and indefinite matrix
of the form A = [A, B^T; B, 0], where A ∈ R^{n×n} is symmetric and B ∈ R^{m×n}
is of full rank. Assume m < n and that A is non-singular. Furthermore, assume A
is preconditioned by a matrix of the form G = [G, B^T; B, 0], where G ∈ R^{n×n}
is symmetric, G ≠ A and B ∈ R^{m×n} is as above. If Z^T G Z
is positive definite, where Z is an n × (n − m) basis for the nullspace of B,
then the dimension of the Krylov subspace K(P, b) is at most n − m + 2.
Proof. From the eigenvalue derivation in Section 2.1 it is evident that the
characteristic polynomial of the preconditioned linear system (1.5) is
    (λ − 1)^{2m} ∏_{i=1}^{n−m} (λ − λ_i).
To prove the upper bound on the dimension of the Krylov subspace we need
to show that the order of the minimum polynomial is less than or equal to n − m + 2.
Expanding the polynomial
obtain a matrix of the form6 6 4
Here \Psi n\Gammam is defined by the recursive formula
\Theta
with base cases \Psi
Note that the (2; 1), (2; 2) and (3; 2) entries of matrix (3.24) are in fact zero,
since the - i m) are the eigenvalues of S, which is similar to
a symmetric matrix and is thus diagonalisable. Thus (3.24) may be written
as 2
and what remains is to distinguish two different cases for the value of \Phi n\Gammam ,
that is \Phi In the former case the order the minimum
polynomial of P is less than or equal to thus the dimension
of the Krylov subspace K(P; b) is of the same order. In the latter case the
dimension of K(P; b) is less than or equal to n \Gamma m+ 2 since multiplication
of (3.25) by another factor (P \Gamma I) gives the zero matrix.
The upper bound on the dimension of the Krylov subspace, as stated in Theorem
3.2, can be reduced in the special case when (Z repeated
eigenvalues. This result is stated in Theorem 3.3. The following (ran-
domly generated) example shows that the bound in Theorem 3.2 is attainable.
Example 6. Let A 2 IR 6\Theta6 and B T 2 IR 6\Theta2 be given by
2:69 1:62 1:16 1:60 0:81 \Gamma1:97
1:62 6:23 \Gamma1:90 1:89 0:90 0:05
0:81 0:90 \Gamma0:16 0:01 1:94 0:38
and assume that diag(A). For the above matrices the (3; 1) entry of (3.25)
is
\Gamma0:22 \Gamma0:02
It follows that the minimum polynomial is of order 6 and thus the bound given
in Theorem 3.2 is sharp. 2
Theorem 3.3. Let A ∈ IR^{(n+m)×(n+m)} be a symmetric and indefinite matrix
of the form
  [ A, B^T; B, 0 ],
where A ∈ IR^{n×n} is symmetric and B ∈ IR^{m×n} is of full rank. Assume
m < n, that A is non-singular and that A is preconditioned by a matrix of the form
  [ G, B^T; B, 0 ],
where G ∈ IR^{n×n} is symmetric, G ≠ A and B ∈ IR^{m×n} is as above.
Furthermore, let Z be an n × (n − m) basis for the nullspace of B and
assume that (Z^T G Z)^{-1}(Z^T A Z) has k (k ≤ n − m) distinct eigenvalues λ_i
of respective multiplicity μ_i, where Σ_{i=1}^{k} μ_i = n − m. Then
the dimension of the Krylov subspace K(P; b) is at most k + 2.
Proof. The proof is similar to the one for Theorem 3.2. In the case
when (Z^T G Z)^{-1}(Z^T A Z) has k distinct eigenvalues λ_i of multiplicity μ_i we
may, without loss of generality, write the characteristic polynomial of P as
  (λ − 1)^{2m} ∏_{i=1}^{k} (λ − λ_i)^{μ_i}.
Expanding the polynomial (P − I) ∏_{i=1}^{k} (P − λ_i I) we obtain the matrix (3.26).
Here Ψ_k is given by a recursive formula together with its base cases.
Note that the (2,1), (2,2) and (3,2) blocks of matrix (3.26) are in fact zero.
It follows that, for Φ_k ≠ 0, a further multiplication of (3.26) by (P − I) gives
the zero matrix and thus the dimension of the Krylov subspace K(P; b) is less
than or equal to k + 2. □
To verify that the bound in Theorem 3.3 is attainable consider the following
example.
Example 7. Let A ∈ IR^{4×4}, G ∈ IR^{4×4} and B^T ∈ IR^{4×1} be given matrices,
so that n = 4 and m = 1. Then two of the eigenvalues that are
defined by the generalised eigenvalue problem (2.14) are distinct and given by
{2, 4}. It follows that the (3,1) entry of (3.26) is non-zero,
and so the minimum polynomial is of order k + 2 = 4. □
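The Krylov-dimension bounds of Theorems 3.2 and 3.3 can be checked numerically in the same spirit as Example 6 and Example 7. The Python sketch below is an illustration with arbitrary random data; the choice of G (a positive diagonal, so that Z^T G Z is positive definite) and the rank tolerance are assumptions of the sketch, not of the theorems. It measures the dimension of K(P; b) as the size of the largest linearly independent set {b, Pb, P^2 b, ...} and compares it with n − m + 2.

import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
A = rng.standard_normal((n, n)); A = A + A.T
B = rng.standard_normal((m, n))
G = np.diag(np.abs(np.diag(A)) + 1.0)            # simple positive-definite diagonal G != A

K_A = np.block([[A, B.T], [B, np.zeros((m, m))]])
K_G = np.block([[G, B.T], [B, np.zeros((m, m))]])
P = np.linalg.solve(K_G, K_A)                    # preconditioned matrix

def krylov_dim(P, b, tol=1e-8):
    # smallest d such that b, Pb, ..., P^d b are linearly dependent
    v = b / np.linalg.norm(b)
    cols = []
    while True:
        cols.append(v)
        if np.linalg.matrix_rank(np.column_stack(cols), tol) < len(cols):
            return len(cols) - 1
        v = P @ v
        v = v / np.linalg.norm(v)

b = rng.standard_normal(n + m)
print(krylov_dim(P, b), "<=", n - m + 2)         # the measured dimension never exceeds n - m + 2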
4 Implementation
There are various strategies that can be used to implement the proposed pre-
conditioner, two of which are used in the numerical results in Section 5. The
first strategy applies the standard (preconditioned) GMRES algorithm [20],
where the preconditioner step is implemented by means of a symmetric indefinite
factorisation of (1.4). Such a factorisation of the preconditioner may be
much less demanding than the factorisation of the initial coefficient matrix if
G is a considerably simpler matrix than A. The second approach, discussed in
the next section, is based on an algorithm that solves a reduced linear system.
4.1 Conjugate Gradients on a Reduced Linear System
In [11] Gould et al. propose a Conjugate Gradient like algorithm to solve
equality constrained quadratic programming problems such as the one described
in Example 1. The algorithm is based on the idea of computing an implicit basis
Z which spans the nullspace of B. The nullspace basis is then used to remove
the constraints from the system of equations, thus allowing the application of
the Conjugate Gradients method to the (positive definite) reduced system.
Assume that W = Z^T G Z is a symmetric and positive definite preconditioner
matrix of dimension (n − m) × (n − m) and Z is an n × (n − m) matrix.
The algorithm can then be stated as follows; a Python sketch of the
explicit-nullspace variant is given after the algorithm.
Algorithm 4.1: Preconditioned CG for a Reduced System.
(1) Choose an initial point x satisfying the constraints.
(2) Compute the residual r, the preconditioned (projected) residual g from (4.27),
and the initial search direction p = −g.
(3) Repeat the following steps until convergence: update x along the direction p,
update the residual r+ (4.28), compute the preconditioned residual g+ (4.29),
and update the search direction p.
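A minimal Python sketch of this reduced-system iteration, written with an explicit nullspace basis Z for clarity (Gould et al. avoid forming Z by factorising the augmented system (4.30) below). The function name, the vectors c and d for the two blocks of the right-hand side, and the convergence test are illustrative assumptions; the reduced matrix Z^T A Z is assumed positive definite, as in the text.

import numpy as np
from scipy.linalg import null_space, solve

def reduced_pcg(A, B, c, d, G, tol=1e-6, maxit=200):
    """CG on the reduced system Z^T A Z, preconditioned by W = Z^T G Z."""
    Z = null_space(B)                              # columns span null(B)
    x = np.linalg.lstsq(B, d, rcond=None)[0]       # a particular point satisfying B x = d
    Ar, W = Z.T @ A @ Z, Z.T @ G @ Z               # reduced operator and preconditioner
    r = Z.T @ (A @ x + c)                          # reduced (projected) residual
    g = solve(W, r)                                # preconditioned residual
    p = -g
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        Ap = Ar @ p
        alpha = (r @ g) / (p @ Ap)
        x = x + alpha * (Z @ p)                    # step stays in the constraint manifold
        r_new = r + alpha * Ap
        g_new = solve(W, r_new)
        beta = (r_new @ g_new) / (r @ g)
        p = -g_new + beta * p
        r, g = r_new, g_new
    return x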
The computation of the preconditioned residual in (4.29) is often the most
expensive computational factor in the algorithm. Gould et al. suggest avoiding
the explicit use of the nullspace Z, but instead to compute g+ by applying a
symmetric indefinite factorisation of the augmented system
  [ G, B^T; B, 0 ] [ g+; v+ ] = [ r+; 0 ].                           (4.30)
In practice (4.30) can often be factored efficiently by using the MA27 package
of the Harwell Subroutine Library when G is a simple matrix block, whereas
the direct application of MA27 to the original system (1.1) is limited by space
requirements as well as time for large enough systems [6]. In this context the
factorisation consists of three separate routines, the first two of which analyse
and factorise the matrix in (4.30). They need to be executed only once in Step
(1) of Algorithm 4.1. Repeated calls to the third routine within MA27 apply
forward- and backward-substitutions to find the initial point x in Step (1), solve
for g in (4.27) and also to find g + in (4.29).
Remark 5. The computation of the projected residual g + is often accompanied
by significant roundoff errors if this vector is much smaller than the residual r+.
Iterative refinement is used in (4.28) to redefine r+ so that its norm is closer
to that of g+. The result is a dramatic reduction of the roundoff errors in the
projection operation; see Gould et al. [11].
5 Numerical Results
We now present the results of numerical experiments that reinforce the analysis
given in previous sections. The test problems we use are partly randomised
sparse matrices (Table 5.1) and partly matrices that arise in linear and non-linear
optimization (Table 5.2); see Bongartz et al. [3]. As indicated throughout,
all matrices are of the form
  [ A, B^T; B, 0 ],                                                  (5.31)
where A ∈ IR^{n×n} is symmetric, B ∈ IR^{m×n} has full rank and m ≤ n.
Four different approaches to finding solutions to (1.1) are compared-three
iterative algorithms based on Krylov subspaces, and the direct solver MA27
which applies a sparse variant of Gaussian elimination; see Duff and Reid [6].
To investigate possible favourable aspects of preconditioning it makes sense to
compare unpreconditioned with preconditioned solution strategies. The indefinite
nature of matrix (5.31) suggests the use of MINRES in the unpreconditioned
case. As outlined in Section 4 we employ two slightly different strategies
in order to implement the preconditioner G. The first method applies a standard
(full) GMRES(A) code (PGMRES in Tables 5.1 and 5.2 below), which is
Constraint Preconditioning 19
mathematically equivalent to MINRES(A) for symmetric matrices A, whereas
the second approach implements Algorithm 4.1 (RCG in Tables 5.1 and 5.2 be-
low). The choice G = diag(A) in the preconditioner is made for both PGMRES
and RCG.
                          Random I   Random II   Random III   Random IV
non-zero entries in A         2316        9740        39948       39948
non-zero entries in B          427        1871         3600         686
MINRES   # of iterations       174         387          639         515
         time in seconds       0.4         3.1         17.5        13.1
PGMRES   # of iterations        46          87          228         242
         time in seconds       0.2         3.9         96.0       108.9
RCG      # of iterations        36          67          197         216
         time in seconds       0.1         1.0          5.9         5.9
MA27     time in seconds       0.1         0.9          5.4         2.9

Table 5.1: Random test problems
All tests were performed on a SUN Ultra SPARC II 300 MHz (ULTRA-30)
workstation with 245 MB physical RAM and running SunOS Release 5.5.1.
Programs were written in standard Fortran 77 using the SUN WorkShop f77
compiler (version 4.2) with the -O optimization flag set. In order to deal with
large sparse matrices we implemented an index storage format that only stores
non-zero matrix elements; see Press et al. [19]. The termination criterion for
all iterative methods was taken to be a residual vector of order less than 10^{-6}
in the 2-norm.
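The experiments below were produced with Fortran codes; purely as an illustration of this kind of setup, a corresponding Python/SciPy sketch might look as follows. The matrices are random stand-ins rather than the actual test problems, G = diag(A) is used as in the experiments, and the library default stopping tolerances are accepted rather than the 10^{-6} criterion above.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
n, m = 200, 50
A = sp.random(n, n, density=0.02, random_state=2); A = (A + A.T) + 10 * sp.eye(n)
B = sp.random(m, n, density=0.05, random_state=3) + sp.eye(m, n)   # full row rank
Z0 = sp.csc_matrix((m, m))
K  = sp.bmat([[A, B.T], [B, Z0]], format="csc")                    # coefficient matrix
KG = sp.bmat([[sp.diags(A.diagonal()), B.T], [B, Z0]], format="csc")  # constraint preconditioner, G = diag(A)
b = rng.standard_normal(n + m)

lu = spla.splu(KG)                                                 # factor the preconditioner once
M = spla.LinearOperator(K.shape, matvec=lu.solve)

minres_steps, gmres_steps = [], []
spla.minres(K, b, callback=lambda xk: minres_steps.append(1))      # unpreconditioned MINRES
spla.gmres(K, b, M=M, callback=lambda arg: gmres_steps.append(1))  # preconditioned GMRES
print(len(minres_steps), len(gmres_steps))                         # compare the two iteration counts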
As part of its analysis procedure, MA27 accepts the pattern of some coefficient
matrix and chooses pivots for the factorisation and solution phases of
subsequent routines. The amount of pivoting is controlled by the special parameter u.
Modifying u within its positive range influences
the accuracy of the resulting solution, whereas a negative value prevents any
pivoting; see Duff and Reid [6]. In this context, the early construction of some
of the test examples with the default value was accompanied by difficulties
in the form of memory limitations. We met the trade-off between less use
of memory and solutions of high enough accuracy by choosing the parameter
value used for Tables 5.1 and 5.2.
The time measurements for the eight test examples indicate that the itera-
non-zero entries in A         3004         672        1020       13525
non-zero entries in B         2503         295         148       50284
MINRES   # of iterations       363    no conv.          51         180
         time in seconds       1.8    no conv.         0.1        14.2
PGMRES   # of iterations
         time in seconds       0.6         0.1         0.2        13.2
RCG      # of iterations
         time in seconds       0.6         0.1         0.2        16.2
MA27     time in seconds       0.5         0.1         0.1        15.6

Table 5.2: CUTE test problems
tion counts for each of the three proposed iterative methods are comparable as
far as operation counts, i.e. work, is concerned. The numerical results suggest
that the inclusion of the (1,2) and (2,1) blocks of A into the preconditioner,
together with the choice G = diag(A), results in a considerable reduction of iterations,
where the appropriate bounds of Theorems 3.1, 3.2 and 3.3 are attained in all
cases. Specifically, Theorem 3.1 applies in the context of problem CVXQP1.
Test problems RANDOM III and RANDOM IV in Table 5.1 emphasise the
storage problems that are associated with the use of long recurrences in the
PGMRES algorithm. The time required to find a solution to both RANDOM
III and RANDOM IV via the PGMRES algorithm is not comparable to any
of the other methods, which is due to the increased storage requirements and
the data trafficking involved. A solution to the memory problems is to restart
PGMRES after a prescribed number of iterations, but the iteration counts for
such restarts would not be comparable with those of full PGMRES.
The relevance of the time measurements for MA27 is commented on in the
next section.
6 Conclusion
In this paper we investigated a new class of preconditioners for indefinite linear
systems that incorporates the (2,1) and (2,2) blocks of the original matrix.
These blocks are often associated with constraints. In our numerical results
we used a simple diagonal matrix G to approximate the (1; 1) block of A,
even though other approximations, such as an incomplete factorisation of A,
are possible. We first showed that the inclusion of the constraints into the
preconditioner clusters at least 2m eigenvalues at 1, regardless of the structure
of G. However, unless G represents A exactly, P does not have a complete set
of linearly independent eigenvectors and thus the standard convergence theory
for Krylov subspace methods is not readily applicable.
To find an upper bound on the number of iterations required to solve linear
systems of the form (1.1) by means of appropriate subspace methods, we used
a minimum polynomial argument. Theorem 3.1 considers the special case
m = n, in which termination is guaranteed in two iterations. For m < n,
Theorem 3.2 gives a general (sharp) upper bound on the dimension of the Krylov
subspace, whereas Theorem 3.3 defines a considerably stronger result if some
of the eigenvalues defined by (Z^T G Z)^{-1}(Z^T A Z) are repeated.
In the special case when G is a positive definite matrix block we were able to
apply Cauchy's interlacing theorem in order to give an upper and lower bound
for the eigenvalues that are defined by the (2; 2) block of matrix (2.8).
To confirm the analytical results in this paper we used three different sub-space
methods, MINRES of Paige and Saunders for the unpreconditioned matrix
system and RCG of Gould et al. and also PGMRES of Saad and Schultz for
the preconditioned case. Overall, the results show that the number of iterations
is decreased substantially if preconditioning is applied. The Krylov subspaces
that are built during the execution of the two preconditioned implementations
are in theory of equal dimension for any of the eight test examples, and thus
PGMRES and RCG can be expected to terminate in the same number of steps.
However, convergence to any prescribed tolerance may occur for a different
number of steps since PGMRES and RCG minimize different quantities. This
can be seen in some of the examples. Nevertheless, we note that convergence
for both methods is attained much earlier than suggested by the bounds in
Theorems 3.1, 3.2 and 3.3.
The time measurements for MA27 in the last section suggest that the preconditioned
conjugate gradients algorithm, discussed in Section 4.1, is a suitable
alternative to the direct solver. Whereas both MINRES and especially PGM-
RES are considerably slower than MA27, the timings for RCG are in virtually
all cases comparable. For problems of large enough dimension or bandwidth
the resources required by MA27 must become prohibitive in which case RCG
becomes even more competitive.
Acknowledgments
The authors would like to thank Gene H. Golub for his insightful comments
during the process of this work.
--R
Cambridge University Press
CUTE: Constrained and Unconstrained Testing Environment
Linearly constrained optimization and projected preconditioned conjugate gradients
Direct Methods for Sparse Matrices
The multifrontal solution of indefinite sparse symmetric linear equations
Perturbation of eigenvalues of preconditioned Navier-Stokes operators
Polynomial Based Iteration Methods for Symmetric Linear Systems
Matrix Computations
An iteration for indefinite system and its application to the Navier-Stokes equations
On the solution of equality constrained quadratic programming problems arising in optimization
Iterative Methods for Solving Linear Systems
Iterative methods for non-symmetric linear systems
Iterative Methods for Linear and Nonlinear Equations
The Symmetric Eigenvalue Problem
Numerical Recipes in Fortran: The Art of Scientific Computing
GMRES: a generalised minimal residual algorithm for solving nonsymmetric linear systems
--TR
--CTR
Joo-Siong Chai , Kim-Chuan Toh, Preconditioning and iterative solution of symmetric indefinite linear systems arising from interior point methods for linear programming, Computational Optimization and Applications, v.36 n.2-3, p.221-247, April 2007
Luca Bergamaschi , Jacek Gondzio , Manolo Venturin , Giovanni Zilli, Inexact constraint preconditioners for linear systems arising in interior point methods, Computational Optimization and Applications, v.36 n.2-3, p.137-147, April 2007
Z. Dostl, An optimal algorithm for a class of equality constrained quadratic programming problems with bounded spectrum, Computational Optimization and Applications, v.38 n.1, p.47-59, September 2007
H. S. Dollar , N. I. Gould , W. H. Schilders , A. J. Wathen, Using constraint preconditioners with regularized saddle-point problems, Computational Optimization and Applications, v.36 n.2-3, p.249-270, April 2007
Luca Bergamaschi , Jacek Gondzio , Giovanni Zilli, Preconditioning Indefinite Systems in Interior Point Methods for Optimization, Computational Optimization and Applications, v.28 n.2, p.149-171, July 2004
S. Cafieri , M. D'Apuzzo , V. Simone , D. Serafino, Stopping criteria for inner iterations in inexact potential reduction methods: a computational study, Computational Optimization and Applications, v.36 n.2-3, p.165-193, April 2007
S. Bocanegra , F. F. Campos , A. R. Oliveira, Using a hybrid preconditioner for solving large-scale linear systems arising from interior point methods, Computational Optimization and Applications, v.36 n.2-3, p.149-164, April 2007
S. Cafieri , M. D'Apuzzo , V. Simone , D. Serafino, On the iterative solution of KKT systems in potential reduction software for large-scale quadratic problems, Computational Optimization and Applications, v.38 n.1, p.27-45, September 2007
Silvia Bonettini , Emanuele Galligani , Valeria Ruggiero, Inner solvers for interior point methods for large scale nonlinear programming, Computational Optimization and Applications, v.37 n.1, p.1-34, May 2007 | indefinite matrices;preconditioning;krylov subspace methods |
354647 | A bandwidth analysis of reliable multicast transport protocols. | Multicast is an efficient communication technique to save bandwidth for group communication purposes. A number of protocols have been proposed in the past to provide a reliable multicast service. Briefly classified, they can be distinguished into sender-initiated, receiver-initiated and tree-based approaches. In this paper, an analytical bandwidth evaluation of generic reliable multicast protocols is presented. Of particular importance are new classes with aggregated acknowledgments. In contrast to other approaches, these classes provide reliability not only in case of message loss but also in case of node failures. Our analysis is based on a realistic system model, including data packet and control packet loss, asynchronous local clocks and imperfect scope-limited local groups. Our results show that hierarchical approaches are superior. They provide higher throughput as well as lower bandwidth consumption. Relating to protocols with aggregated acknowledgments, the analysis shows only little additional bandwidth overhead and therefore high throughput rates. | INTRODUCTION
A number of reliable multicast transport protocols have been
proposed in the literature, which are based on the acknowledgment
scheme. Reliability is ensured by replying acknowledgment
messages from the receivers to the sender, either
to confirm correct data packet delivery or to ask for a re-
transmission. Reliable multicast protocols are usually clas-
sified into sender-initiated, receiver-initiated and tree-based
ones. Briefly characterized, in sender-initiated approaches
receivers reply positive acknowledgments (ACKs) to confirm
correct message delivery in contrast to receiver-initiated
protocols, which indicate transmission errors or losses by
negative acknowledgments (NAKs). Both classes can result
in an overwhelming of the sender and the network around
the sender by a large number of ACK or NAK messages.
This problem is the well-known acknowledgment implosion
problem, which is a vital challenge for the design of reliable
multicast protocols, since it limits the scalability for
large receiver groups. Tree-based approaches promise to be
scalable even for a large number of receivers, since they arrange
receivers into a hierarchy, called ACK tree [10]. Leaf
node receivers send their positive or negative acknowledgments
to their parent node in the ACK tree. Each non-leaf
receiver is responsible for collecting ACKs or NAKs only
from their direct child nodes in the hierarchy. Since the
maximum number of child nodes is limited, no node is overwhelmed
with messages and scalability for a large receiver
group is ensured. The maximum number of child nodes can
be determined according to the processing performance of
a node, its available network bandwidth, its memory equip-
ment, and its reliability.
In this paper we present a throughput analysis based on
bandwidth requirements as well as the overall bandwidth
consumption of all group members, which refer to the data
transfer costs. One characteristic of multicast transmissions
is that the component with the weakest performance may
determine the transmission speed. This means, a group
member with a low bandwidth connection, low processing
power, high packet loss rate or high packet delay may prevent
high transmission rates. Therefore, it is very useful to
be able to quantify the necessary requirements for a given
multicast protocol.
The remainder of this paper is structured as follows. In Section
2 we discuss the background of our throughput analysis
and take a look at related work. In Section 3 we briefly
classify the analyzed protocols. Our bandwidth evaluation
in Section 4 starts with a definition of the assumed system
model before the various protocol classes are analyzed in
detail. To illustrate the results, some numerical evaluations
are presented in Section 5 before we conclude with a brief
summary.
2. RELATED WORK
Reliable multicast protocols were already analyzed in previous
work. The first processing requirements analysis of
generic reliable multicast protocols was presented by Pingali
et al. [8]. They compared the class of sender- and receiver-initiated
protocols. Following analytical papers are often
based on the model and analytical methods introduced by
[8]. Levine et al. [3] have extended the analysis to the class
of ring- and tree-based approaches. In Maihofer et al. [5]
protocols with aggregated acknowledgments are considered.
A bandwidth analysis of generic reliable multicast protocols
was done by Kasera et al. [2], Nonnenmacher et al. [6]
and Poo et al. [9]. In [2], local recovery techniques are analyzed
and compared. The system model is based on a special
topology structure consisting of a source link from the
sender to the backbone, backbone links and nally tail links
from the backbone to the receivers. In [6] a similar topology
structure is used. They studied the performance gain
of protocols using parity packets to recover from transmission
errors. The protocols use receiver-based loss detection
with multicasted NAKs and NAK avoidance. In [9], non-hierarchical
protocols are compared. In contrast to previous
work, not only stop-and-wait error recovery is considered
in the analysis but also go-back-N and selective-repeat
schemes.
Our paper differs from previous work in the following ways.
First, we consider the loss of data packets and control pack-
ets. Second, we assume that local clocks are not synchronized,
which affects the NAK-avoidance scheme (see Section
3.2). NAK avoidance works less efficiently with this more
realistic assumption. Third, our analysis considers that local
groups may not be confined perfectly, so that local data
or control packets may reach nodes in other local groups.
Finally, our work extends previous analysis by two new tree-based
protocol classes. They are based on aggregated ACKs
to be able to cope with node failures.
3. CLASSIFICATION OF RELIABLE MULTICAST PROTOCOLS
In this section we briefly classify the reliable multicast protocols
analyzed in this paper. A more detailed and more
general description for some of these classes can be found in
[8], [3] and [6].
3.1 Sender-Initiated Protocols
The class of sender-initiated protocols is characterized by
positive acknowledgments (ACKs) returned by the receivers
to the sender. A missing ACK detects either a lost data
packet at the corresponding receiver, a lost ACK packet or
a crashed receiver, which cannot be distinguished by the
sender. Therefore, a missing ACK packet leads to a data
packet retransmission from the sender. We assume that such
a retransmission is always sent using multicast. This protocol
class will be referred to as (A1). Note that the use of
negative acknowledgments, for example to speed up retrans-
missions, does not necessarily mean that a protocol is not
of class (A1). What is important is that positive acknowledgments
are necessary, for example to release data from the sender's
buffer space. An example for a sender-initiated protocol is
the Xpress Transport Protocol (XTP) [12].
3.2 Receiver-Initiated Protocols
In contrast to sender-initiated protocols, receiver-initiated
protocols return only negative acknowledgments (NAKs) instead
of ACKs. As in the sender-initiated protocol class, we
assume that retransmissions are sent using multicast. When
a receiver detects an error, e.g. by a wrong checksum, a skip
in the sequence number or a timeout while waiting for a data
packet, a NAK is returned to the sender. Pure receiver-initiated
protocols have a non-deterministic characteristic,
since the sender is unable to decide when all group members
have correctly received a data packet.
Receiver-initiated protocols can either send NAKs using unicast
or multicast transmission. The protocol class sending
unicast NAKs will be called (N1). An example for (N1)
is PGM [11]. The approach using multicast NAKs (N2) is
known as NAK-avoidance scheme. A receiver that has detected
an error sends a multicast NAK provided that it has
not already received a NAK for this data packet from another
receiver. Thus, in optimum case, only one NAK is
received by the sender for each lost data packet. An example
for such a protocol is the scalable reliable multicasting
protocol (SRM) [1].
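To make the NAK-avoidance idea concrete, the following Python sketch shows the receiver-side rule: schedule a NAK at a random time in the future and cancel it if another receiver's multicast NAK for the same packet arrives first. It is only an illustration; the class name, the timer range and the message handling are simplified assumptions and do not describe any specific protocol implementation.

import random

class NakAvoidanceReceiver:
    def __init__(self, max_backoff=0.5):
        self.max_backoff = max_backoff
        self.scheduled = {}                  # sequence number -> scheduled NAK send time

    def on_loss_detected(self, seq, now):
        # schedule a multicast NAK at a random time to avoid a NAK implosion
        if seq not in self.scheduled:
            self.scheduled[seq] = now + random.uniform(0.0, self.max_backoff)

    def on_nak_heard(self, seq):
        # another receiver already multicast a NAK for this packet: suppress ours
        self.scheduled.pop(seq, None)

    def due_naks(self, now):
        # NAKs whose timers expired without being suppressed are actually sent
        due = [s for s, t in self.scheduled.items() if t <= now]
        for s in due:
            del self.scheduled[s]
        return due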
3.3 Tree-Based Protocols
Tree-based approaches organize the receivers into a tree
structure called ACK tree, which is responsible for collecting
acknowledgments and sending retransmissions. We
assume that the sender is the root of the tree. If a receiver
needs a retransmission, the parent node in the ACK tree
is informed rather than the sender. The parent nodes are
called group leaders for their children which form a local
group. Note that a group leader may also be a child of another
local group. A child which is only a receiver rather
than a group leader is called leaf node.
The first considered scheme of this class (H1) is similar to
sender-initiated protocols since it uses ACKs sent by the receivers
to their group leaders to indicate correctly received
packets. Each group leader that is not the root node also
sends an ACK to its parent group leader until the root node
is reached. If a timeout for an ACK occurs at a group
leader or the root, a multicast retransmission is invoked.
An example of a protocol similar to our definition of (H1)
is RMTP [7]. The second scheme (H2) is based on NAKs
with NAK suppression similar to (N2) and selective ACKs
(SAKs), which are sent periodically for deciding deterministically
when packets can be removed from memory. A SAK
is sent to the parent node after a certain number of packets
are received or after a certain time period has expired, to
propagate the state of a receiver to its group leader. TMTP
[14] is an example for class (H2).
Before the next scheme will be introduced, it is necessary to
understand that (H1) and (H2) can guarantee reliable delivery
only if no group member fails in the system. Assume for
example that a group leader G1 fails after it has acknowledged
correct reception of a packet to its group leader G0
which is the root node. If a receiver of G1 's local group
needs a retransmission, neither G1 nor G0 can resend the
data packet since G1 has failed and G0 has removed the
packet from memory. This problem is solved by aggregated
hierarchical ACKs (AAKs) of the third scheme (H3). A
group leader sends an AAK to its parent group leader after
all children have acknowledged correct reception. After
a group leader or the root node has received an AAK, it can
remove the corresponding data from memory because all
members in this subhierarchy have already received it cor-
rectly. Lorax [4] and RMTP-II [13] are examples for AAK
protocols. Our definition of (H3)'s generic behavior is as
follows:
1. Group leaders send a local ACK after the data packet
is received correctly.
2. Leaf node receivers send an AAK after the data packet
is received correctly.
3. The root node and group leaders wait a certain time
to receive local ACKs from their children. If a timeout
occurs, the packet is retransmitted to all children or selective
to those whose ACK is missing. Since leaf node
receivers send only AAKs rather than local ACKs, a
received AAK from a receiver is also allowed to prevent
the retransmission.
4. The root node and group leaders wait to receive AAKs
from their children. Upon reception of all AAKs, the
corresponding packet can be removed from memory
and a group leader sends an AAK to its parent group
leader. If a timeout occurs while waiting for AAKs, a
unicast AAK query is sent to the affected nodes.
5. If a group leader or leaf node receiver receives further
retransmissions after an AAK has been sent or the
prerequisites for sending an AAK are met, these data
packets are acknowledged by AAKs rather than ACKs.
The same applies for receiving an AAK query that is
replied with an AAK if the prerequisites are met.
In summary, ACKs are used for fast error recovery in case
of message loss and AAKs to clear buffer space. Besides the
AAK scheme, we consider in our analysis of (H3) a threshold
scheme to decide whether a retransmission is performed
using unicast or multicast. The sender or group leader compares
the number of missing ACKs with a threshold pa-
rameter. If the number of missing ACKs is smaller than
this threshold, the data packets are retransmitted using uni-
cast. Otherwise, if the number of missing ACKs exceeds the
threshold, the overall network and node load is assumed to
be lower using multicast retransmission.
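A minimal sketch of this threshold decision, assuming the sender or group leader knows which of its children have not yet acknowledged the packet (the function and parameter names are illustrative only):

def retransmit(missing_children, threshold, send_unicast, send_multicast):
    """Resend a packet either per child (unicast) or once to the whole
    local group (multicast), depending on how many ACKs are missing."""
    if len(missing_children) < threshold:
        for child in missing_children:       # few losses: per-child unicast costs less overall
            send_unicast(child)
    else:
        send_multicast()                     # many losses: one multicast is assumed cheaper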
Our next protocol will be denoted as (H4) and is a combination
of the negative acknowledgment with NAK suppression
scheme (H2) and aggregated acknowledgments (H3). Similar
to (H2), NAKs are used to start a retransmission. Instead
of selective periodical ACKs, aggregated ACKs are
used to announce the receivers' state and allow group leaders
and the sender to remove data from memory. Like SAKs,
we assume that AAKs are sent periodically. We define the
generic behavior of (H4) as follows (a sketch of the resulting
group-leader bookkeeping is given after the list):
1. Upon detection of a missing or corrupted data packet,
receivers send a NAK per multicast scheduled at a random
time in the future and provided that not already a
NAK for this data packet is received before the scheduled
time. If no retransmission arrives within a certain
time period, the NAK sending scheme is repeated.
2. Group leaders and the sender retransmit a packet per
multicast if a NAK has been received.
3. After a certain number of correctly received data pack-
ets, leaf node receivers send an AAK to its group leader
in the ACK tree. A group leader forwards this AAK
to its parent group leader or sender, respectively, as
soon as the same data packets are correctly received
and the corresponding AAKs from all child nodes are
received.
4. The sender and group leaders initiate a timer to wait
for all AAKs to be received. If the timer expires, an
AAK query is sent to those child nodes whose AAK is
missing.
5. If a group leader or leaf node receiver gets an AAK
query and the prerequisites for sending an AAK are
met, the query is acknowledged with an AAK.
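The group-leader bookkeeping behind steps 3 and 4 can be sketched as follows. This is a simplified illustration under the assumption that children and packets are identified by simple keys; the real protocols additionally handle timers, retransmissions and AAK queries.

class GroupLeaderAakState:
    """Tracks, per packet, which children have sent an aggregated ACK (AAK)."""
    def __init__(self, children):
        self.children = set(children)
        self.pending = {}                       # seq -> children whose AAK is still missing
        self.received_ok = set()                # packets this node itself received correctly

    def on_data(self, seq):
        self.received_ok.add(seq)
        self.pending.setdefault(seq, set(self.children))

    def on_child_aak(self, seq, child):
        self.pending.setdefault(seq, set(self.children)).discard(child)

    def aak_ready(self, seq):
        # forward an AAK upwards (and free the buffer) only when the packet was
        # received here and every child in the subtree has acknowledged it
        return seq in self.received_ok and not self.pending.get(seq, self.children)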
4. BANDWIDTH ANALYSIS
4.1 Model
Our model is similar to the one used by Pingali et al. [8] and
Levine et al. [3]. A single sender is assumed, multicasting
to R identical receivers. In case of tree-based protocols,
the sender is the root of the ACK tree. We assume that
nodes do not fail and the network is not partitioned, i.e.
that retransmissions are nally successful. In contrast to
previous work, packet loss can occur on both, data packets
and control packets. Multicast packet loss probability is
given by q and unicast packet loss probability by p for any
node.
Table
1 summarizes the notations for protocol classes
(A1), (N1), (N2), (H1) and (H2). In Table 2 the additional
notations for the protocol classes (H3) and (H4) are given.
We assume that losses at different nodes are independent
events. In fact, since receivers share parts of the multicast
routing tree, this assumption does not hold in real networks.
However, if all classes have similar trees no protocol class is
privileged relative to another one by this assumption.
In the following subsections, the generic protocol classes are
analyzed in detail. Although our work considers more protocols
and a more general system model, the notations and
basic analyzing methods follows [8] and [3].
4.2 Sender-Initiated Protocol (A1)
We determine the bandwidth requirements at the sender,
W^A1_S, and at each receiver, W^A1_R, based on the necessary bandwidth
for sending a single data packet correctly to all re-
ceivers. We assume that the sender waits until all ACKs are
received and then sends a retransmission if necessary.
The bandwidth consumption at the sender is:

  W^A1_S = Σ_{m=1}^{M^A1} W_d(m)  (sending data)  +  Σ_{i=1}^{L~^A1} W_a(i)  (receiving ACKs),   (1)

  E(W^A1_S) = E(M^A1) E(W_d) + E(L~^A1) E(W_a).                                                  (2)
Wd (m) and Wa(i) are the bandwidths required for a data
packet or ACK packet for the m-th or i-th transmission,
Table
1: Notations for the analysis of (A1), (N1),
(N2), (H1) and (H2)
R Size of the receiver set.
Branching factor of a tree or the local group size.
Bandwidth for a data packet, ACK, NAK and
SAK, respectively.
Bandwidth requirements for protocols
w at the sender, receiver, group leader
and overall bandwidth consumption.
Throughput respectively relative bandwidth efciency
for protocols w at the sender, receiver,
group leader and overall system throughput.
S Number of periodical SAKs received by the
sender in the presence of control message loss.
Probability for unicast or multicast data loss at
a receiver, respectively.
Probability for unicast ACK, NAK or multicast
loss.
~ Probability that a retransmission is necessary
for protocol (A1) or (N2), respectively.
ps Probability for simultaneous and therefore un-necessary
NAK sending in (N2), (H2) and (H4).
l Probability for receiving a data or control packet
from another local group.
L w Number of ACKs or NAKs per data packet sent
by receiver r that reach the sender or total number
of ACKs or NAKs per data packet received
from all receivers.
r , N w
Total number of transmissions per data packet
received by receiver r from the parent node or
total number of received data packets from all
local groups, respectively.
Total number of transmissions per data packet
received by a group leader from all local groups.
w Number of necessary transmissions for receiver
r, to receive a data packet correctly in the presence
of data and ACK or NAK loss or total number
of transmissions for all receivers.
O Number of necessary rounds to correctly deliver
a packet to all receivers or to receiver r.
O Total number of empty rounds or empty rounds
for receiver r, respectively.
k Number of NAKs sent in round k.
respectively. M^A1 is the total number of transmissions necessary
to transmit a packet correctly to all receivers in the
presence of data packet and ACK loss, and L~^A1 is the total
number of ACKs received for this data packet. E(W^A1_S) is
the expectation of the bandwidth requirement at the sender.
The only unknowns are E(M^A1) and E(L~^A1):

  E(L~^A1) = R E(M^A1) (1 − q_D)(1 − p_A).                         (3)

This means, the sender gets one ACK per data packet transmission
E(M^A1) from every receiver R, provided that the
data packet is not lost with probability (1 − q_D) and the
ACK is not lost with probability (1 − p_A).
Now, the number of transmissions has to be analyzed. The
probability for a retransmission is:

  p~ = q_D + (1 − q_D) p_A,                                         (4)

i.e. either a data packet is lost (q_D) or the data packet is
received correctly and the ACK is lost ((1 − q_D) p_A). So,
the probability that the number of necessary transmissions
M^A1_r for receiver r is smaller or equal to m (m = 1, 2, ...) is:

  P(M^A1_r ≤ m) = 1 − p~^m.                                         (5)

As the packet losses at different receivers are independent
from each other:

  P(M^A1 ≤ m) = ∏_{r=1}^{R} P(M^A1_r ≤ m) = (1 − p~^m)^R,           (6)

  P(M^A1 > m) = 1 − (1 − p~^m)^R,                                   (7)

  E(M^A1) = Σ_{m=0}^{∞} P(M^A1 > m) = Σ_{m=0}^{∞} [1 − (1 − p~^m)^R].  (8)
E(M A1 ) is the expected total number of necessary transmissions
to receive the data packet correctly at all receivers.
Now E(W^A1_S) is entirely determined. The bandwidth efficiency,
respectively the maximum throughput, of the sender for sending data
packets successfully to a receiver is

  1 / E(W^A1_S).                                                    (9)

Accordingly, the processing requirement for a packet at the
receiver consists of receiving data packets and sending ACKs (10),
with expectation

  E(W^A1_R) = E(M^A1)(1 − q_D) [ E(W_d) + E(W_a) ],                 (11)

where a packet is received with probability (1 − q_D) and each
received packet is acknowledged. The maximum throughput
of a receiver is

  1 / E(W^A1_R).                                                    (12)

The overall system throughput of (A1) is determined by the minimum
of the throughput rates at the sender and receivers:

  min{ 1 / E(W^A1_S), 1 / E(W^A1_R) }.                              (13)
Now we are able to determine the total bandwidth consump-
tion. In contrast to previous work, our definition of total
bandwidth consumption is the bandwidth that is necessary
at the sender and receivers to send and receive messages.
This means, we assume that the internal network structure
is not known and therefore not considered in the analysis.
In [2] and [6], total bandwidth is defined on a per link basis.
Such a definition encompasses the total costs within the network
but has the disadvantage that a network topology has
to be defined with routers and links between routers. Here,
we want to determine the total costs at the communication
endpoints, i.e. the costs for the sender and receivers.
The total bandwidth consumption of protocol (A1) is then
the sum of the sender's and receivers' bandwidth consumptions
  E(W^A1) = E(W^A1_S) + R · E(W^A1_R).                              (14)
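The expectations derived above are straightforward to evaluate numerically. The following Python sketch does so for given loss probabilities; the truncation of the infinite sum and the default packet costs (W_d = 1, W_a = 0.1, as used later in the numerical evaluation) are implementation assumptions of the sketch.

def a1_expectations(R, q_D, p_A, W_d=1.0, W_a=0.1, terms=10_000):
    p_tilde = q_D + (1.0 - q_D) * p_A                                # retransmission probability
    E_M = sum(1.0 - (1.0 - p_tilde**m)**R for m in range(terms))     # expected number of transmissions
    E_L = R * E_M * (1.0 - q_D) * (1.0 - p_A)                        # expected ACKs arriving at the sender
    E_Ws = E_M * W_d + E_L * W_a                                     # sender bandwidth per packet
    E_Wr = E_M * (1.0 - q_D) * (W_d + W_a)                           # receiver bandwidth per packet
    return {"E_M": E_M, "sender": E_Ws, "receiver": E_Wr,
            "throughput": min(1.0 / E_Ws, 1.0 / E_Wr)}

print(a1_expectations(R=10_000, q_D=0.01, p_A=0.01))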
4.3 Receiver-Initiated Protocol (N1)
As in the sender-initiated protocol, data packets are always
transmitted using multicast. In (N1), error control is realized
by unicast NAKs. The sender collects all NAKs received
within a certain timeout period and sends only one
retransmission independent of the number of received or lost
control packets during that round.
The bandwidth requirement at the sender is:
(receiving NAKs)
e
Wn (i) (15)
E(W N1
E( e
The only unknowns are E(M N1 ) and E( e
L N1 ). The number
of transmissions, M N1 , until all receivers correctly receive a
packet is only determined by the probability for data packet
loss analogous to Eq. 8 of (A1) with qD instead of ~
p.
To determine E( e
steps have to be
done. First, the number of transmissions, M N1
r , for a single
receiver is given by the probability qD . This means, M N1
r
counts the number of trials until the rst success occurs. The
probability for the rst success in a Bernoulli experiment at
trial k with probability for success (1 qD ) is:
The necessary number of transmissions for a single receiver
r follows from the Bernoulli distribution and [8]:
E(M N1
E(M N1
r jM N1
E(M N1
r jM N1 r >
r > 1)[E(M N1
r jM N1
Besides the necessary number of transmissions, we have to
introduce the number of rounds, necessary to correctly deliver
a data packet. A round starts with the sending of a
data packet and ends with the expiration of a timeout at
the sender. Normally, there will be one data transmission
in each round. However, if the sender receives no NAKs
due to NAK loss, no retransmission is made and new NAKs
must be sent by the receivers in the next round. O N1 r is
the number of necessary rounds for receiver r. The number
of rounds is the sum of the number of necessary rounds
for sending transmissions M N1
r and the number of empty
rounds O N1 e;r in which all NAKs are lost and therefore no
retransmission is made:
O N1
r +O N1
E(M N1
r ) is given in Eq. 18. E(O N1
e;r ) can be determined
analogous to E(M N1
r ), with probability pk for the loss of all
sent NAKs in round k (see Eq. 25). The expected number
of empty rounds E(O N1
e;r ) is the expected number of empty
rounds after the rst transmission plus the expected number
of empty rounds after the second transmission and so on:
E(O N1
is the expectation for the number of empty rounds
plus the last successful NAK reception at the sender which
is subtracted. Nk , the number of NAKs sent in round k is
given by:
where qD k is the probability for a single receiver that until
round k all data packets are lost. The number of empty
rounds after transmission k is determined by the failure
probability:
pk is the probability that all sent NAKs in round k are lost.
The number of sent NAKs is equal to the number of receivers
qD k R that need a retransmission in round k (see Eq. 24).
e
L N1 is the number of NAKs received by the sender and #1
is the total number of NAKs sent in all rounds:
E( e
Finally, at the receiver we have:
E(W N1
r > 1)[E(O N1
r jO N1
Note that the last, successful transmission is not replied with
a NAK.
The throughput rates are analogous to (A1):
N1
R =E(W N1
R g:
The total bandwidth consumption is:
  E(W^N1) = E(W^N1_S) + R · E(W^N1_R).
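Because the round structure of (N1), with retransmissions, NAK loss and empty rounds, is easy to misread from the formulas alone, a short Monte Carlo cross-check can be useful. The sketch below simulates the generic behaviour described above (independent losses, one multicast retransmission per non-empty round) and estimates the mean number of transmissions and of empty rounds; it is only an illustration with arbitrary parameter values.

import random

def simulate_n1(R, q_D, p_N, runs=2000):
    tx = empty = 0
    for _ in range(runs):
        missing = set(range(R))
        pending_retx = True                        # the initial transmission always happens
        while missing:
            if pending_retx:
                tx += 1
                missing = {r for r in missing if random.random() < q_D}
            if not missing:
                break
            # every receiver still missing the packet unicasts a NAK; the round is
            # empty (no retransmission follows) only if all of these NAKs are lost
            naks_arrive = any(random.random() >= p_N for _ in missing)
            pending_retx = naks_arrive
            if not naks_arrive:
                empty += 1
    return tx / runs, empty / runs

print(simulate_n1(R=100, q_D=0.1, p_N=0.1))        # estimates of E(M^N1) and of the empty rounds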
4.4 Receiver-Initiated Protocol (N2)
In contrast to (N1), this protocol class sends NAKs to all
group members using multicast. Ideally, NAK suppression
ensures that only one NAK is received by the sender. As in
the previous protocol, the sender collects all NAKs belonging
to one round and then starts a retransmission:
(receiving NAKs)
e L N2
Wn (i) (31)
E(W N2
E( e
E(M N2 ) is determined analogous to (A1) and (N1) with
loss probability qD (see Eq. 8). e
contains the number of
necessary and additional NAKs received at the sender:
E( e
Nk , the number of NAKs sent in round k, is the sum of:
NAK of the rst receiver that did not receive the data packet
plus NAK of another unsuccessful receiver that did not receive
the rst NAK packet and sends a second NAK and so
on:
The first receiver sends a NAK provided that the data packet
was lost with probability q_D^k. The second receiver sends a
NAK provided that the data packet was lost and the NAK
of the first receiver was lost (N_{k,1} q_N) or the first receiver
sends no NAK (1 − N_{k,1}), and so on.
In Eq. 38, a perfect system model is assumed in which additional
NAKs are only sent due to NAK loss at receivers.
This means, receivers must have synchronized local clocks
and a defined sending order for NAKs. However, since receivers
are usually not synchronized in real systems it can
occur that NAKs are sent simultaneously. Therefore, we extend
Equations 36-38 with the probability for simultaneous
sending (ps) to:
ps qN ps
The number of rounds O N2 r for receiver r is obtained analogous
to protocol (N1). It is the sum of the number of necessary
rounds for sending transmissions M N2
r and the number
of empty rounds O N2
e;r in which all NAKs are lost and therefore
no retransmission is made. The total number of rounds
O N2 for all receivers can be dened analogous to O N2
O N2
r +O N2
e;r (41)
O
The number of necessary transmissions, M N2
r , for a single
receiver r is given by the probability qD . Analogous to Eq.
of protocol (N1) the expectation is:
E(M N2
The number of empty rounds after transmission k is determined
by the failure probability:
pk is the probability that all sent NAKs in round k are lost.
The expected number of empty rounds E(O N2
e ) is equal to
the expected number of empty rounds after the rst transmission
plus the expected number of empty rounds after the
second transmission and so on. Now, E(O N2
e ) and E(O N2
can be determined analogous to M N2
r (see Eq.
E(O
E(O N2
is the expectation for the number of empty rounds
plus the last successful NAK reception at the sender, which
is subtracted.
At the receiver we have:
E(W N2
r > 1)[E(O N2
r jO N2
r > 1)[E(O N2
r jO N2
#2 is the average number of NAKs sent in each round and
#3 is the mean number of receivers that did not receive a
data packet and therefore want to send a NAK:
where (1=1 pk ) is the number of empty rounds plus the
last successful NAK sending (see Eq. 18, 45 and 46).
The second term in E(W N2
R ) is the processing requirement
to send NAKs, where the considered receiver r is only with
probability #2/#3 the one that sends a NAK. In the third
term the number of sent NAKs is subtracted from the number
of total NAKs to get the number of received NAKs.
The throughput rates are:
N2
R =E(W N2
R g: (50)
The total bandwidth consumption is:
  E(W^N2) = E(W^N2_S) + R · E(W^N2_R).
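For (N2) the interesting quantity is how many NAKs are actually multicast per round once local clocks are unsynchronised. The following Monte Carlo sketch gives a feel for the NAK counts that enter the analysis above; it is an added illustration with simplified assumptions: each unsuccessful receiver is visited in timer order, hears an earlier NAK only if that NAK is not lost (probability q_N of loss), and sends despite a heard NAK with probability p_s to model near-simultaneous scheduling.

import random

def simulate_n2_naks(unsuccessful, q_N, p_s, runs=2000):
    """Average number of multicast NAKs per round for a given number of
    receivers that missed the packet."""
    total = 0
    for _ in range(runs):
        sent = 0
        for _receiver in range(unsuccessful):         # receivers in the order of their timers
            heard_earlier = any(random.random() >= q_N for _ in range(sent))
            if not heard_earlier or random.random() < p_s:
                sent += 1                              # suppression failed: this NAK goes out
        total += sent
    return total / runs

for n in (1, 10, 100):
    print(n, simulate_n2_naks(n, q_N=0.1, p_s=0.1))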
4.5 Tree-Based Protocol (H1)
Our analysis distinguishes between the three dierent kinds
of nodes in the ACK tree, the sender at the root of the
tree, the receivers that form the leaves of the ACK tree
and the receivers that are inner nodes. We will call these
inner receivers group leaders. Group leaders are sender and
receiver as well.
Our analysis of all tree-based protocols is based on the
assumption that each local group consists of exactly B
members and one group leader. We assume further, that
when a group leader has to send a retransmission, the group
leader has already received this packet correctly. The following
subsections analyze the bandwidth requirements at
the sender W H1
S , receivers W H1
R and group leaders W H1
H .
4.5.1 Sender (root node)
e
Wa (i) (52)
E(W H1
E( e
M H1 is the number of necessary transmissions until all
members of a local group have received a packet correctly.
E(M H1 ) is determined analogous (B instead of R) to Eq.
8 of protocol (A1), since every local group is like a sender-based
system. Furthermore, the number of ACKs received
by the sender and group leaders in the presence of possible
ACK loss E( e
similar to E( e
of
R:
E( e L H1
4.5.2 Receiver (leaf node)
E(N H1
r;t ) is the total number of received transmissions at receiver
r and consists mainly of the sent messages from the
parent E(N H1
r ), provided that each local group has its own
multicast address. However, if the whole multicast group
has only one multicast address, retransmissions may reach
members outside of this local group. The probability for receiving
a retransmission from another local group is assumed
to be p l for any receiver. Such received transmissions from
other local groups increase the load of a node. In our analysis
we assume that transmissions from other local groups
do not decrease the necessary number of local retransmis-
sions, since in many cases they are received after a local
retransmission have already been triggered.
First we want to determine the number of group leaders.
The number of nodes R in a complete tree with branching
factor B and height h is

  R = Σ_{i=1}^{h} B^i = (B^{h+1} − B) / (B − 1),

and the tree height follows as

  h = log_B(1 + R(B − 1)/B).

The number of group leaders plus the sender is therefore

  G = Σ_{i=0}^{h−1} B^i = (B^h − 1) / (B − 1) = R / B.
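Under the stated assumption of a complete ACK tree in which every local group has exactly B members and one group leader, these quantities can also be computed by construction, which doubles as a cross-check of the expressions above. The small Python sketch below does this level by level; the function name and the example values are illustrative assumptions.

def ack_tree_stats(R, B):
    """Height h and number of internal nodes G (sender plus group leaders) of a
    complete ACK tree with branching factor B holding R receivers below the root."""
    level, nodes, h, internal = B, 0, 0, 1           # the sender is the first internal node
    while nodes < R:
        h += 1
        nodes += level
        if nodes < R:
            internal += level                        # this whole level consists of group leaders
        level *= B
    return h, internal

print(ack_tree_stats(R=10 + 100 + 1000, B=10))       # -> (3, 111): h = 3 and G = 111 = R / B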
The number of received transmissions E(N H1
r ) from the parent
node at receiver r is:
E(N H1
The total number of received transmissions E(N H1
r;t ) at receiver
r is now:
E(N H1 r;t
Finally, the bandwidth requirement W H1
R for a receiver is:
r
Wa (j) (60)
E(W H1
r )E(Wa
4.5.3 Group leader (inner node)
Since a group leader is a sender and receiver as well, the
bandwidth requirement is the sum of the sender and receiver
bandwidth requirements. However, Wd(1) is not considered
here, since the initial transmission is sent using the multi-cast
routing tree rather than the ACK tree. Furthermore,
a group leader may receive additional retransmissions only
from G 2 group leaders, since its parents group leader and
this group leader itself have to be subtracted.
e
Wa
| {z }
as sender
r
| {z }
as receiver
E(N H1
E(W H1
E( e
E(N H1
E(Wd (1)) E(M H1 )(1 qD )p l E(Wd
The maximum throughput rates H1
R , H1
H for the
sender, receiver and group leader are:
H1
R =E(W H1
system throughput H1 is given by the minimum
of the throughput rates for the sender, receiver and group
leader:
R g: (66)
The total bandwidth consumption of protocol (H1) is then
the sum of the sender's, leaf node receivers' and group lead-
ers' bandwidth consumptions:
(R G+ 1)W H1
4.6 Tree-Based Protocol (H2)
(H2) uses selective periodical ACKs (SAKs) and NAKs with
NAK avoidance. The sender and group leaders collect all
NAKs belonging to one round and send a retransmission
if the waiting time has expired and at least one NAK has
been received. We have to distinguish between the number
of rounds and the number of transmissions. The number of
rounds is equal or greater than the number of retransmis-
sions, since if a sender or receiver receives no NAK within
one round, no retransmission is invoked.
A SAK is sent by the receiver to announce its state, i.e.
its received and missed packets, after a sequence of data
packets have been received. We assume that a SAK is sent
after a certain period of time. Therefore, when analyzing
the processing requirements for a single packet, only the
proportionate requirements for sending and receiving a SAK
(W ) is considered. S is assumed to be the number of SAKs
received by the sender in the presence of possible SAK loss:
4.6.1 Sender (root node)
e
E(W H2
E( e
E(M H2 ) is determined analogous to protocol (N2) (B instead
of R).
To determine E( e
that NAKs are received
from the child nodes of this local group as well as may be
received from other local groups with probability p l (see Eq.
33, 34 and 59):
E( e
Nk , the number of NAKs sent in round k and pk , the failure
probability for empty rounds are obtained analogous to Eq.
44 of (N2). G, the number of group leaders
is obtained analogous to Eq. 57 of (H1).
4.6.2 Receiver (leaf node)
Retransmissions are received mainly from the parent node,
but may also be received from other group leaders. Analo-
gous, NAKs are mainly received from other receivers in the
same local group but may also be received from receivers
in other local groups. The bandwidth requirement for a receiver
is analogous to Eq. 47 of protocol (N2):
E(W H2
r > 1)[E(O H2
r jO H2
r > 1)[E(O H2
r jO H2
| {z }
from this local group
| {z }
from other local groups
#2 and #3 can be obtained analogous to Eq. 48 and 49 of
protocol (N2) with B instead of R.
4.6.3 Group leader (inner node)
As the group leader role contains the sender role and the
receiver role as well, the processing requirements are:
E(W H2
l E(M H2 )(1 qD )E(Wd )
The second and third line in the above equation are the processing
requirements for one other local group. They have to
be subtracted because in contrast to the sender or receivers,
a group leader has a local parent group and local child group
which are already considered for the normal operations.
Finally, the maximum throughput rates are:
H2
R =E(W H2
R g: (75)
The total bandwidth consumption of protocol (H2) is:
(R G+ 1)W H2
4.7 Tree-Based Protocol (H3)
We assume that the correct transmission of a data packet
consists of two phases. In the rst phase, the data is transmitted
and ACKs are collected until all ACKs are received,
i.e. until all nodes have received the data packet. Then the
second phase starts, in which the missing AAKs are col-
lected. Note that most AAKs are already received in phase
one, since AAKs are sent from group leaders as soon as all
children have sent their AAKs. In this case, a retransmission
is acknowledged with an AAK rather than an ACK. So,
only nodes whose AAK is missing must be queried in phase
two.
Table
2: Additional notations for (H3) and (H4)
Bandwidth for an AAK or AAK query packet.
bandwidth for a periodical
AAK or AAK query packet, respectively.
Bandwidth to send a data packet per unicast
or multicast, respectively.
pq Probability for AAK query loss.
AA Probability of a unicast AAK loss.
Current number of receivers that need a re-transmission
Threshold for unicast retransmission. If n k is
smaller than , unicast is used for retransmission
and multicast otherwise.
Probability that n k is smaller than the threshold
for multicast retransmissions and therefore
unicast is used.
Probability that a retransmission is necessary
due to data or ACK loss.
Probability that an AAK query fails.
Mean number of sent unicast messages per
packet retransmission.
m Number of necessary unicast or multicast
transmissions in the presence of failures, respectively
aa Number of sent ACKs or AAKs.
Number of received ACKs or AAKs.
Number of sent AAK queries.
e L w aaq Number of received AAK queries.
Baa Number of receivers in a local group from
which the AAK is missing when phase two
starts.
pc Probability that no AAK can be sent due to
missing AAKs of child nodes.
4.7.1 Sender (root node)
The bandwidth requirement of a sender is:
e
L H3 a
e
Waa
u are the number of necessary multicast or
unicast transmissions, respectively. Wd;m and W d;u determine
the bandwidth requirements for a multicast or unicast
packet transmission. Waa is the necessary bandwidth for an
AAK and e
L H3 aa is the number of received AAKs. The processing
of AAKs is similar to the processing of data packets
and ACKs. If AAKs are missing after a timeout has oc-
curred, the sender or group leader sends unicast AAK query
messages (Waaq ) to the corresponding child nodes. Note
that this processing is started after all ACKs have been received
and no further retransmissions due to lost data packets
are necessary. L H3
aaq is the number of necessary unicast
AAK queries in the presence of message loss.
that unicast is used for retransmissions,
the number of unicast and multicast transmissions are:
Please note that the rst transmission is always sent with
multicast. The probability for a retransmission due to data
or ACK loss is given by:
unicast z }| {
multicast
z }| {
| {z }
data loss
unicast z }| {
multicast
z }| {
pA
| {z }
no data loss but ACK loss
E(M H3 ) is determined by instead of ~
instead of R
analogous to Eq. 8 of protocol (A1). is the threshold for
unicast or multicast retransmissions. If the current number
of nodes nk , which need a retransmission is smaller than the
threshold , then unicast is used for the retransmission. p t
is the probability that the current number of nodes nk is
smaller than the threshold :
is used to obtain M H3 , p t can only be determined
. In this case, parameter p t is unnecessary to
determine M H3 . Nu is the mean number of receivers per
round for which a unicast retransmission is invoked:
E(N H3
r ) is the total number of transmissions that reach receiver
r with unicast and multicast from its parent node in
the ACK tree:
E(N H3
The number of ACKs that reach the sender or group leader
in the presence of ACK loss is given by:
E( e
a
pc is the probability that no AAK can be sent due to missing
AAKs of child nodes. The number of AAK query rounds L1 ,
is determined by the probability ^
p that a query fails:
E(L1 ) can be determined analogous to M A1 of protocol (A1)
(see Eq. 8) with Baa instead of R and
p instead of ~ p. Baa is
the number of receivers, the sender has to query when the
rst AAK timeout occurs, which is equal to the number of
receivers that have not already successfully sent an AAK in
the rst phase:
Baa X
Baa
E(N H3 r
is the probability that no AAK can be
sent in a round or that the AAK is lost. Queries are sent
with unicast to the nodes whose AAK is missing. The total
number of queries in all rounds are:
E(L H3
The number of AAKs received at the sender is the number
of AAKs in the retransmission phase plus the number of
AAKs in the AAK query phase, which is exactly one AAK
from every receiver in Baa (see Eq. 84).
E( e
S ) is entirely determined by:
E(W H3
E( e
a
E( e
aa )E(Waa
4.7.2 Receiver (leaf node)
The bandwidth requirement at the receiver is given by:
a
Wa (j)
aa
e L H3 aaq
Waa (l) +Waaq (l)
r;t is the total number of transmission that reach receiver
r. In contrast to the already obtained N H3
r , additional data
retransmissions are considered from other local groups that
may be received with probability
E(N H3
r
The number of transmissions that are acknowledged with an
ACK, L H3
a , or with an AAK, L H3
aa , are:
r
Here we assume that only transmissions from this local
group are acknowledged. e
the number of AAK queries
received by an receiver are:
e
aaq =Baa E(L H3
where 1=Baa is the probability to be a receiver that gets an
AAK query. Finally, the expectation for a receiver's band-width
requirements is:
E(W H3
a
E( e
4.7.3 Group leader (inner node)
The bandwidth requirement at a group leader consists of the
sender and receiver bandwidth requirements (see Eq. 64):
E(W H3
E(Wd;m (1)) E(N H3
r )p l E(Wd
Finally, the maximum throughput rates are:
H3
R =E(W H3
R g:
The total bandwidth consumption of protocol (H3) is:
(R G+ 1)W H3
4.8 Tree-Based Protocol (H4)
The generic denition of protocol class (H4) is given in Section
3.3. As in (H3), the correct transmission of a data
packet consists of two phases. In the rst phase, the data is
transmitted. If NAKs are received by the sender or group
leaders, retransmissions are invoked. We assume that the re-transmission
phase is nished before the second phase starts.
In this phase AAKs are sent from receivers to their parent
in the ACK tree. Missing AAKs are queried per unicast
messages by the sender and group leaders. In a NAK-based
protocol this is only reasonable if it is done after a certain
number of correct data packet transmissions rather than after
every transmission. Therefore, the costs for sending and
receiving AAKs (Waa; ) as well as the costs for querying
AAKs (Waaq; ) can be set to a proportionate cost of the
other costs.
4.8.1 Sender (root node)
e
Wn (j)
e
aa
E(W H4
E( e
E(L H4
E( e
E(M H4 ) and E( e L H4 ) are determined analogous to protocol
(H2). The number of AAK queries is determined by the
p that a query fails:
The number of query rounds E(L1) can be determined analogous
to M A1 of protocol (A1) (see Eq. 8) with Baa instead
of R and
p instead of ~
p. Baa is the number of receivers,
the sender has to query when the rst AAK timeout at the
sender occurs. Since receivers send one AAK autonomously
after a certain number of successfully receptions, the number
of nodes to query in phase 2 is the number of lost AAKs.
Baa X
Baa
The total number of unicast query messages in all rounds
are:
E(L H4
Using unicast, only those nodes are queried whose AAK is
missing. So nally, the number of received AAKs at the
sender is equal to the number of child nodes in the ACK
tree:
E( e
4.8.2 Receiver (leaf node)
E(W H4
r > 1)[E(O H4
r jO H4
E( e
r > 1)[E(O H4
r jO H4
| {z }
from this local group
| {z }
from other local groups
#2 and #3 can be obtained analogous to (N2) with B instead
of R. E( e
aaq ), the number of received AAK queries and
replied AAKs is (see Eq. 95):
e
aaq =Baa E(L H4
and the number of rounds O H4 is determined analogous to
(N2).
4.8.3 Group leader (inner node)
As the group leader role contains the sender role and the
receiver role as well, the processing requirements are:
E(W H4
l E(M H4 )(1 qD )E(Wd )
Finally, the maximum throughput rates at the sender, re-
ceiver, group leader and overall throughput are:
H4
R =E(W H4
R g: (112)
The total bandwidth consumption of protocol (H4) is:
(R G+ 1)W H4
5. NUMERICAL RESULTS
We examine the relative performance and bandwidth consumption
of the analyzed protocols by means of some numerical
examples. The mean bandwidth costs are set equal to 1
for data packets (Wd , W d;u , Wd;m ), 0.1 for control packets
(Wa , Wn , Waa , Waaq ) and 0.01 for periodical control packets
(W , Waa; and Waaq; ). The following graphs show
the throughput of the various protocol classes relative to
the normalized maximum throughput of 1.
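As an illustration of how such throughput curves can be produced, here is a minimal sketch of our own simplification (not the exact formulas of the analysis above): it uses the classical expected-number-of-transmissions term for a sender-initiated protocol with R receivers and per-packet loss probability p, plugs in the cost weights introduced above, and normalizes the throughput as the reciprocal of the busiest node's expected per-packet work. All function names and the independence assumption are ours.

def expected_transmissions(R, p, max_m=10_000):
    """E[M]: expected multicasts until all R receivers got the packet,
    assuming independent losses (a standard simplification, ours here)."""
    # E[M] = sum_{m>=0} P(M > m) = sum_{m>=0} (1 - (1 - p**m)**R)
    return sum(1.0 - (1.0 - p**m) ** R for m in range(max_m))

def sender_initiated_throughput(R, p, w_data=1.0, w_ctrl=0.1):
    """Normalized throughput of an idealized ACK-based (A1-like) protocol."""
    m = expected_transmissions(R, p)
    # Sender: E[M] data multicasts plus roughly R ACKs per delivered copy.
    w_sender = m * w_data + m * R * (1.0 - p) * w_ctrl
    # Receiver: copies it receives plus the ACKs it returns.
    w_receiver = m * (1.0 - p) * (w_data + w_ctrl)
    return 1.0 / max(w_sender, w_receiver)

if __name__ == "__main__":
    for R in (10, 100, 1000, 10_000):
        print(R, round(sender_initiated_throughput(R, p=0.01), 6))

The ACK processing term grows linearly in R, which is exactly the effect that makes the sender the bottleneck for large receiver groups in the curves discussed next.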
Figure 1.a shows the bandwidth-limited maximum throughput of the sender-initiated protocol (A1) and the receiver-initiated protocols (N1) and (N2). The loss probability for data packets as well as control packets is 0.01 for the dotted lines and 0.1 for the solid ones. The probability for simultaneous NAK sending in (N2) is set to 0.1.
The results in Figure 1.a show that a protocol based on positive
acknowledgments like (A1) is not applicable for large
receiver groups, since the large number of ACKs overwhelms
the sender. The performance of (N1) and (N2) is much
better than (A1)'s performance. Particularly, if packet loss
probability is low, only few NAK messages are returned to
the sender which improves the performance. (N2) with NAK
avoidance scheme provides the best performance of all non-hierarchical
approaches.
In Figure 1.b, the results for the hierarchical protocol classes (H1), (H2), (H3) and (H4) are shown. The number of child nodes is set to 10 for all classes and the probability of receiving packets from other local groups, p l , is set to 0.001. (H3) is shown with a setting that corresponds to protocol (H1) except for the additional aggregated ACKs of (H3); in this setting all retransmissions are sent via multicast.
Figure 1: Throughput of reliable multicast protocols; (a) sender- and receiver-initiated protocols, (b) tree-based protocols (throughput over the number of receivers).
Figure 2: Bandwidth consumption of reliable multicast protocols; (a) sender- and receiver-initiated protocols, (b) tree-based protocols (bandwidth consumption over the number of receivers).
All protocol classes experience a throughput degradation with increasing group sizes although the local group size remains constant. This results from our assumption that a
packet is received with probability p l outside the scope of a local group. With an increasing number of receivers, the number of groups also increases and therefore more packets
from other local groups are received. Note that if each
local group is assigned a separate multicast address for re-
transmissions, no packets from other local groups are received
and therefore p l has to be set equal to 0. In this
case, the throughput of all hierarchical approaches remains
constant.
The protocols with negative acknowledgments and NAK avoidance again provide the best performance. As can further be seen in the figure, the additional overhead for periodical aggregated acknowledgments is very low; therefore (H4) provides almost the same performance as (H2). In
case of (H3), the aggregated acknowledgments are sent after
every correct message transmission. Therefore, the performance
reduction compared to (H1) is more significant than
between (H4) and (H2). If (H3)'s aggregated ACKs are also
sent periodically as in (H4), the performance would be almost
the same as (H1)'s performance. This means that the
additional costs for providing reliability even in the presence
of node failures are small and therefore acceptable for
protocol implementations.
For readability, the result for (H3) with the alternative retransmission threshold is not shown in the figure. With that setting, retransmissions are made using multicast only when two or more nodes are affected, and using unicast otherwise. In this case, (H3) provides better performance, especially for a large number of receivers. For a packet loss probability of 0.1 the performance is equal to (H1)'s.
By comparing Figure 1.a and 1.b it can be seen that tree-based
protocols are superior. Their throughput degradation
with increasing multicast group size is much smaller compared
to non-hierarchical approaches. Furthermore, they
are more robust against high packet loss probabilities.
The following figure depicts the bandwidth consumption of all analyzed protocols. Figure 2.a shows that the bandwidth consumption of (N2) for small group sizes is below (A1)'s requirements. However, for large group sizes and higher loss probability, (N2)'s bandwidth consumption is 2.5 times the bandwidth consumption of (A1). (N1) provides the lowest bandwidth consumption of the three classes.
In Figure 2.b it can be seen that tree-based protocols require less overall bandwidth than non-hierarchical approaches. Tree-based protocols save in most cases about 50% of the bandwidth costs and, compared to (N2), up to 85%
(please note the logarithmic y-axis). The results for (H3) with the alternative retransmission threshold are not shown in the figure; they are in any case lower than those of (H3) with multicast-only retransmissions. For a packet loss probability of 0.01, this variant of (H3) requires about the same bandwidth as (H1). If the packet loss probability is higher than 0.01, its bandwidth consumption is even smaller than that of (H1). For example, with packet loss probability 0.1, its bandwidth consumption is more than 30% below (H1)'s requirements. In contrast to the non-hierarchical approaches, local group sizes are small here, and NAK-based protocols with NAK avoidance require low bandwidth costs for small groups. Therefore, protocols (H2) and (H4) provide the lowest bandwidth consumption.
6. CONCLUSION
We have analyzed the throughput in terms of bandwidth
requirements and the overall bandwidth consumption of
sender-initiated, receiver-initiated and tree-based multicast protocols assuming a realistic system model with data
packet loss, control packet loss and asynchronous clocks.
Of particular importance are the analyzed protocol classes
with aggregated acknowledgments. In contrast to other hierarchical
approaches, these classes provide reliability even
in the presence of node failures.
The results of our numerical examples show that hierarchical
approaches are superior. They provide higher throughput and lower overall bandwidth consumption compared to
sender-initiated or receiver-initiated protocols. The protocol
classes with aggregated acknowledgments lead to only
a small throughput decrease and slightly increased overall
bandwidth consumption compared to the same classes without
aggregated acknowledgments. This means that the additional
costs for providing a reliable multicast service even
in the presence of node failures are small and therefore acceptable
for reliable multicast protocol implementations.
7. REFERENCES
--R
A reliable multicast framework for light-weight sessions and application level framing
A comparison of server-based and receiver-based local recovery approaches for scalable reliable multicast
How bad is reliable multicast without local recovery.
Reliable multicast transport protocol (RMTP).
A comparison of sender-initiated and receiver-initiated reliable multicast protocols
Performance comparison of sender-based and receiver-based reliable multicast protocols
PGM reliable transport protocol speci
An overview of the reliable multicast transport protocol II.
A reliable dissemination protocol for interactive collaborative applications.
--TR
XTP: the Xpress Transfer Protocol
A comparison of sender-initiated and receiver-initiated reliable multicast protocols
A reliable dissemination protocol for interactive collaborative applications
The case for reliable concurrent multicasting using shared ACK trees
A reliable multicast framework for light-weight sessions and application level framing
A comparison of reliable multicast protocols
--CTR
Thorsten Lohmar , Zhaoyi Peng , Petri Mahonen, Performance Evaluation of a File Repair Procedure Based on a Combination of MBMS and Unicast Bearers, Proceedings of the 2006 International Symposium on on World of Wireless, Mobile and Multimedia Networks, p.349-357, June 26-29, 2006 | analysis;receiver-initiated;bandwidth;tree-based;reliable multicast;sender-initiated;AAK |
354704 | Parametric Analysis of Computer Systems. | A general parametric analysis problem which allows the use of parameter variables in both the real-time automata and the specifications is proposed and solved. The analysis algorithm is much simpler and can run more efficiently in average cases than can previous works. | Introduction
A successful real-world project management relies on the satisfaction of various timing and nontiming restraints
which may compete with each other for resources. Examples of such restraints include timely re-
sponses, budget, domestic or international regulations, system configurations, environments, compatibilities,
In this work, we define and algorithmically solve the parametric analysis problem of computer systems, which allows for the formal description of system behaviors and design requirements with various timing and nontiming parameter variables and asks for general conditions on all solutions to those parameter variables.
The design of our problem was influenced by previous work of Alur et al. [AHV93] and Wang [Wang95]
which will be discussed briefly later. Our parametric analysis problem is presented in two parts : an automaton
with nontiming parameter variables and a specification with both timing and nontiming parameter
variables. The following example is adapted from the railroad crossing example and shows how such a
platform can be useful.
The popular railroad crossing example consists of a train monitor and gate-controller. In figure 1, we give
a parametric version of the automaton descriptions of the monitor and controller respectively. The ovals
represent meta-states while arcs represent transitions. By each transition, we label the transition condition
and the clocks to be reset to zero on the transition. The global state space can be calculated as the Cartesian-
product of local state spaces.
The safety requirement is that whenever a train is at the crossing, the gate must be in the D mode (gate
is down). The more money you spend on monitor, the more precise you can tell how far away a train is
approaching. Suppose we now have two monitor types, one costs 1000 dollars and can tell if a train is coming
to the crossing in 290 to 300 seconds; the other type costs 500 and can tell if a train is coming to the crossing
in 200 to 350 seconds.
We also have two gate-controller types. One costs 900 dollars and can lower the gate in 20 to 50 seconds
and skip the U mode (gate is up) when a train is coming to the crossing and the controller is in the R mode
(gate-Raising mode). The other type costs 300 dollars and can lower the gate in 100 to 200 seconds and
cannot skip the U mode once the controller is in the R mode.
Suppose now the design of a railroad crossing gate-controller is subject to the budget constraint: the cost of the monitor ($ M ) and that of the controller ($ C ) together cannot exceed 1500 dollars. We want to make sure that under this constraint the safety requirement can still be satisfied. This can be expressed in our logic by an 82 formula whose consequent is D (gate down) and whose antecedent combines the budget constraint with the condition that a train is at the crossing.
Figure 1: Railroad Gate Controller Example (the train monitor and gate-controller automata).
Here 82 is a modal operator from CTL [CE81, CES86] which means that for all computations henceforth, the following statement must be true. k
Our system behavior descriptions are given in statically parametric automata (SPA) and our specifications
are given in parametric computation tree logic (PCTL). The outcome of our algorithm are Boolean expressions,
whose literals are linear inequalities on the parameter variables, and can be further processed with standard
techniques like simplex, simulated extract useful design feedback.
In the remainder of the introduction, we shall first briefly discuss related work on the subject, and then
sketch an outline of the rest of the paper.
1.1 Related work
In the earliest development [CE81, CES86], people use finite-state automata to describe system behavior and check whether they satisfy specifications given in the branching-time temporal logic CTL. Such a framework is usually called model-checking. A CTL (Computation Tree Logic) formula is composed of binary propositions, Boolean operators (: for negation and - for disjunction), and branching-time modal operators (9U ; 9fl; 8U ; 8fl). 9 means "there exists" a computation. 8 means "for all" computations. U means something is true "until" something else is true. fl means "next state." For example, 9pUq says there exists a computation along which p is true until q is true. Since there is no notion of real-time (clock time), only the ordering among events is considered. The following shorthands are generally accepted besides the usual ones in Boolean algebra: 93OE 1 stands for 9true U OE 1 . Intuitively 3 means "eventually" while 2 means "henceforth."
CTL model-checking has been used to prove the correctness of concurrent systems such as circuits and
communication protocols. In 1990, the platform was extended by Alur et al. to Timed CTL (TCTL) model-checking
problem to verify dense-time systems equipped with resettable clocks [ACD90]. Alur et al. also
solve the problem in the same paper with an innovative state space partitioning scheme.
In [CY92], the problems of deciding the earliest and latest times a target state can appear in the computation
of a timed automaton was discussed. However, they did not derive the general conditions on parameter
variables.
In 1993, Alur et al. embark on the reachability problem of real-time automata with parameter variables
[AHV93]. Particularly, they have established that in general, the problem has no algorithm when three clocks
are compared with parameter variables in the automata [AHV93]. This observation greatly influences the
design of our platform.
In 1995, Wang proposes another platform which extends the TCTL model-checking problem to allow for
timing parameter variables in TCTL formulae [Wang95]. His algorithm gives back Boolean conditions whose
literals are linear equalities on the timing parameter variables. He also showed that his parametric timing
analysis problem is PSPACE-hard while his analysis algorithm is of double-exponential time complexity.
Henzinger's HyTech system developed at Cornell also has parametric analysis power[AHV93, HHWT95].
However in their framework, they did not identify a decidable class for the parametric analysis problem and
their procedure is not guaranteed to terminate. In comparison, our framework has an algorithm which can
generate the semilinear description of the working solutions for the parameter variables.
1.2 Outline
Section 2 presents our system behavior description language : the Statically Parametric Automaton (SPA).
Section 3 defines Parametric Computation Tree Logic (PCTL) and the Parametric Analysis Problem. Section
4 presents the algorithm, proves its correctness, and analyzes its complexity. Section 5 concludes the
paper.
We also adopt N and R + as the sets of nonnegative integers and nonnegative reals respectively.
Statically parametric automata (SPA)
In an SPA, people may combine propositions, timing inequalities on clock readings, and linear inequalities of
parameter variables to write the invariance and transition conditions. Such a combination is called a state
predicate and is defined formally in the following. Given a set P of atomic propositions, a set C of clocks,
and a set H of parameter variables, the syntax of a state predicate j of P , C, and H, has the following syntax
rules: a state predicate j may be false, an atomic proposition p ∈ P, a clock constraint of the form x − y ∼ c or x ∼ c, a linear inequality Σ a i h i ∼ c over the parameter variables, or a Boolean combination of state predicates. Notationally, we let B(P; C; H) be the set of all state predicates on P , C, and H. Note that the parameter variables considered in H are static because their values do not change with time during each computation of an automaton. A state predicate with only parameter-variable literals Σ a i h i ∼ c is called static.
Statically Parametric Automata
A Statically Parametric Automaton (SPA) is a tuple (Q; -) with the following restrictions.
ffl Q is a finite set of meta-states.
is the initial meta-state.
ffl P is a set of atomic propositions.
ffl C is a set of clocks.
ffl H is a set of parameters variables.
function that labels each meta-state with a condition true in that meta-state.
Q is the set of transitions.
defines the set of clocks to be reset during each transition.
defines the transition triggering conditions. k
An SPA starts execution at its meta-state q 0 . We shall assume that initially, all clocks read zero. In between meta-state transitions, all clocks increment their readings at a uniform rate. The transitions of the SPA may be fired when the triggering condition is satisfied. With different interpretations of the parameter variables, the SPA may exhibit different behaviors. During a transition from meta-state q i to q j , for each x ∈ ae(q i ; q j ), the reading of x will be reset to zero. There are state predicates with parameter variables on the states as well as
transitions. These parameters may also appear in the specifications of the same analysis problem instance.
A state s of an SPA A is a mapping from P ∪ C to {true, false} ∪ R + such that for each p ∈ P, s(p) ∈ {true, false}, and for each x ∈ C, s(x) ∈ R + , the set of nonnegative real numbers. k
The same SPA may generate different computations under different interpretations of its parameter variables. An interpretation, I, for H is a mapping from N ∪ H to N such that for all c ∈ N, I(c) = c. An SPA A is said to be interpreted with respect to I when all state predicates in A have their parameter variables interpreted according to I.
Satisfaction of interpreted state predicates by a state
predicate j is satisfied by state s under interpretation I, written as s
a
a
Now we are going to define the computation of SPA. For convenience, we adopt the following conventions.
An SPA A is unambiguous iff for all states s, there is at most one q ∈ Q such that, for some I, s satisfies the state predicate labelling q. Ambiguous SPA's can be made unambiguous by incorporating meta-state names as propositional conjuncts in the conjunctive normal forms of the labelling state predicate of each meta-state. For convenience, from now on, we shall only talk about unambiguous SPA's. When we say an SPA, we mean an unambiguous SPA.
Given an SPA A, an interpretation I for H, and a state s, we let s Q be the meta-state in Q whose labelling predicate s satisfies under I; if there is no meta-state q ∈ Q whose labelling predicate s satisfies, then s Q is undefined.
Given two states s; s 0 , there is a meta-state transition from s to s 0 in A under interpretation I, in symbols
are both defined,
Also, given a state s and a ffi be the state that agrees with s in every aspect except for
all
of interpreted SPA
Given a state s of SPA -) and an interpretation I, a computation of A starting
at s is called an s-run and is a sequence ((s of pairs such that
ffl for each t there is an i 2 N such that t i - t; and
ffl for each integer i - 1, s Q
i is defined and for each real 0 -
ffl for each i - 1, A goes from s i to s i+1 because of
- a meta-state transition, i.e. t
3 PCTL and parametric analysis problem
Parametric Computation Tree Logic (PCTL) is used for specifying the design requirements and is defined
with respect to a given SPA. Suppose we are given an SPA
OE for A has the following syntax rules.
OE := j | OE 1 - OE 2 | :OE 1 | 9OE 1 U-' OE 2 | 8OE 1 U-' OE 2
Here j is a state predicate in B(P; C; H), OE 1 and OE 2 are PCTL formulae, and ' is an element in N ∪ H. Note that the parameter variables appearing as subscripts of modal operators can also be used as parameter variables in the SPA. Also we adopt the following standard shorthands: 83-' OE 1 for 8true U-' OE 1 , and 92-' OE 1 for :83-':OE 1 .
With different interpretations, a PCTL formula may impose different requirements. We write in notations
s I OE to mean that OE is satisfied at state s in A under interpretation I. The satisfaction relation is defined
inductively as follows.
ffl If OE is a state predicate, then s I OE iff OE is satisfied by s as a state predicate under I.
there are an in A, an i - 1, and a
s.t.
- for all
- for all
- for all
- for all
Given an SPA A, a PCTL formula OE, and an interpretation I for H, we say A is a model of OE under I,
written as A j= I OE, iff s I OE for all states s such that s
We now formally define our problem.
Statically Parametric Analysis Problem
Given an SPA A and a specification (PCTL formula) OE, the parametric analysis problem instance for A
and OE, denoted as PAP(A, OE), is formally defined as the problem of deriving the general condition of all
interpretation I such that A j= I OE. I is called a solution to PAP(A; OE) iff A
We will show that such conditions are always expressible as Boolean combinations of linear inequalities of
parameter variables.
4 Parametric analysis
In this section, we shall develop new data-structures, parametric region graph and conditional path graph,
to solve the parametric analysis problem. Parametric region graph is similar to the region graph defined in
[ACD90] but it contains parametric information. A region is a subset of the state space in which all states
exhibit the same behavior with respect to the given SPA and PCTL formula.
Figure 2: Bypassing region v in the conditional path graph (the arc labels J OE 1 (u; v), J OE 1 (v; v) and J OE 1 (v; w) are combined when v is bypassed).
Given a parametric analysis problem for A and OE, a modal subformula OE 1 of OE, and the parametric region graph with region set V , the conditional path graph for OE 1 is a fully connected graph on V whose arcs
are labeled with sets of pairs of the form : (-; T ) where - is a static state predicate and T is an integer
set. Conveniently, we call such pairs conditional time expressions (CTE). Alternatively, we can say that the
conditional path graph J OE 1
for OE 1 is a mapping from V × V to the power set of CTE's. For v; v 0 ∈ V and a CTE (-; T ) in J OE 1 (v; v 0 ): whenever - is satisfied by I and t ∈ T , then for each s ∈ v there is a finite s-run of time t ending at an s 0 ∈ v 0 such that OE 1 is satisfied all the way through the run except at s 0 . In
subsection 4.2, we shall show that all our modal formula evaluations can be decomposed to the computation
of conditional time expressions.
The kernel of this section is a Kleene's closure procedure which computes the conditional path graph. Its
computation utilizes the following four types of integer set manipulations: the union T 1 ∪ T 2 = { t | t ∈ T 1 or t ∈ T 2 }; the sum T 1 + T 2 = { t 1 + t 2 | t 1 ∈ T 1 ; t 2 ∈ T 2 }; the iterated sum i T 1 , which means the addition of i consecutive copies of T 1 ; and the complement of T 1 , i.e., { t ∈ N | t ∉ T 1 }.
It can be shown that all integer sets resulting from such manipulations in our algorithm are semilinear. 1
Semilinear expressions are convenient notations for expressing infinite integer sets constructed regularly.
They are also closed under the four manipulations. There are also algorithms to compute the manipulation
results. Specifically, we know that all semilinear expressions can be represented as the union of a finite number of sets of the form a + c · N = { a + c · i | i ∈ N }. Such a special form is called periodical normal form (PNF). It is not difficult to
prove that given operands in PNF, the results of the four manipulations can all be transformed back into
PNF. Due to page-limit, we shall skip the details here.
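To illustrate the four manipulations, the following sketch is our own finite-window approximation of the exact PNF algorithms that are omitted here: a semilinear set given as PNF terms is expanded extensionally up to a bound, and union, pointwise sum, iterated sum and complement are applied within that window. The bound and all names are ours; the real algorithms work symbolically on the PNF terms.

BOUND = 200  # finite window [0, BOUND); only an illustration of the operations

def pnf(terms):
    """terms: list of (a, c) meaning { a + c*i | i >= 0 } (c = 0 gives {a})."""
    s = set()
    for a, c in terms:
        n = a
        while n < BOUND:
            s.add(n)
            if c == 0:
                break
            n += c
    return s

def union(t1, t2):
    return t1 | t2

def plus(t1, t2):
    return {x + y for x in t1 for y in t2 if x + y < BOUND}

def times(i, t1):
    """i consecutive additions of t1 (i >= 1)."""
    acc = set(t1)
    for _ in range(i - 1):
        acc = plus(acc, t1)
    return acc

def complement(t1):
    return set(range(BOUND)) - t1

if __name__ == "__main__":
    evens = pnf([(0, 2)])
    threes = pnf([(3, 3)])
    print(sorted(plus(evens, threes))[:10])
    print(sorted(complement(union(evens, threes)))[:10])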
The intuition behind our algorithm for computing the conditional path graph is a vertex bypassing scheme.
Suppose, we have three regions u; v; w whose connections in the conditional path graph is shown in Figure 2.
Then it is clear that by bypassing region v, J OE 1 (u; w) should be a superset of the set of CTEs obtained by combining a label in J OE 1 (u; v), finitely many iterations of labels in J OE 1 (v; v), and a label in J OE 1 (v; w).
Our conditional path graph construction algorithm utilizes a Kleene's closure framework to calculate all the
arc labels.
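The vertex-bypassing idea can be pictured with a small, heavily simplified sketch of our own: arc labels are plain sets of path durations (ignoring the static-predicate component and the semilinear encoding), and bypassing each vertex in turn closes the label matrix in Floyd–Warshall style. The truncation bound and all names are assumptions made only for this illustration.

def close_labels(vertices, labels, max_t=50):
    """labels[(u, w)]: set of durations of direct steps u -> w.
    Returns durations of all paths, computed by bypassing each vertex in turn.
    Durations are truncated at max_t to keep the sets finite (our simplification;
    the exact algorithm keeps them as semilinear expressions)."""
    J = {(u, w): set(labels.get((u, w), ())) for u in vertices for w in vertices}

    def plus(A, B):
        return {a + b for a in A for b in B if a + b <= max_t}

    for v in vertices:                     # bypass v
        loops = {0}                        # 0, 1, 2, ... iterations of the self-loop at v
        frontier = {0}
        while frontier:
            frontier = plus(frontier, J[(v, v)]) - loops
            loops |= frontier
        for u in vertices:
            for w in vertices:
                J[(u, w)] |= plus(plus(J[(u, v)], loops), J[(v, w)])
    return J

if __name__ == "__main__":
    V = ["u", "v", "w"]
    direct = {("u", "v"): {1}, ("v", "v"): {2}, ("v", "w"): {1}}
    print(sorted(close_labels(V, direct)[("u", "w")]))   # 2, 4, 6, ... up to max_t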
In subsection 4.1, we extend the region graph concepts of [ACD90] and define the parametric region graph. In subsection 4.2, we define the conditional path graph, present an algorithm to compute it, and present our labeling algorithm for the parametric analysis problem. In subsections 4.3 and 4.4, we briefly prove the
1 A semilinear integer set is expressible as the union of a finite number of integer sets of the form a + c · N, for some a; c ∈ N.
algorithm's correctness and analyze its complexity.
4.1 Parametric region graph
The brilliant concept of region graphs was originally discussed and used in [ACD90] for verifying dense-time systems. A region graph partitions the system state space into finitely many behavior-equivalent subspaces. Our parametric region graphs extend Alur et al.'s region graphs and contain information on parameter variable restrictions. Besides parameter variables, our parametric region graphs have an additional clock ν which gets reset to zero once its reading reaches one. ν is not used in the user-given SPA and is added when we construct the regions, for the convenience of parametric timing analysis. It functions as a ticking indicator for evaluating timed modal formulae of PCTL. The reading of ν is always between 0 and 1, that is, for every state s, 0 ≤ s(ν) ≤ 1.
The timing constants in an SPA A are the integer constants c that appear in conditions such as x − y ∼ c and x ∼ c in A. The timing constants in a PCTL formula OE are the integer constants c that appear in subformulae like x − y ∼ c, x ∼ c, 9OE 1 U-c OE 2 , and 8OE 1 U-c OE 2 . Let K A:OE be the largest timing constant used in both A and OE for the given parametric analysis problem instance. For each δ ∈ R + , we define fract(δ) as the fractional part of δ, i.e., fract(δ) = δ − ⌊δ⌋.
Regions
Given an SPA -) and a PCTL formula OE for A, two states s; s 0 of A, s - =A:OE s 0
(i.e. s and s 0 are equivalent with respect to A and OE) iff the following conditions are met.
ffl For each
ffl For each x \Gamma y - c used in A or OE,
ffl For each
ffl For every x; y
[s] denotes the equivalent class of A's states, with respect to relation - =A:OE , to which s belongs and it is called
a region. k
Note because of our assumption of unambiguous SPA's, we know that for all s Using the
above definition, parametric region graph is defined as follows.
Graph (PR-graph)
The Parametric Region Graph (PR-graph) for an SPA -) and a PCTL formula
OE is a directed graph such that the vertex set V is the set of all regions and the arc set F
consists of the following two types of arcs.
ffl An arc (v; v 0 transitions in A. That is, for every s 2 v, there is an s
such that s ! s 0 .
ffl An arc (v; v 0 ) may be a time arc and represent passage of time in the same meta-state. Formally, for
every s 2 v, there is an s 0 2 v 0 such that
- there is no -
s and -
s, and
Just as in [Wang95], propositional value-changings within the same meta-states are taken care of automatically
For each (v; v 0 ) in F , we let ε(v; v 0 ) = ↑ if, going from states in v to states in v 0 , the reading of ν increments from a noninteger to an integer; ε(v; v 0 ) = ↓ if it increments from an integer to a noninteger; otherwise ε(v; v 0 ) is undefined.
KClosure OE 1 (V; F )
/* It is assumed that for all regions v ∈ V , we know the static state predicate condition L OE 1 (v) which makes OE 1 satisfied at v. */
{
(1) For each (v; w) ∈ F , initialize J OE 1 (v; w) with the CTE (L OE 1 (v), {1}) if ε(v; w) = ↑, and with (L OE 1 (v), {0}) otherwise.
(2) for each v ∈ V , for each u; w ∈ V , extend J OE 1 (u; w) with the CTEs obtained by bypassing v, i.e., by combining labels in J OE 1 (u; v), iterated labels in J OE 1 (v; v), and labels in J OE 1 (v; w).
}
Table 1: Construction of the conditional path graph
Also we conveniently write v Similarly,
we let v Q be the meta-state such that for all s 2 v(v
Since regions have enough informations to determine the truth values all propositions and clock inequalities
used in a parametric analysis problem, we can define the mapping from state predicates to static state
predicates through a region. Formally, given a region v and a state predicate j, we write v(j) for the static
predicate constructed according to the following rules.
ffl v(false) is false.
ffl v(p) is true iff 8s 2
c) is false otherwise.
c) is false otherwise.
a
a
For convenience, we let h-iv be the region in a PR-graph that agrees with v in every aspect except that for
all Given a PCTL formula OE and a path is called a OE-path
(OE-cycle) iff there is an interpretation I such that for each 1 -
4.2 Labeling Algorithm
To compute the parametric condition for a parametric modal formula like 9OE 1 U-' OE 2 at a region, we can
instead decompose the formula into a Boolean combinations of path conditions and then compute those path
conditions. For example, suppose under interpretation I, we know there exists a OE 1 -path v 1
Then a sufficient condition for all states in v 1 satisfying 9OE 1 U-' OE 2 is that I(') - 5-v n
Now we define our second new data structure : conditional path graph to prepare for the presentation of the
algorithm.
Conditional path graph
Given a region graph of OE, the conditional path graph for OE 1 , denoted as
is a mapping from V \Theta V to the power set of conditional time expressions such that for all v; v
there is a finite s-run of time
t ending at an s 0 2 v 0 such that OE 1 is satisfied all the way through the run except at s 0 . k
The procedure for computing J OE 1
() is presented in Table 1. Once the conditional path graph has been constructed using KClosure OE 1 (), we can then turn to the labeling algorithm in Table 2 to calculate
the parametric conditions for the modal formulas properly containing OE 1 . However, there is still one thing
which we should define clearly before presenting our labeling algorithm, that is : "How should we connect
the conditional time expressions in the arc labels to parametric conditions ?" Suppose, we want to examine
if from v to v 0 , there is a run satisfying the parametric requirement of - '. The condition can be derived
as
expressions T in PNF and (numerical or variable)
parameter ' is calculated according to the following rewriting rules.
is a new integer variable never used before.
Note since we assume that the operands are in PNF, we do not have to pay attention to the case of +; ; .
Table 2 presents the labeling algorithm for L OE (v). This algorithm maps pairs of vertices and temporal
logic formulas to a Boolean combination of linear inequalities with parameter variables as free variables. Also
note the labeling algorithm relies on the special case of 92-0 OE j which essentially says there is an infinite
computation along which OE j is always true.
Also the presentation in table 2 only covers some typical cases. For the remaining cases, please check the
appendix.
4.3 Correctness
The following lemma establishes the correctness of our labeling algorithm.
Given PAP(A; OE), an interpretation I for H, and a vertex v in G A:OE , after executing L OE (v) in our labeling algorithm, I satisfies L OE (v) iff OE is satisfied at v under I.
proof : The proof follows a standard structural induction on OE, which we often saw in related model-checking
literature, and very much resembles the one in [Wang95]. Due to page-limit, we shall omit it here. k
4.4 Complexity
According to our construction, the number of regions in GA:OE , denoted as jG A:OE j, is at most 3jQj
coefficient 3 and constant +1 reflect the introduction of ticking indicator -. The
inner loop of KClosure OE 1
will be executed for jG A:OE j 3 times. Each iteration takes time proportional to
(v; w)j2 jJ OE 1 (v;v)j . The conditional path graph arc labels, i.e. J OE 1
roughly corresponds to
the set of simple paths from u to v, although they utilize the succinct representation of semilinear expressions.
Thus according to the complexity analysis in [Wang95], we find that procedure KClosure OE 1
() has complexity
doubly exponential to the size of GA:OE , and thus triply exponential to the size of input, assuming constant
time for the manipulation of semilinear expressions.
We now analyze the complexity of our labeling procedure. In table 2, procedure L OE i () invokes KClosure OE j ()
at most once. Label(A; OE) invokes L OE i () at most jGA jjOEj times. Thus the complexity of the algorithm is
roughly triply exponential to the size of A and OE, since polynomials of exponentialities are still exponential-
ities.
Finally, the PCTL satisfiability problem is undecidable since it is no easier than the TCTL satisfiability problem [ACD90].
(1) construct the PR-graph
(2) for each v 2 V , recursively compute L OE (v);
case (false), L false (v) := false;
case (p) where
case or y is zero in v, evaluate x \Gamma y - c as in the next case; else x \Gamma y - c is evaluated
to the same value as it is in any region u such that (u; v) 2 F .
case
case ( P
a
a
a
case
case
case
(1) KClosure OE j (V; F );
(2) let L 92-0 OE j (v) be W
ii W
case
(1) KClosure OE j (V; F );
(2) let L 9OE j U-' OE k (v) be W
ii W
case
These cases are treated in ways similar to the above case and are left in table ?? which is appended at the
end of the paper.
case
(1) KClosure OE j (V; F );
(2) let L 8OE j U-' OE k (v) be
case
These cases are treated in ways similar to above case and are left in table ?? which is appended at the end
of the paper.
Table
2: Labeling algorithm
5 Conclusion
With the success of CTL-based techniques in automatic verification for computer systems [Bryant86, BCMDH90,
HNSY92], it would be nice if a formal theory appealing to the common practice of real-world projects could
be developed. We feel hopeful that the insight and techniques used in this paper can be further applied to
help verifying reactive systems in a more natural and productive way.
Acknowledgements
The authors would like to thank Prof. Tom Henzinger. His suggestion to use dynamic programming to solve
timing analysis problem triggered the research.
--R
"Automata, Languages and Programming: Proceedings of the 17th ICALP,"
"Proceedings, 5th IEEE LICS."
"Proceedings, 25th ACM STOC,"
Symbolic Model Checking: 10 20 States and Beyond
"Proceedings, Workshop on Logic of Programs,"
Automatic Verification of Finite-State Concurrent Systems using Temporal-Logic Specifications
"Proceedings, 3rd CAV,"
the next generation.
Symbolic Model Checking for Real-Time Systems
"Proceedings, 10th IEEE Symposium on Logic in Computer Science."
--TR
Automatic verification of finite-state concurrent systems using temporal logic specifications
Graph-based algorithms for Boolean function manipulation
Automata for modeling real-time systems
Parametric real-time reasoning
Model-checking in dense real-time
Parametric timing analysis for real-time systems
Minimum and Maximum Delay Problems in Real-Time Systems
Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic
HYTECH
--CTR
Farn Wang , Hsu-Chun Yen, Reachability solution characterization of parametric real-time systems, Theoretical Computer Science, v.328 n.1-2, p.187-201, 29 November 2004 | real-time systems;parameters;verification;model-checking |
354705 | Modelling IP Mobility. | We study a highly simplified version of the proposed mobility support in version 6 of Internet Protocols (IP). We concentrate on the issue of ensuring that messages to and from mobile agents are delivered without loss of connectivity. We provide three models, of increasingly complex nature, of a network of routers and computing agents that are interconnected via the routers: the first is without mobile agents and is treated as a specification for the next two; the second supports mobile agents, and the third additionally allows correspondent agents to cache the current location of a mobile agent. Following a detailed analysis of the three models to extract invariant properties, we show that the three models are related by a suitable notion of equivalence based on barbed bisimulation. Finally, we report on some experiments in simulating and verifying finite state versions of our model. | Introduction
We study the modelling of mobile hosts on a network using a simple process
description language, with the intention of being able to prove properties about a
protocol for supporting mobility. The present case study grew out of our interest in
understanding the essential aspects of some extant mechanisms providing mobility
support.
This work was supported under the aegis of IFCPAR 1502-1. An extended abstract of this paper appears in the Proceedings of CONCUR 98.
y Corresponding author. CMI (LIM), 39 rue Joliot-Curie, F-13453, Marseille, France. amadio@gyptis.univ-mrs.fr. Partly supported by CTI-CNET 95-1B-182, Action Incitative IN-
z IIT Delhi, New Delhi 110016, India. sanjiva@cse.iitd.ernet.in. Partly supported by AICTE 1-52/CD/CA(08)/96-97.
Indeed, the model we study may be considered an extreme simplification of proposals for mobility support in version 6 of Internet Protocols (IP) [IDM91, TUSM94, JP96] 1 . IPv6 and similar mobile internetworking protocols enable messages
to be transparently routed between hosts, even when these hosts may change
their location in the network. The architecture of the model underlying these solutions
may be described as follows: A network consists of several subnetworks,
each interfaced to the rest of the network via a router. Each node has a globally
unique permanent identification and a router address for routing messages to it,
with a mapping associating a node's identifying name to its current router address.
The router associated by default with a node is called its "home router". When
a mobile node moves to a different subnet, it registers with a "foreign" router
administering that subnet, and arranges for a router in its home subnet to act
as a "home" proxy that will forward messages to it at its new "care-of address".
Thus any message sent to a node at its home router can eventually get delivered
to it at its current care-of address. In addition, a mobile node may inform several
correspondent nodes of its current location (router), thus relaxing the necessity
of routing messages via its home subnet. This model, being fairly general, also
applies to several mobile software architectures.
The particular issue we explore here, which is a key property desired of most
mobility protocols, is whether messages to and from mobile agents are delivered
without loss of connectivity during and after an agent's move. Although IP does
not guarantee that messages do not get lost, we model an idealized form of Mobile
IPv6 without message loss, since the analysis presented here subsumes that
required for Mobile IP with possible loss of messages.
We should clarify at the outset that we are not presenting a new architecture
for mobility support; nor are we presenting a new framework or calculus for mo-
bility. Rather, our work may be classified as protocol modelling and analysis: We
take an informal description of an existing protocol, idealize it and abstract away
aspects that seem irrelevant to the properties we wish to check or which are details
for providing a particular functionality, then make a model of the simplified
protocol and apply mathematical techniques to discover the system structure and
its behavioral properties.
We believe that this approach constitutes a useful way of understanding such
protocols, and may assist in the formulation and revision of real-world protocols
for mobile systems. From the informal descriptions of mobility protocols in the
literature, it is difficult to assure oneself of their correctness. As borne out by
our work, the specification and combinatorial analysis of such protocols is too
complicated to rely on an informal justification.
The literature contains various related and other proposals for mobility sup-
port, for example, in descriptions of kernel support for process migration [PM83,
runtime systems for migrant code [BKT92, JLHB88]. A significant
body of work concerns object mobility support in various object-based
software architectures, see, e.g., [Dec86, Piq96, VRHB While these studies
address several other issues relevant to software mobility (e.g., garbage collection),
1 We do not model various aspects of network protocols for mobility. In particular, we totally
ignore security and authentication issues, as well as representational formats and conventions
in network packets, e.g., encapsulation/decapsulation of messages, and tunneling. Broadcast is
not dealt with at all.
we are not aware of any complete modelling and analysis in those settings that
subsumes our work.
We have recently learnt of a "light-weight" formal analysis of the IPv6 protocol
[JNW97]. In that work, Nitpick, a tool that checks properties of finite binary relations
and generates counter-examples, is applied to a finite instance of an abstract
version of the IPv6, to verify that messages do not travel indefinitely in cycles 2 .
The approach is quite similar in spirit to our work summarized in Appendix B,
and uses inductive invariants to verify a cache acyclicity property. This property
is one of those that arise in our analysis presented in x4.3. The Nitpick analysis
focuses on the particular cache management policy suggested for IPv6 together
with timestamping of messages (which requires a protocol for approximating a
global clock), whereas our analysis shows that any cache management policy that
satisfies a particular invariant will ensure correctness.
The literature also contains a number of frameworks for describing mobility
protocols, such as the extension of Unity called Mobile Unity [MR97, PRM97,
RMP97]. One should distinguish the "analysis of protocols for mobility" from
the "definition of models or calculi for mobility"; mobility protocols provide an
implementation basis for the latter, just as, e.g., garbage collection algorithms
provide an implementation basis for functional programming. While we agree that
frameworks including the dynamic generation of names and processes as primitive
operations, such as the -calculus and related formalisms [MPW92, AMST97],
may be suitable for describing mobile systems, we believe that our work provides
some evidence for the assertion that these primitives are not always necessary
nor always appropriate for describing and analyzing mobility protocols. Further,
there is no need to develop an "ad hoc" (in the original, non-pejorative sense)
formalism for analyzing mobility protocols.
The structure of the paper reflects our analysis methodology. After introducing
our language for modelling the protocol (x2), we present a model of the
protocol (x3), and then look for its essential structure in the form of a big invariant
(x4). From this analysis, we gather some insight on why the protocol
works correctly, and suggest some variations. We present the protocol model in
stages, giving three models of increasingly complex nature of a network of routers
and computing agents that are interconnected via the routers. In the first (x3.1),
which we call Stat, computing agents are not mobile. We extend Stat to a system
Mob where agents can move from a router to another (x3.2). Finally, to reduce
indirection and to avoid excessive centralization and traffic congestion, we extend
the system Mob to a system CMob, where the current router of an agent may be
cached by its correspondent agents (x3.3). In x4, we analyze these three different
models, and establish the correspondence between them by showing that systems
Stat, Mob and CMob are barbed bisimilar, with respect to a suitable notion of
observation. We conclude in x5, by summarizing our contributions, recalling the
simplifications we have made, and reporting on some simulation and automatic
verification experiments on a finite-state formulation of the protocol.
2 In IPv6, messages can in fact travel in cycles. Consider the case where a node keeps moving in a ring and a message is always being forwarded to it one step behind.
The paper need not be read sequentially. By reading x3.1, x3.2 and glancing at x4.1, x4.2 the reader will have a general idea of the basic systems Stat and Mob (in particular Figure 4 should provide a good operational intuition) and of
the techniques we apply in their analysis. The more challenging system CMob,
whose size is about twice the size of Mob, is described in x3.3 and analyzed in
x4.3. The reader motivated by the formal analysis, will have to take a closer look
at the invariants described in Figures 10, 12. Understanding the invariants is
the demanding part; the related proofs (which are in the appendix) are basically
large case analyses that require little mathematical sophistication. The footnotes
comment on the relationship between our model and the IP informal specification.
They are addressed mainly to the reader familiar with IP, and they can be skipped
at a first reading. Finally, the reader interested in the finite state version of
the protocol, can get a general overview of the issues in x5, and more details in
Appendix
B.
2 The Process Description Language
We describe the systems in a standard process description language. The notation
we use is intended to be accessible to a general reader, but can be considered an extension
of a name-passing process calculus with syntactic sugar. A system consists
of some asynchronous processes that interact by exchanging messages over channels
with unlimited capacity (thus sending is a non-blocking operation, whereas
attempting to receive on an empty channel causes the process to block). Messages
in the channels can be reordered in arbitrary ways. Processes are described as
a system of parametric equations. The basic action performed by a process in
a certain state is: (i) (possibly) receiving a message, (ii) (possibly) performing
some internal computation, (iii) (possibly) emitting a multiset of messages, and
(iv) going to another state (possibly the same).
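For intuition, here is a minimal executable sketch of our own (not part of the formal model) of this communication discipline: channels are unbounded multisets, sending never blocks, and a receive nondeterministically picks any pending message, so reordering comes for free. All names, including the example process, are ours.

import random
from collections import defaultdict

channels = defaultdict(list)          # channel name -> multiset (list) of messages

def send(chan, msg):
    channels[chan].append(msg)        # non-blocking send

def try_receive(chan):
    """Return some pending message (arbitrary order), or None if the channel is empty."""
    buf = channels[chan]
    if not buf:
        return None                   # a real process would stay blocked here
    return buf.pop(random.randrange(len(buf)))

# One process step: possibly receive, compute, emit a multiset, move to a new state.
def step_echo(state):
    msg = try_receive("r")
    if msg is None:
        return state
    send("o", ("obs", msg))           # observe the message on the distinguished channel o
    return state

if __name__ == "__main__":
    for i in range(3):
        send("r", ("msg", "b", "r", "a", "r0", f"d{i}"))
    s = "A"
    for _ in range(5):
        s = step_echo(s)
    print(channels["o"])              # the three messages, possibly reordered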
There are several possible notations for describing these actions; we follow the
notation of CCS with value passing. We assume a collection of basic sorts, and
allow functions between basic sorts to represent cache memories abstractly. The
functions we actually need have a default value almost everywhere and represent
finite tables.
Let names, values of basic sorts, and functions from
basic sorts to sorts. x stands for a tuple x . The expressions T;
over name equality tests; are process identifiers, and V;
value domains. Processes are typically denoted by are specified by
the following grammar:
Here, 0 is the terminated process, x(y).p is the input prefix, x̄y is a message, | is the (asynchronous) parallel composition operator, and [T ]p, p 0 is a case statement.
We write \Pi i2I p i to denote an indexed parallel composition of processes. X(x) is
a process identifier applied to its actual parameters; as usual, for every process identifier X there is a unique defining equation X(x) = p such that all variables occurring in p are contained in {x}. ⊕ is the internal choice operation, where in p
we substitute a non-deterministically chosen tuple of values (from appropriate
domains, possibly infinite) for the specified tuple of variables. We use internal
choice to abstract from control details (note that internal choice can be defined
in CCS as the sum of processes guarded by a - action).
Being based on "asynchronous message passing over channels", our process description
language could be regarded as a fragment of an Actor language [AMST97]
or of an asynchronous (polyadic) -calculus [HT91]. The main feature missing is
the dynamic generation of names. As we will see in x3.2, it is possible to foresee
the patterns of generation, and thus model the system by a static network of
processes.
We define a structural equivalence j on processes as the least equivalence relation
that includes: ff-renaming of bound names, associativity and commutativity
of j, the equation unfolding and:
Reduction is up to structural equivalence and is defined by the following rules:
These rules represent, respectively, communication (a message x̄v meets an input x(y).p and yields [v/y]p), internal choice (⊕ selects some tuple of values from the domain), and compatibility of reduction with parallel contexts (if p reduces to p 0 then p | q reduces to p 0 | q). Using these rules, we can reduce a
process if and only if (i) employing structural equivalence we can bring the process
to the form p j q, and (ii) the first or second rule applies to p.
We introduce the following abbreviations:
if
3 The Model
We describe three systems Stat , Mob, and CMob. All three consist of a collection
of communicating agents that may interact with one another over the network.
Each agent is attached to a router, with possibly many agents attached to the
same router. We assume that each entity, agent or router, has a globally unique
identifying name. For simplicity, we assume a very elementary functionality for
the agents - they can only communicate with other agents, sending or receiving
messages via the routers to which they are attached. The agents cannot communicate
directly between themselves, all communication being mediated by the
routers. We assume that routers may directly communicate with one another,
abstracting away the details of message delivery across the network. The communication
mechanism we assume is an asynchronous one, involving unbounded
buffers and allowing message overtaking.
We assume a collection of names defined as the union of pairwise disjoint sets
Agent Names
The set CN of Control Directives has the following elements (the Control Directives
that we consider in Stat consist of exactly msg, to indicate a data message;
the directives fwdd and upd will be used only in CMob):
msg message regd registered infmd informed fwdd forwarded
immig immigrating repat repatriating mig migrating upd update
The sets AN and DN are assumed to be non-empty. The elements of RN and
LAN are channel names that can carry values of the following domain (note that
the sort corresponding to the set RN is recursively defined):
AN \Theta RN \Theta AN \Theta RN \Theta DN
Elements of this domain may be interpreted as:
[control directive; to agent ; at router ; from agent ; from router ; data ]
We often write x to stand for the tuple [x
indicates that the name is irrelevant ("don't care''). The tables L and H are used
for the address translation necessary to route a message to its destination. L is
an injective function that gives the local address for an agent at a given router,
H computes the "home router" of an agent.
Tables
We denote with obs(x) an atomic observation. If z[x] is a message, we call
the triple [x its observable content (original sender, addressee and data).
We assume a distinguished channel name o on which we can observe either the
reception of a message or anomalous behavior, represented by a special value ffl.
3.1 The system without mobility Stat
In
Figure
1, we present (formally) the system Stat .
Agents An agent A(a) either receives a message from its home router on its local
address and observes it, or it generates a message to a correspondent agent that
it gives to its home router for delivery to the correspondent agent via the latter's
AN \Theta RN \Theta AN \Theta RN \Theta DN
in (a)
in r 0 [msg;
A in (a)
in
Router
in lx j Router (r)
Figure
1: System without mobility
home router. L(H(a); a) represents the local address of the agent a in its home
subnet 3 .
Router The router examines an incoming message, and if it is the destination
router mentioned in the message, accordingly delivers it to the corresponding
agent. Otherwise it sends it to the appropriate router. L(r; x 2 ) is the local address
of x 2 , the addressee of the message, whereas x 3 is the destination router.
3.2 The system with mobility Mob
We now allow agents to migrate from one router (i.e., subnet) to another. While
doing so, the agents and routers engage in a handover protocol [JP96]. When
an agent moves to another router, a proxy "home agent" at its home router 4
forwards messages intended for the mobile agent to a "care-of address" 5 , the
agent's current router. To avoid message loss, the forwarding home agent should
have an up-to-date idea of the current router of the mobile agent. Hence when
a mobile agent moves, it must inform the home agent of its new coordinates. In
the first approximation, we model all messages addressed to a mobile agent being
3 We have used nondeterminism to model actions arising from the transport or higher layers
corresponding to processing a received message, or generating a message to a correspondent
agent. Communication on the LAN channels abstracts link-level communication between the
router and the agent. We have a simplifying assumption that a node can be on-link to only one
router.
4 For simplicity, we identify the routers serving as mobility agents / proxies with the routers
administering a subnet. We also assume that each router is always capable of acting as a home
or foreign agent.
5 We model only what are called "foreign agent care-of addresses" and not "co-located care-of
addresses" in IPv6 parlance.
forwarded via the home agent; later we will consider correspondent agents caching
the current router of a mobile agent. The router description remains unchanged.
We observe that the migration of a mobile agent from one router to another
can be modelled "statically": for each router, for each agent, we have a process
that represents the behavior of a mobile agent either being present there or absent
there, or that of a router enacting the role of a forwarder for the agent, routing
messages addressed to that agent to its current router. Migration may now be described
in terms of a coordinated state change by processes at each of the locations
involved 6 .
Although the model involves a matrix of shadow agents running at each router,
it has the advantage of being static, in terms of processes and channels, requiring
neither dynamic name generation nor dynamic process generation. The conceptual
simplicity of the model is a clear advantage when carrying on proofs which
have a high combinatorial complexity, as well as when attempting verification
by automated or semi-automated means. For instance, the only aspect of the
modelling that brings us outside the realm of finite control systems is the fact
that channels have an infinite capacity, and there is no bound on the number of
messages generated. Starting from this observation, it is possible to consider a
revised protocol which relies on bounded channels (see Appendix B).
In the commentary below, we refer to various processes as agents. Note that
only the agents Ah, Ah in , Ma and Ma in correspond to "real" agents, i.e., the
behavior of mobile nodes. The others may be regarded as roles played by a router
on a mobile node's behalf. Their analogues in IPv6 are implemented as routers'
procedures that use certain tables.
States of the agent at home We describe an agent at its home router in
Figure
2.
Ah The mobile agent is at its home base. It can receive and send messages,
as in the definition of A(a) in Figure 1, and can also move to another router.
When the agent "emigrates", say, to router u, it changes state to Ham(a). We
model the migration by the agent intimating its "shadow" at router u that it is
"immigrating" there, and to prepare to commence operation 7 .
Ham The mobile agent during emigration. We model the agent during "emigra-
tion" by the state Ham(a). During migration, messages addressed to the agent
may continue to arrive; eventually, these messages should be received and handled
by the mobile agent. The emigration completes when the shadow agent at the
target site registers (by sending control message regd) its new care-of router
at the home base. The agent is ready to operate at that foreign subnet once it
6 Thus, our formalization of the migration of an agent, involving the small coordinated state
change protocol, may be considered an abstraction (rather than a faithful representation) of some
of the actions performed when a mobile node attaches itself to a new router and disengages itself
from an old one.
7 Registration is treated in a simple fashion using the immig and repat messages, ignoring
details of Agent Discovery, Advertisement, Solicitation, and protocols for obtaining care-of ad-
dresses. Deregistration is automatic rather than explicit. The issue of re-registration is totally
ignored.
in r 0 [msg;
in if
Ah in
in
in
Haf (a;
in
Router(r) (as in Figure 1)
Figure
2: States of the agent at home
receives an acknowledgement from the home agent (control message infmd). The
control messages (regd and infmd) are required to model the coordinated change
of state at the two sites 8 . The home agent filters messages while waiting for the
regd message; this filtration can be expressed in our asynchronous communication
model by having other messages "put back" into the message buffer, and
remaining in state Ham(a).
Haf The home agent as a forwarder. The home agent forwards messages to the
mobile agent at its current router 9 (via the routers of course), unless informed
by the mobile agent that it is moving from that router. There are two cases
we consider: either the mobile agent is coming home ("repatriation") or it is
migrating elsewhere.
States of the agent away from home We describe the agents at a foreign
router in Figure 3.
Idle If the agent has never visited. The Idle state captures the behavior of the
shadow of an agent at a router it has never visited. If the agent moves to that
8 These messages may be likened to the "binding update for home registration" and "binding
acknowledgement from home".
9 This is the primary care-of address.
in
in
in
\Phi c2fin;out;mvg;y2AN;w2DN ;u2RN
in r[msg;
in if
if u 6=
Ma in (a;
in
Figure
3: States of the agent away from home
router, indicated by the control message immig, then the shadow agent changes
state to Bma(a; r), from where it will take on the behavior of mobile agent a
at the foreign router r. Any other message is ignored, and indeed it should be
erroneous to receive any other message in this state.
Fwd If the agent is not at foreign router r, but has been there earlier. This state
is similar to Idle , except that any delayed messages that had been routed to the
agent while it was at r previously are re-routed via the home router 10 . This state
may be compared to Haf , except that it does not have to concern itself with the
agent migrating elsewhere.
Bma Becoming a foreign mobile agent. Once the protocol for establishing movement
to the current router is complete, the agent becomes a foreign mobile agent.
Messages are filtered looking for an acknowledgement from the home agent that
it is aware of the mobile agent's new current router. Once the home agent has acknowledged
that it has noted the new coordinates, the mobile agent may become
operational 11 .
Ma The mobile agent at a foreign router. As with the mobile agent at its home
base Ah(a), the mobile agent may receive messages, send messages, or move away.
The behavior of the mobile agent in state Ma is similar to that of Ah except that
during movement, different control messages need to be sent to the target site
depending on whether it is home or another site. If the target site is the home
base, then a repat message is sent. Otherwise the target site is intimated of the
wish to "immigrate". The agent goes into the state Fwd .
In the upper part of Figure 4 we describe the possible transitions that relate
to control messages, not including filtering, forwarding, and erroneous situations.
We decorate the transitions with the control messages that are received (-) and
emitted (+). In the lower part of Figure 4 we outline the three basic movements
of an agent a: leaving the home router, coming back to the home router, and
moving between routers different from the home router.
10 Since messages forwarded by the home agent may get arbitrarily delayed in transit, it is
important that the mobile agent, in addition to informing its home agent of its current router,
arrange for a forwarder at its prior router to handle such delayed messages. This point is the
only major difference between our model and the IPv6 proposal. In order not to lose messages,
we require a forwarder at any router where the mobile agent has previously visited. The default
target for forwarding is the home router. In the Mobile IPv6 proposal, however, it is not
mandatory for the mobile agent to arrange for a forwarder at the previous router, and if a
message reaches a router that had previously served as a foreign agent, the message may be
dropped. This is permissible in the context of IP since dealing with lost messages is left to the
transport and higher layers. Our analysis shows that Mobile IP can use our default policy of
forwarding to the home router, without messages traversing cycles indefinitely, but at the cost
of some increase in the number of hops for a message. The need for forwarders is, of course,
well known in the folklore regarding implementation of process migration.
11 In IPv6 a mobile node may begin operation even before it has registered its new location
with the home agent or received an acknowledgement from the home router. Correct updates of
the primary care-of address at the home router are achieved using time-stamping of messages,
which in turn requires synchronized clocks. In contrast, our asynchronous communication model
makes no timeliness assumptions and permits message overtaking. Hence our protocol requires
an acknowledgement from home before permitting further migrations.
Figure 4: Control transitions (upper part: transitions decorated with received (-) and emitted (+)
control messages; lower part: I, Leaving home; II, Coming home; III, Moving between routers different
from the home router)
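To make the home-side portion of these control transitions concrete, the following is a minimal Python sketch of ours (not part of the formal model): the states Ah, Ham and Haf react to the control messages named in Figure 4, and the "move" event stands for the agent's own decision to emigrate.

```python
from enum import Enum, auto

class HomeState(Enum):
    AH = auto()    # mobile agent at home
    HAM = auto()   # agent emigrating; home base awaiting regd
    HAF = auto()   # home agent acting as forwarder

def home_step(state, event):
    """One control transition at the home router; returns (new_state, emitted messages)."""
    if state is HomeState.AH and event == "move":      # agent decides to emigrate
        return HomeState.HAM, ["immig"]
    if state is HomeState.HAM and event == "regd":     # shadow registered abroad
        return HomeState.HAF, ["infmd"]
    if state is HomeState.HAF and event == "mig":      # agent moved to yet another router
        return HomeState.HAF, ["infmd"]
    if state is HomeState.HAF and event == "repat":    # agent coming home
        return HomeState.AH, []
    return state, []                                    # everything else: filter or forward
```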
We briefly describe how our "static" description that requires a thread for each
agent at each router relates to a more "dynamic" model that is more natural from
a programming viewpoint. First we observe that an agent's name is obtained by
combining a router's name with a local identifying name. The computation of
function H is distributed, in that an agent's name contains sufficient information
for computing its home router's name. Further, our infinite name space of agents
is a virtual representation of a finite location name space with dynamic generation
of names at each location.
As observed earlier, the only actual processes are Ah, Ah in , Ma and Ma in ,
which run in parallel with the routers. The other "agents" are threads run on the
router. The Ham thread is spawned on the home router when Ah wishes to move;
this thread then becomes Haf, a thread that forwards messages to the mobile
agent and terminates when the agent returns.
Each router maintains a list of agents for whom it serves as a home router,
with their current locations as well as a list of mobile agents currently visiting.
The default policy of a router is to deliver messages to agents actually present
there, to forward messages to mobile agents for whom it serves as a home router,
and to otherwise forward the message to the target agent's home router. Messages
to a non-existent agent trigger an error.
As our analysis will show, the only message an Idle thread can receive is an
immig message. So this thread need not exist. Instead, on receiving an immig
message, the router spawns a Bma thread, updating the list of agents actually
present there. When Ma moves away from a router, it notifies the router to spawn
a Fwd thread. In practice, the Fwd thread will synchronize with the router
to empty the buffer of messages left behind by the agent and then terminate.
Following this implementation, the number of threads running at a router r is
proportional to the number of agents whose home is r or who are currently visiting
r.
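A rough Python rendering of this dynamic view is sketched below. The table names, the message kinds and the home_router helper are our own shorthand for the bookkeeping described above, not definitions from the paper; in particular the "@" naming convention is only an assumption used to make the H function computable in the sketch.

```python
def home_router(agent):
    """The H function: for this sketch we assume an agent's name embeds its home router."""
    return agent.split("@")[1]

class DynamicRouter:
    def __init__(self, name):
        self.name = name
        self.home_of = {}      # agents homed here -> their current router
        self.visiting = set()  # mobile agents currently present at this router

    def on_message(self, kind, agent, payload=None):
        if kind == "immig":                     # spawn a Bma thread: agent arriving here
            self.visiting.add(agent)
        elif kind == "departed":                # agent left: stands in for a Fwd thread
            self.visiting.discard(agent)
        elif kind == "msg":                     # data message addressed to `agent`
            if agent in self.visiting:
                self.deliver(agent, payload)                     # deliver locally
            elif agent in self.home_of:
                self.forward(self.home_of[agent], agent, payload)  # home forwarder
            else:
                self.forward(home_router(agent), agent, payload)   # default policy

    def deliver(self, agent, payload): ...
    def forward(self, router, agent, payload): ...
```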
3.3 The system with caching CMob
The previous system suffers from overcentralization. All traffic to an agent is
routed through its home router, thus creating inefficiencies as well as poor fault
tolerance. So, correspondent agents can cache the current router of a mobile agent
[JP96]. The agents' definitions are parametric in a function f : AN → RN, which
represents their current cache. The cache is used to approximate knowledge of
the current location of an agent; this function parameter can be implemented by
associating a list with each agent 12 .
We now use control directives fwdd and upd; the former indicates that the
current data message has been forwarded thus pointing out a "cache miss", the
latter suggests an update of a cache entry, following a cache miss 13 . An agent may
also decide to reset a cache entry to the home router 14 . Note that the protocol does
not require the coherence of the caches. In case of cache miss, we may forward the
message either to the home router (which, as in the previous protocol, maintains
an up-to-date view of the current router) or to the router to which the agent has
moved.
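As a rough illustration of this cache discipline, the sketch below (ours; the method names are not from the paper) shows a correspondent keeping the function f as a dictionary with the home router H(target) as the default, and reacting to fwdd and upd directives.

```python
class CorrespondentCache:
    """Per-agent approximation of a mobile agent's current router (the function f)."""
    def __init__(self, home_router):
        self.home_router = home_router   # the H function
        self.f = {}                      # explicit entries; default is H(target)

    def route_to(self, target):
        return self.f.get(target, self.home_router(target))

    def on_fwdd(self, target):
        # our data message was forwarded: a cache miss; the entry may be stale
        pass                             # the protocol forces no particular action here

    def on_upd(self, target, router):
        self.f[target] = router          # suggested binding update after a miss

    def reset(self, target):
        self.f.pop(target, None)         # non-deterministic reset back to H(target)
```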
We present in Figure 5 the new definitions of the agent at home. Note the use of the directives fwdd and
upd to update the cache and to suggest cache updates. In Figures 6 and 7 we present the modified
definitions for the agent away from home. We note the introduction of two extra states: Fwd in (a; ...)
and Mam(a; r). To model timing out of cached entries by a forwarder, an extra state Fwd in (a; ...) is
introduced. Non-determinism is used in Fwd and Ma in to model possible resetting or updating of cache
entries 15 . Mam(a; r) is an extra state that we need when an agent moves from a router different from
the home router. Before becoming a forwarder to the router to which the agent moved, we have to make
sure that the agent has arrived there, otherwise we may forward messages to an Idle(a; r') process,
thus producing a run-time error (this situation does not arise in system Mob because we always forward
to the home router).
12 When moving to another router, we could deliver the current cache with the message immig. In the
presented version we always re-start with the default cache H.
13 A fwdd message can be regarded as having been tunnelled, while a upd message is a binding update.
14 In IPv6, the validity of a cache entry may expire. In the informal description of the protocol, the
update and deletion of a cache entry are often optional operations. We model this by using internal
choice. No messages to reset a cache entry (binding deletion updates) are ever sent out, nor are
negative acknowledgements sent out. We also note that maintaining the "binding update list" is not
essential to the protocol, but is only a pragmatic design choice. Instead, an agent may
non-deterministically decide to reset a cache entry, thus abstracting from a particular cache
management mechanism.
15 Timing out of cache entries is modelled using non-determinism, rather than by explicit representation
of time stamps in a message. Note that in the Mobile IP protocol no hypothesis is made regarding the
coordination of the clocks of the agents, so it seems an overkill to introduce time to speak about
these time stamps.
Figure 5: Modified control for agent at home with caching
Figure 6: Modified control for agent away from home with caching, part I
4 Analysis
We now analyze the three different systems Stat, Mob, and CMob. In each case, the
first step is to provide a schematic description of the reachable configurations, and
to show that they satisfy certain desirable properties. Technically, we introduce a
notion of admissible configuration, i.e., a configuration with certain properties, and
go on to show that the initial configuration is admissible, and that admissibility
is preserved by reduction.
A crucial property of admissible configurations for Mob and CMob is control
stabilization. This means that it is always possible to bring these systems to a
situation where all migrations have been completed (we can give precise bounds
on the number of steps needed to achieve this). We call these states stable. Other
interesting properties we show relate to the integrity and delivery of messages. The control
stabilization property of admissible configurations is also exploited to build (barbed) bisimulation
relations, with respect to a suitable notion of observation, between Stat and Mob, and between Stat
and CMob.
Figure 7: Modified control for agent away from home with caching, part II
4.1 Analysis of Stat
Figure 8 presents the notion of admissible configuration for Stat. We will write
s:Rt, s:Ob, s:Ag, and s:Ms to denote the state of the routers, atomic observa-
tions, agents, and data messages, respectively, in configuration s. We will abuse
notation, and regard products of messages as multisets, justified since parallel
composition is associative and commutative. When working with multisets we
will use standard set-theoretic notation, though operations such as union and
difference are intended to take multiplicity of the occurrences into account.
We assume that #RN ≥ 3, to avoid considering degenerate cases when establishing the correspondence
between Stat and Mob (if #RN ≤ 2, then the transitions III in Figure 4 cannot arise).
Figure 8: Admissible configurations for Stat
Proposition 4.1 The initial configuration Stat is admissible, and admissible configurations
are closed under reduction.
By the definition of admissible configuration, we can conclude that the error
message offl is never generated (a similar remark can be made for the systems Mob
and CMob, applying theorems 4.5 and 4.13, respectively).
Messages do not get lost or tampered with.
Corollary 4.2 (message integrity) Let s be an admissible configuration for
Stat, let zx 2 s:Ms and suppose s
when the message gets received by its intended
addressee.
Corollary 4.3 (message delivery) Let s be an admissible configuration for Stat
such that zx 2 s:Ms. Then the data message can be observed in at most 4 reductions
4.2 Analysis of Mob
The table in Figure 9 lists the situations that can arise during the migration of an
agent from a router to another. k is the case number, Ag(a; k; denotes the
shadow agents of a not in a Fwd or Idle state in situation k, CMs(a; k; the
migration protocol control messages at z involving at most the sites H(a);
that situation, and R(a; k; denotes the routers involved in situation k of the
protocol at which a's shadow is not in a Fwd or Idle state.
Relying on this table, we define in Figure 10 a notion of admissible function
fl. Intuitively, the function fl associates with each agent a its current migration
control (the state and the protocol messages), the routers already visited, and the
data messages in transit that are addressed to a.
We denote with P fin (X) and M fin (X) the finite parts, and finite multisets of X,
respectively, and with fl(a) i the i-th projection of the tuple fl(a). Then Act(a; fl)
denotes the routers where a has visited, which are not in an Idle state.
Figure 9: Control migration
Figure 10: Admissible configurations for Mob
Condition (C 1 ) states that at most finitely many agents can be on the move ("deranged") at
any instant. (C 2 ) is a hygiene condition on migration control messages, indicating
that they may be at exactly one of three positions, and that if an agent is on the
move (cases 7 and 9), the home forwarder always points to an active router, where
a proxy agent will return delayed data messages back to a's home; after receiving
the pending control message, the home forwarder will deliver the data message
to the current (correct) location of the mobile agent. Thus, although there may
apparently be forwarding cycles, these will always involve the home forwarder and
will be broken immediately on receipt of the pending control message. Condition
explicitly indicates where a control message involving a may be.
Definition 4.4 (admissible configuration) An admissible configuration for Mob,
m, is generated by a pair (fl; Ob) comprising an admissible function and a process
as follows:
where Rt,Ob are as in Figure 8 and
Ag(a; k;
Let m be an admissible configuration for Mob, generated by (fl; Ob). Further,
let m:DMs(a) denote \Pi (z;r 1 ;a 2 ;r 2 ;d)2fl(a) 6
d], the data messages in
state m addressed to a, and let m:DMs denote all data messages in state m.
We will write m:CMs and m:Ob to denote the state of the control messages and
atomic observations, respectively, in configuration m.
Theorem 4.5 The initial configuration Mob is admissible, and admissible configurations
are closed under reduction.
From this result, it is possible to derive an important property of system Mob:
it is always possible to bring the system to a stable state.
Corollary 4.6 (control stabilization) Let m be an admissible configuration for
Mob generated by (fl; Ob) and let
g.
that m 0 is determined by (fl In
particular, if fl(a)
6g.
The analogies of corollaries 4.2 and 4.3 can be stated as follows.
Messages neither get lost nor is their observable content tampered with.
Corollary 4.7 (message integrity) Let m be an admissible configuration for
Mob, generated by (fl; Ob), and suppose
either or else there exists a z 0
l x 0
that obs(x 0
l
Corollary 4.8 (message delivery) Let a 2 AN and let m be an admissible
configuration for Mob generated by (fl; Ob) such that fl(a) 1 2 K s and (z; r 1 ; a
fl(a) 6 . Then this data message can be observed in at most 10 reductions.
We now introduce a notion of what is observable of a process and a related
notion of barbed bisimulation (cf. [Par81, Pnu85]).
Definition 4.9 Let p be a process. Then O(p) is the following multiset (y can
be ffl):
We note that on an admissible configuration s,
can be applied to an admissible configuration m or c (cf. following definition 4.12).
Definition 4.10 A binary relation on processes R is a barbed bisimulation if
whenever p R q then the following conditions hold:
and symmetrically. Two processes p, q are barbed bisimilar, written p ≈ q, if they are related by a
barbed bisimulation.
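For concreteness, one standard way of spelling out these two conditions, using the commitments O(·) of Definition 4.9 and the reduction relation, is given below; this is our own reading and may differ in detail from the authors' exact clauses.

```latex
% One standard (weak) formulation of the two clauses; our reconstruction.
Whenever $p \mathrel{R} q$:
\begin{itemize}
  \item $O(p) = O(q)$;
  \item if $p \rightarrow p'$ then there exists $q'$ with
        $q \rightarrow^{*} q'$ and $p' \mathrel{R} q'$;
\end{itemize}
and symmetrically for $q$.
```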
We use the notion of barbed bisimulation to relate the simple system Stat
(viewed as a specification) to the more complex systems Mob and CMob. Note
that each process p has a unique commitment O(p). Taking as commitments the
atomic observations would lead to a strictly weaker equivalence.
Theorem 4.11 Stat ≈ Mob.
Proof hint. We define the relevant observable content of data messages in an admissible configuration
s for Stat and m for Mob as two corresponding multisets. Next, we introduce a relation S between
admissible configurations for Stat and Mob, and we show that S is a barbed bisimulation.
4.3 Analysis of CMob
The analysis of CMob follows the pattern presented above for Mob. The statement
of the invariant however is considerably more complicated. The table in Figure
11 lists the situations that can arise during the migration of an agent from a
router to another. Relying on this table, we define in Figure 12 a notion of
admissible function fl. Intuitively, the function fl associates with each agent a
its current migration control (state and protocol messages), the routers already
visited (either Fwd's or Mam's), and the data messages and update messages in
transit which are addressed to a.
Figure 11: Control migration with caching
Again, Act(a; fl) denotes the routers where a has visited, which are not in an
Idle state. Condition (C 1 ), as before, states that at most finitely many agents can
be on the move ("deranged") at any instant. (C 2 ) is, as before, an invariant on
control messages and the forwarder caches, indicating that there are no forwarding
cycles and the cached entries for each agent a always point to routers where
the mobile agent has visited. M serves to indicate the router at which there is a
Mam(a) when there is a pending regd message whose current location is indicated
using Z. (C 3 ) is an invariant dealing with data messages or forwarded data mes-
sages, which indicates that such messages may never arise from, be addressed to,
or be present at agents located at as yet unvisited routers. (C 4 ) is a condition on
update messages, stating that such messages are only sent between shadow agents
of two different agents, and that they may only originate, be present at and be
targeted to routers where the two agents have been active.
Definition 4.12 (admissible configuration with caching) An admissible configuration
with caching cm is generated by a pair (fl; Ob), consisting of an admissible
function with caching and a process, as follows:
where Rt and Ob are as in Figure 8, and
Ag(a; k;
Figure 12: Admissible configurations with caching
Let c be an admissible configuration with caching, generated by (fl; Ob). Fur-
ther, let c:DMs(a) denote \Pi (ddir ;z;r 1 ;a 2 ;r
d], the data messages
addressed to a in configuration c, and let c:DMs stand for all data messages
in configuration c. For convenience, we will write c:CMs and c:Ob to denote the
state of the control messages and atomic observations, respectively, in configuration
c.
Theorem 4.13 The initial configuration CMob is admissible, and admissible configurations
with caching are closed under reduction.
As in Mob, it is possible to bring CMob to a stable state.
Corollary 4.14 (control stabilization) Let c be an admissible configuration
with caching generated by (fl; Ob) and let
g. Then c ! -(10\Lambdan) c 0
such that c 0 is generated by (fl
In particular, if fl(a)
6g.
As in Stat and Mob, it is easy to derive corollaries concerning message integrity
and message delivery.
Corollary 4.15 (message integrity) Messages do not get lost nor is their observable
content tampered with. Let c be an admissible configuration with caching,
generated by (fl; Ob), and suppose c ! c 0 . Then for all z
or else there exists a z 0
l x 0
l 2 c 0 :DMs such that
l
Corollary 4.16 (message delivery) Let a 2 AN and let c be an admissible
configuration with caching generated by (fl; Ob) such that (ddir ; z; r 1 ; a
Then the data message can be "delivered" in a number of
reductions proportional to the length of the longest forwarding chain.
The analysis of the invariant allows us to extract some general principles for
the correct definition of the protocol (note that these principles are an output of
the analysis of our protocol model, they are not explicitly stated in the informal
description of the protocol).
- Cache entries and Fwd's always point to routers which have been visited by the agent.
- Any message from agent a to agent a' comes from a router r and is directed to a router r', which have
been visited, respectively, by agents a and a'.
- Agent a never sends update messages to its own shadow agents.
- The protocol for moving an agent a from one router to another terminates in a fixed number of steps.
- Given an agent a, the forwarding proxy agents never form forwarding cycles. This ensures that once
the agent a has settled in one router, data messages and update messages in transit can reach it in a
number of steps which is proportional to the length of the longest chain of Fwd's.
The bottom line of our analysis for system CMob is the analogue of theorem 4.11.
Theorem 4.17 Stat ≈ CMob.
5 Conclusions
We have described in a standard process description language a simplified version
of the Mobile IP protocol. We believe that a precise yet abstract model is useful in
establishing the correctness of the protocol, as well as providing a basis for simulation
and experimentation. Our modelling uses non-determinism and asynchronous
communication (with unbounded and unordered buffers). Non-determinism serves
as a powerful abstraction mechanism, assuring us of the correctness of the protocol
for arbitrary behaviors of the processes, even if we try different instances
of particular management policies (e.g., routing and cache management policies)
provided they maintain the same invariants as in the non-deterministic model.
Asynchronous communication makes minimal assumptions on the properties of
the communication channels and timeliness of messages. All we require is that
messages are not lost and in particular we assume there is a mechanism for avoiding
store-and-forward deadlocks. Our analysis shows that message loss can be
avoided by a router forwarding messages addressed to a mobile agent that is no
longer present in that subnet to its home router or to a router to which it has
moved. Moreover, these forwarding links never form cycles. Control Stabilization
is a key property, since cycles that a message may potentially traverse are broken
on stabilization. Furthermore, any (reasonable) cache update policy can be used
provided messages to an agent are forwarded to routers it has previously visited.
Our model allows mobility protocol designers explore alternative policies and
mechanisms for message forwarding and cache management. A concrete suggestion
is that rather than dropping a data message (delayed in transit) for an agent
that has moved away from a router, IPv6 designers could examine the tradeoff between
increased traffic and employing a default policy of tunneling the message to
the home subnet of the agent - particularly for applications where message loss
is costly, or in the context of multi-layer protocols. Other concrete applications
include designing mobility protocols where losing messages may be unacceptable,
e.g., forwarding signals in process migration mechanisms.
In our modelling, we have greatly simplified various details. On the one hand,
this simplification is useful, since it again serves as a way of abstracting from particular
protocols for establishing connections (e.g., Neighbor Discovery, etc.). On
the other hand, we have assumed that our so-called "control messages" eventually
reach their destination without getting lost or corrupted. A future direction of
work may be to model protocols that cope with failures, or to model security and
authentication issues.
By concentrating on an abstract and simple model we have been able to specify
the protocol and by a process of analysis to discover and explicate some of its
organizing principles. The specification and combinatorial analysis of the protocol
is sufficiently complicated to preclude leaving it "implicit" in the informal protocol
description. By a careful analysis we have been able to carry out a hand proof. A
direction for further research is the formal development of the proof using a proof
assistant.
Finally, we report on a finite state formulation of the protocol for which automatic
simulation and verification tools are available. The sets RN ; AN ; DN are
assumed finite, so that there are finitely many entities in the systems. Ensuring
that the number of messages does not grow in an unbounded manner also requires
that communication is over bounded capacity channels. In particular we will consider
the limit case where all communications are synchronous (we expect that a
protocol which works with synchronous communication can be easily adapted to
a situation where additional buffers are added).
The main difficulty lies in understanding how to transform asynchronous communication
into synchronous communication without introducing deadlocks. The
synchronous version seems to require a finer, more detailed description of the
protocol and makes the proof of correctness much more complicated. In retro-
spect, this fact justifies the use of an asynchronous communication model with
unbounded and unordered buffers. The systems FStat and FMob with synchronous
communication are described in Appendix B. We have compiled these descriptions
in the modelling language Promela of the simulation and verification tool SPIN
[Hol91]. Extensive simulations on configurations including three routers and three
agents have revealed no errors. We have been able to complete a verification for
the FStat system with two routers and two agents. The size of the verification
task and the complexity of the system FMob make verification of larger systems
difficult. The Promela sources for FMob are available at URL http://protis.univ-mrs.fr/-amadio/fmob.
--R
A foundation for actor computation.
Orca: a language for parallel programming of distributed systems.
Design of a distributed object manager for Smalltalk-80 system
Design and validation of computer protocols.
An object calculus for asynchronous communication.
A nitpick analysis of mobile IPv6.
Mobility support in IPv6 (RFC).
A Calculus of Mobile Process
Mobile Unity coordination constructs applied to packet forwarding.
The sprite network operating system.
Concurrency and automata on infinite sequences.
Indirect distributed garbage collection: handling object migration.
Process migration in demos/mp.
Linear and branching systems in the semantics and logics of reactive systems.
Expressing code mobility in mobile UNITY.
The Locus Distributed System Architecture.
Mobile UNITY: reasoning and specificaton in mobile computing.
Vip: a protocol providing host mobility.
Mobile objects in distributed Oz.
--TR
The LOCUS distributed system architecture
Design of a distributed object manager for the Smalltalk-80 system
Fine-grained mobility in the Emerald system
The Sprite Network Operating System
Design and validation of computer protocols
IP-based protocols for mobile internetworking
Orca
A calculus of mobile processes, I
VIP: a protocol providing host mobility
Indirect distributed garbage collection
Mobile UNITY
Mobile objects in distributed Oz
Expressing code mobility in mobile UNITY
An Object Calculus for Asynchronous Communication
Linear and Branching Structures in the Semantics and Logics of Reactive Systems
Modelling IP Mobility
Mobile UNITY Coordination Constructs Applied to Packet Forwarding for Mobile Hosts
Concurrency and Automata on Infinite Sequences
Process migration in DEMOS/MP
A foundation for actor computation | internet protocols;formal methods;mobility;bisimulation;modelling;process description languages;protocol analysis;verification |
354864 | Design and Evaluation of a Switch Cache Architecture for CC-NUMA Multiprocessors. | AbstractCache coherent nonuniform memory access (CC-NUMA) multiprocessors provide a scalable design for shared memory. But, they continue to suffer from large remote memory access latencies due to comparatively slow memory technology and large data transfer latencies in the interconnection network. In this paper, we propose a novel hardware caching technique, called switch cache, to improve the remote memory access performance of CC-NUMA multiprocessors. The main idea is to implement small fast caches in crossbar switches of the interconnect medium to capture and store shared data as they flow from the memory module to the requesting processor. This stored data acts as a cache for subsequent requests, thus reducing the need for remote memory accesses tremendously. The implementation of a cache in a crossbar switch needs to be efficient and robust, yet flexible for changes in the caching protocol. The design and implementation details of a CAche Embedded Switch ARchitecture, CAESAR, using wormhole routing with virtual channels is presented. We explore the design space of switch caches by modeling CAESAR in a detailed execution driven simulator and analyze the performance benefits. Our results show that the CAESAR switch cache is capable of improving the performance of CC-NUMA multiprocessors by up to 45 percent reduction in remote memory accesses for some applications. By serving remote read requests at various stages in the interconnect, we observe improvements in execution time as high as 20 percent for these applications. We conclude that switch caches provide a cost-effective solution for designing high performance CC-NUMA multiprocessors. | Introduction
To alleviate the problem of high memory access latencies, shared memory multiprocessors employ
processors with small fast on-chip caches and additionally larger off-chip caches. Symmetric multiprocessor
(SMP) systems are usually built using a shared global bus. However the contention on
the bus and memory heavily constrains the number of processors that can be connected to the bus.
To build high performance systems that are scalable, several current systems [1, 10, 12, 13] employ
the cache coherent non-uniform memory access (CC-NUMA) architecture. In such a system, the
shared memory is distributed among all the nodes in the system to provide a closer local memory
and several remote memories. While local memory access latencies can be tolerated, the remote
memory accesses generated during the execution can bring down the performance of applications
drastically.
To reduce the impact of remote memory access latencies, researchers have proposed improved
caching strategies [14, 18, 27] within each cluster of the multiprocessor. These caching techniques
are primarily based on data sharing among multiple processors within the same cluster. Nayfeh
et al. [18] explore the use of shared L2 caches to benefit from the shared working set between the
processors within a cluster. Another alternative is the use of network caches or remote data caches
[14, 27]. Network caches reduce the remote access penalty by serving capacity misses of L2 caches
and by providing an additional layer of shared cache to processors within a cluster. The HP Exemplar
[1] implements the network cache as a configurable partition of the local memory. Sequent's
NUMA-Q [13] dedicates a 32MB DRAM memory for the network cache. The DASH multiprocessor
[12] has provision for a network cache called the remote access cache. A recent proposal by Moga et
al.[14] explores the use of small SRAM (instead of DRAM) network caches integrated with a page
cache. The use of 32KB SRAM chips reduces the access latency of network caches tremendously.
Our goal is to reduce remote memory access latencies by implementing a global shared cache abstraction
central to all processors in the CC-NUMA system. We observe that network caches
provide such an abstraction limited to the processors within a cluster. We explore the implementation
issues and performance benefits of a multi-level caching scheme that can be incorporated
into current CC-NUMA systems. By embedding a small fast SRAM cache within each switch in
the interconnection network, called switch cache, we capture shared data as it flows through the
interconnect and provide it to future accesses from processors that re-use this data. Such a scheme
can be considered as a multi-level caching scheme, but without inclusion property. Our studies on
application behavior indicate that there is enough spatial and temporal locality between requests
from processors to benefit from small switch caches. Recently, a study by Mohapatra et al. [15]
used synthetic workloads and showed that increasing the buffer size in a crossbar switch beyond a
certain value does not have much impact on network performance. Our application-based study [4]
confirms that this observation holds true for the CC-NUMA environment running several scientific
applications. Thus we think that the large amount of buffers in current switches, such as SPIDER
[7], is an overkill. A better utilization of these buffers can be accomplished by organizing them as
a switch cache.
There are several issues to be considered while designing such a caching technique. These include
cache design issues such as technology & organization, cache coherence issues, switch design issues
such as arbitration, and message flow control issues such as appropriate routing strategy, message
layout etc. The first issue is to design and analyze a cache organization that is large enough to hold
the reusable data, yet fast enough to operate during the time a request passes through the switch.
The second issue involves modifying the directory-based cache protocol to handle an additional
caching layer at the switching elements, so that all the cache blocks in the system are properly
invalidated on a write. The third issue is to design buffers and arbitration in the switch which
will guarantee certain cache actions within the switch delay period. For example, when a read
request travels through a switch cache, it must not incur additional delays. Even in the case of
a switch cache hit, the request must pass on to the home node to update the directory, but not
generate a memory read operation. The final issue deals with message header design to enable
request encoding, network routing etc.
The contribution of this paper is the detailed design and performance evaluation of a switch cache
interconnect employing CAESAR, a CAche Embedded Switch ARchitecture. The CAESAR switch
cache is made up of a small SRAM cache operating at the same speed as a wormhole routed cross-bar
switch with virtual channels. The switch design is optimized to maintain crossbar bandwidth
and throughput, while at the same time providing sufficient switch cache throughput and improved
remote access performance. The performance evaluation of the switch cache interconnect is conducted
using six scientific applications. We present several sensitivity studies to cover the entire
design space and to identify the important parameters. Our experiments show that switch caches
offer a great potential for use in future CC-NUMA interconnects for some of these applications.
The rest of the paper is organized as follows. Section 2 provides a background on the remote access
characteristics of several applications in a CC-NUMA environment and builds the motivation behind
our proposed global caching approach. The switch cache framework and the caching protocol are
presented in Section 3. Section 4 covers the design and implementation of the crossbar switch cache
architecture called CAESAR. Performance benefits of the switch cache framework are evaluated
and analyzed in great detail in Section 5. Sensitivity studies over various design parameters are
also presented in Section 5. Finally, Section 6 summarizes and concludes the paper.
Figure 1: CC-NUMA system & memory hierarchy
2 Application Characteristics and Motivation
Several current distributed shared memory multiprocessors have adopted the CC-NUMA architecture
since it provides transparent access to data. Figure 1 shows the disparities in proximity and
access time in the CC-NUMA memory hierarchy of such systems. A load or store issued by processor
X can be served in a few cycles upon L1 or L2 cache hits, in less than a hundred cycles for
local memory access or incurs few hundreds of cycles due to a remote memory access. While the
latency for stores to the memory (write transactions) can be hidden by the use of weak consistency
models, the stall time due to loads (read transactions) to memory can severely degrade application
performance.
2.1 Global Cache Benefits: A Trace Analysis
To reduce the impact of remote read transactions, we would like to exploit the locality in sharing
patterns between the processors. Figure 2 plots the read sharing pattern for six applications with
processors using a cache line size of 32 bytes. The six applications used in this evaluation are the
Floyd-Warshall's Algorithm (FWA), Gaussian Elimination (GAUSS), Gram-Schmidt (GS), Matrix
Multiplication (MM), Successive Over Relaxation (SOR) and the Fast Fourier Transform (FFT).
The x-axis represents the number of sharing processors (X) while the y-axis denotes the number of
accesses to blocks shared by X number of processors. From the figure, we observe that for four out
of the six applications (FWA, GAUSS, GS, MM), multiple processors read the same block between
two consecutive writes to that block. These shared reads form a major portion (35 to 80%) of the
application's read misses.
Figure 2: Application read sharing characteristics. (a) FWA, (b) SOR, (c) GAUSS, (d) GS, (e) FFT, (f) MM.
To take advantage of such read-sharing patterns across processors, we
introduce the concept of an ideal global cache that is centrally accessible to all processors. When
the first request is served at the memory, the data sent back in the reply message is stored in a
global cache. Since the cache is accessible by all processors, subsequent requests to the data item
can be satisfied by the global cache at low latencies. There are two questions that arise here:
- What is the time lag between two accesses from different processors to the same block? We define this
as temporal read sharing locality between the processors, somewhat equivalent to temporal locality in a
uniprocessor system. The question raised here particularly relates to the size and organization of the
global cache. In general, it would translate to: the longer the time lag, the bigger the size of the
required global cache.
- Given that a block can be held in a central location, how many requests can be satisfied by this
cached block? We call this term attainable read sharing to estimate the performance improvement by
employing a global cache. Again, this metric will give an indication of the required size for the
global cache.
To answer these questions, we instrumented a simulator to generate an execution trace with
information regarding each cache miss. We then fed these traces through a trace analysis tool,
the Sharing Identifier and Locality Analyzer (SILA). In order to evaluate the potential of a global
cache, SILA generates two different sets of data: temporal read sharing locality (Figure 3) and
attainable sharing (Figure 4). The data sets can be interpreted as follows.
Figure 3: The temporal locality of shared accesses. (a) FWA, (b) GS, (c) GAUSS, (d) SOR, (e) FFT,
(f) MM; x-axis: inter-arrival time (pcycles).
Temporal Read Sharing Locality: Figure 3 depicts the temporal read sharing locality as a
function of different block sizes. A point {X, Y} from this data set indicates that Y is the probability
that two read transactions (from different processors) to the same block occur within a time distance of
X or lower (i.e., X is the inter-arrival time between two consecutive read requests to the same block).
As seen in the figure, most applications have an inherent temporal re-use of the cached block by other
processors. The inter-arrival time between two consecutive shared read transactions from different
processors to the same block is found to be less than 500 processor cycles (pcycles) for 60-80% of the
shared read transactions for all applications. Ideally, this indicates a potential for at least one
extra request to be satisfied per globally cached block.
Attainable Read Sharing: Figure 4 explores the number of requests that can be satisfied by the global
caching technique, termed the attainable sharing degree. A point {X, Y} in this data set indicates that
if each block can be held for X cycles in the global cache, the average number of subsequent requests
per block that can be served is Y. The figure depicts the attainable read
sharing degree for each application based on the residence time for the block in the global cache.
The residence time of a cache block is defined as the amount of time the block is held in the cache
before it is replaced or invalidated. While invalidations cannot be avoided, note that the residence
time directly relates to several cache design issues such as cache size, block size and associativity.
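The two data sets can be computed from a miss trace with a few lines of code. The sketch below is ours (not the authors' SILA tool); it ignores writes, invalidations and replacements and assumes a trace of (cycle, processor, block) records sorted by time.

```python
from collections import defaultdict

def temporal_read_sharing(trace):
    """Inter-arrival times between consecutive reads to a block by different processors."""
    last = {}                      # block -> (cycle, processor) of the previous read
    gaps = []
    for cycle, proc, block in trace:
        if block in last and last[block][1] != proc:
            gaps.append(cycle - last[block][0])
        last[block] = (cycle, proc)
    return gaps

def attainable_sharing_degree(trace, residence_time):
    """Average number of subsequent reads per block served while the block stays cached."""
    entered = {}                   # block -> cycle at which it entered the global cache
    hits = defaultdict(int)
    for cycle, proc, block in trace:
        if block not in entered:
            entered[block] = cycle                     # first read fills the cache
        elif cycle - entered[block] <= residence_time:
            hits[block] += 1                           # served by the global cache
    return sum(hits.values()) / max(len(entered), 1)
```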
Figure 4: The attainable sharing degree. (a) FWA, (b) GS, (c) GAUSS, (d) SOR, (e) FFT, (f) MM.
From Figure 2 we observed that FWA, GS and GAUSS have high read sharing degrees close to
the number of processors in the system (in this case, 16 processors). However, it is found that
the attainable sharing degree varies according to the temporal locality of each application. While
GAUSS can attain a sharing degree of 10 in the global cache with a residence time of 2000 processor
cycles, GS requires that the residence time be 5000 and FWA requires that this time be 7000. The
MM application has a sharing degree of approximately four to five, whereas the attainable sharing
degree is much lower. SOR and FFT are not of much interest since they have a very low percentage
(1-2%) of shared block accesses (see Figure 2).
2.2 The Interconnect as a Global Cache
In section 2.1, we identified the need for global caching to improve the remote access performance
of CC-NUMA systems. In this section, we explore the possible use of the interconnect medium as
a central caching location:
- What makes the interconnect medium a suitable candidate for central caching?
- Which interconnect topology is beneficial for incorporating a central caching scheme?
Figure 5: The caching potential of the interconnect medium
Communication in a shared memory system occurs in transactions that consist of requests and replies.
The requests and replies traverse from one node to another through the network. The
interconnection network becomes the only global yet distributed medium in the entire system that
can keep track of all the transactions that occur between all nodes. A global sharing framework
can efficiently employ the network elements for coherent caching of data. The potential for such
a network caching strategy is illustrated in Figure 5. The path of two read transactions to the
same memory module that emerge from different processors overlap at some point in the network.
The common elements in the network can act as small caches for replies from memory. A later
request to the recently accessed block X can potentially find the block cached in the common
routing element (illustrated by a shaded circle). The benefit of such a scheme is two-fold. From
the processor point of view, the read request gets serviced at a latency much lower than the remote
memory access latency. Secondly, the memory is relieved of servicing the requests that hit in the
global interconnect cache, thus improving the access times of other requests that are sent to the
same memory module.
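The overlap itself is easy to picture with a toy tree model (ours, unrelated to the exact BMIN routing described later): the switches that two processors share on their way to the same memory module are exactly the places where a cached copy of block X can serve the second request.

```python
def switches_to_root(leaf, levels):
    """Switch indices visited from a leaf up the levels of a binary tree."""
    nodes, n = [], leaf
    for _ in range(levels):
        n //= 2
        nodes.append(n)
    return nodes

# Processors 5 and 7 heading for the same memory share the upper switches:
shared = set(switches_to_root(5, 3)) & set(switches_to_root(7, 3))
print(sorted(shared))   # e.g. [0, 1] -> candidate caching points for block X
```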
From the example, we observe that incorporating such schemes in the interconnect requires a significant
amount of path overlap between processor to memory requests and that replies follow the
same path as requests in order to provide an intersection. The routing and flow of requests and
replies depends entirely upon the topology of the interconnect. The ideal topology for such a system
is the global bus. However, the global bus is not a scalable medium and bus contention severely
affects the performance when the number of processors increases beyond a certain threshold. Con-
sequently, multiple bus hierarchies and fat-tree structures have been considered effective solutions
to the scalability problem. The tree structure provides the next best alternative to hierarchical
caching schemes.
3 The Switch Cache Framework
In this section, we present a new hardware realization of the ideal global caching solution for
improving the remote access performance of shared memory multiprocessors.
3.1 The Switch Cache Interconnect
Network topology plays an important role in determining the paths from a source to a destination in
the system. Tree-based networks like the fat tree [11], the hierarchical bus network [23, 3] and the
multistage interconnection network (MIN) [17] provide hierarchical topologies suitable for global
caching. In addition, the MIN is highly scalable and it provides a bisection bandwidth that scales
linearly with the number of nodes in the system. These features of the MIN make it very attractive as
scalable high performance interconnects for commercial systems. Existing systems such as Butterfly
[2], CM-5 [11] and IBM SP2 [21] employ a bidirectional MIN. The Illinois Cedar multiprocessor [22]
employs two separate uni-directional MINs (one for requests and one for replies). In this paper, the
switch cache interconnect is a bidirectional MIN to take advantage of the inherent tree structure.
Note, however, that logical trees can also be embedded on other popular direct networks like the
mesh and the hypercube [10, 12].
The baseline topology of the 16-node bi-directional MIN (BMIN) is shown in Figure 6(a). In general,
an N-node system using a BMIN comprises N/k switching elements (a 2k × 2k crossbar) in each of
the log_k N stages connected by bidirectional links. We chose wormhole routing with virtual channels
as the switching technique because it is prevalent in current systems such as the SGI Origin[10].
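As a quick check of these dimensions, the throwaway helper below (ours) just evaluates the formula above for the switch sizes used in this paper.

```python
import math

def bmin_dimensions(n_nodes, k):
    """Switches per stage and number of stages for an N-node BMIN of 2k x 2k switches."""
    return n_nodes // k, round(math.log(n_nodes, k))

print(bmin_dimensions(16, 2))   # 4 x 4 switches: (8 switches per stage, 4 stages)
```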
In a shared memory system, communication between nodes is accomplished via read/write transactions
and coherence requests/acknowledgments. The read/write requests and coherence replies
from the processor to the memory use forward links to traverse through the switches. Similarly,
read/write replies with data and coherence requests from memory to the processor traverse the
backward path, as shown in bold in Figure 6(a). Separating the paths enables separate resources
and reduces the possibility of deadlocks in the network. At the same time, routing them through
the same switches provides identical paths for requests and replies for a processor-memory pair
that is essential to develop a switch cache hierarchy. The BMIN tree structure that enables this
hierarchy is shown in Figure 6(b).
The basic idea of our caching strategy is to utilize the tree structure in the BMIN and the path
overlap of requests, replies, and coherence messages to provide coherent shared data in the intercon-
nect. By incorporating small fast caches in the switching elements of the BMIN, we can serve the
sharing processors at these switching elements. We use the term switch cache to differentiate these
caches from the processor caches. An example of a BMIN employing switch caches that can serve
multiple processors is shown in Figure 6(c). An initial shared read request from processor P i to a
block is served at the remote memory M j. When the reply data flows through the interconnection network,
the block is captured and saved in the switch cache at each switching element along the path. Subsequent
requests to the same block from sharing processors, such as P j and P k, take advantage of the data
blocks in the switch cache at different stages, thus incurring reduced read latencies.
Figure 6: The Switch Cache Interconnect. (a) Bidirectional MIN: structure and routing; (b) multiple tree
structures in the BMIN; (c) switch caching in the BMIN.
3.2 The Caching Protocol
The incorporation of processor caches in a multiprocessor introduces the well-known cache coherence
problem. Many hardware cache-coherent systems employ a full-map directory scheme [6], In this
scheme, each node maintains a bit vector to keep track of all the sharers of each block in its local
shared memory space. On every write, an ownership request is sent to the home node, invalidations
are sent to all the sharers of the block and the ownership is granted only when all the corresponding
acknowledgments are received. At the processing node, each cache employs a three-state (MSI)
protocol to keep track of the state of each cache line. Incorporating switch caches comes with the
requirement that these caches remain coherent, and data access remain consistent with the system
consistency model.
Figure 7: Switch Cache Protocol Execution. (a) Switch cache state diagram; (b) change in directory protocol.
We adopt a sequential consistency model in this paper. Our basic caching scheme can be represented
by the state diagram in Figure 7 and explained as follows: The switch cache stores only
blocks in a shared state in the system. When a block is read to the processor cache in a dirty state,
it is not cached in the switch. Effectively, the switch cache needs to employ only a 2-state protocol
where the state of a block can be either SHARED or NOT VALID. The transitions of blocks from
one state to another is shown in Figure 7a. To illustrate the difference between block invalidations
(Inv Type) and block replacements (ReadReply ), the figure shows the NOT VALID state
conceptually separated into two states INVALID and NOT PRESENT respectively.
Read Requests: Each read request message that enters the interconnect checks the switch caches
along its path. In the event of a switch cache hit , the switch cache is responsible for providing the
data and sending the reply to the requestor. The original message is marked as switch hit and it
continues to the destination memory (ignoring subsequent switch caches along its path) with the
sole purpose of informing the home node that another sharer just read the block. Such a message is
called a marked read request. This request is necessary to maintain the full-map directory protocol.
Note that memory access is not needed for these marked requests and no reply is generated. At
the destination memory, a marked read request can find the block in only two states, SHARED, or
TRANSIENT (see Figure 7b). If the directory state is SHARED, then this request only updates the
sharing vector in the directory. However, it is possible that a write has been initiated to the same
cache line by a different processor and the invalidation for this write has not yet propagated to the
switch cache. This can only be present due to false sharing or an application that allows data race
conditions to exist. If this occurs, then the marked read request observes a TRANSIENT state at the
directory. In such an event, the directory sends an invalidation to the processor that requested the
read and waits for this additional acknowledgment before committing the write.
Figure 8: Message Header Format. Fields include Dest, Src, physical address, flit count, Req type, Age,
and a switch cache hit (Swc Hit) bit.
Read Replies: Read replies can originate from two different sources, namely, the memory node
or an owner's cache in dirty state. Both read replies enter the interconnect following the backward
path. The read reply originating from the memory node should check the switch cache along its
path. If the line is not present in a switch cache, the data carried by the read reply is written into
the cache. The state of the cache line is marked SHARED. For messages originating from an owner's
cache, only those replies whose home node and requester node are not identical can be allowed to
check the switch caches along the way. Those replies that find the line absent in the cache will
enter the data in the cache. If the home node and the requester are the same, the reply should
ignore the switch cache. This reply is not allowed to enter the switch cache because subsequent
modification of the block by the owner will not be visible to the interconnection network. Thus the
path from the owner to the requester is not coherent and does not overlap with the write request
or the invalidation request responsible for coherence (as explained next). In summary, only read
replies with non-identical home node and requester node should enter data into the switch cache,
if not already present.
Writes, Write-backs and Coherence Messages: These requests flow through the switches,
check the switch cache and invalidate the cache line if present in the cache. By doing so, the switch
cache coherence is maintained somewhat transparently.
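Pulling the three cases together, a compact sketch of the per-message switch cache action might look as follows. This is our own pseudocode-style Python, using the sc read / sc write / sc inv / sc ignore encodings introduced below; forward and send_reply stand for the switch's ordinary output path and reply injection and are stubbed out here.

```python
def forward(msg):                  # hand the message to the crossbar output (stub)
    pass

def send_reply(dest, addr, data):  # inject a read reply toward `dest` (stub)
    pass

def switch_cache_action(cache, msg):
    """cache: dict addr -> data; presence means SHARED, absence means NOT VALID."""
    addr, kind = msg["addr"], msg["type"]
    if kind == "sc_read":
        if addr in cache and not msg["marked"]:
            send_reply(msg["src"], addr, cache[addr])   # hit: reply from the switch
            msg["marked"] = True                        # later caches and memory skip it
        forward(msg)                                    # directory still sees the read
    elif kind == "sc_write":                            # read reply carrying data
        cache.setdefault(addr, msg["data"])             # enter the block as SHARED
        forward(msg)
    elif kind == "sc_inv":                              # writes, write-backs, coherence
        cache.pop(addr, None)                           # invalidate if present
        forward(msg)
    else:                                               # sc_ignore
        forward(msg)
```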
Message Format: The messages are formatted at the network interface where requests, replies
and coherence messages are prepared to be sent over the network. In a wormhole-routed network,
messages are made up of flow control digits or flits. Each flit is 8 bytes as in Cavallino [5]. The
message header contains the routing information, while data flits follow the path created by the
header. The format of the message header is shown in Figure 8. To implement the caching technique,
we require that the header consists of 3 additional bits of information. Two bits (Reqtype) are encoded
to denote the switch cache access type as follows:
- sc read: Read the cache line from the switch cache. If present, mark the read header and generate a
reply message.
- sc write: Write the cache line into the switch cache.
- sc inv: Invalidate the cache line, if present in the cache.
- sc ignore: Ignore the switch cache; no processing required.
Figure 9: A conventional crossbar switch
Note from the above description and the caching protocol that read requests are encoded as sc read
requests. Read replies whose home node and requestor id are different are encoded as sc write. Coherence
messages, write ownership requests and write-back requests are encoded as sc inv requests.
All other requests can be encoded as sc ignore requests. An additional bit is required to mark
sc read requests as earlier switch cache hit. Such a request is called a marked read request. This
is used to avoid multiple caches servicing the same request. As discussed, such a marked request
only updates the directory and avoids a memory access.
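A toy encoding of these three extra header bits is shown below. The paper fixes only that two Reqtype bits and one marked/switch-hit bit exist, so the numeric values and bit positions chosen here are our own.

```python
SC_READ, SC_WRITE, SC_INV, SC_IGNORE = 0, 1, 2, 3      # 2-bit Reqtype values (our numbering)

def set_sc_fields(header, reqtype, marked=False):
    """Pack Reqtype into bits 0-1 and the marked (switch-hit) flag into bit 2."""
    return (header & ~0b111) | (reqtype & 0b11) | (int(marked) << 2)

def get_sc_fields(header):
    return header & 0b11, bool(header & 0b100)
```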
4 Switch Cache Design and Implementation
Crossbar switches provide an excellent building block for scalable high performance interconnects.
Crossbar switches mainly differ in two design issues: switching technique and buffer management.
We use wormhole routing as the switching technique and input buffering with virtual channels since
these are prevalent in current commercial crossbar switches [5, 7]. We begin by presenting the
organization and design alternatives for a 4 \Theta 4 crossbar switch cache. In a later subsection, the
extensions required for incorporating a switch cache module into a larger (8 \Theta 8) crossbar switch
are presented.
[Figure 10: Crossbar Switch Cache Organization. (a) Arbitration-Independent (b) Arbitration-Dependent. In both organizations, a switch cache module with its own input block is attached to the 4 \Theta 4 crossbar alongside the routing tables and arbiter.]
4.1 Switch Cache Organization
Our base bi-directional crossbar switch has four inputs and four outputs as shown in Figure 9. Each
input link in the crossbar switch has two virtual channels thus providing 8 possible input candidates
for arbitration. The arbitration process uses the age technique, similar to that employed in the SGI
Spider Switch [7]. At each arbitration cycle, a maximum of the 4 highest-age flits are selected from the 8
possible arbitration candidates. The flit size is chosen to be 8 bytes (4w) with 16-bit links, similar
to the Intel Cavallino [5]. Thus it takes four link cycles to transmit a flit from the output of one switch
to the input of the next. Buffering in the crossbar switch is provided at the input block at each
link. The input block is organized as a fixed size FIFO buffer for each virtual channel that stores
flits belonging to a single message at a time. The virtual channels are also partitioned based on
the destination node. This avoids out-of-order arrival of messages originating from the same source
to the same destination. We also provide a bypass path for the incoming flits that can be directly
transmitted to the output if the input buffer is empty.
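A minimal data-structure sketch of one such input block follows, assuming (as stated above) two virtual channels per link, a fixed 4-flit FIFO per channel, 8-byte flits, and a bypass path; all type and field names here are our own.

#include <stdint.h>
#include <stdbool.h>

#define NUM_VC     2      /* virtual channels per input link            */
#define FIFO_FLITS 4      /* fixed FIFO depth, in flits                 */
#define FLIT_BYTES 8      /* 8-byte (4w) flits                          */

typedef struct { uint8_t bytes[FLIT_BYTES]; } flit_t;

typedef struct {
    flit_t buf[FIFO_FLITS];
    int    head, tail, count;
    int    dest_node;      /* VCs are partitioned by destination node   */
} vc_fifo_t;

typedef struct {
    vc_fifo_t vc[NUM_VC];
    bool      bypass_ok;   /* an incoming flit may bypass straight to
                              the output when the selected FIFO is empty */
} input_block_t;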
While organizing the switch cache, we are particularly interested in maximizing performance by
serving flits within the cycles required for the operation of the base crossbar switch. Thus the
organization depends highly on the delay experienced for link transmission and crossbar switch
processing. Here we present two different alternatives for organizing the switch cache module within
conventional crossbar switches. The arbitration-independent organization is based on a crossbar
switch operation similar to the SGI Spider [7]. The arbitration-dependent organization is based on
a crossbar switch operation similar to the Intel Cavallino [5].
Arbitration-Independent Organization: This switch cache organization is based on link and
switch processing delays similar to those experienced in the SGI Spider. The internal switch core
runs at 100 MHz, while the link transmission operates at 400MHz. The switch takes four 100MHz
clocks to move flits from the input to the link transmitter at the output. The link, on the other
hand, can transmit an 8-byte flit in a single 100 MHz clock (four 400 MHz clocks). Figure 10a
shows the arbitration-independent organization of the switch cache. The organization is arbitration
independent because the switch cache performs the critical operations in parallel with the arbitration
operation. At the beginning of each arbitration cycle, a maximum of four input flits stored in
the input link registers are transmitted to the switch cache module. In order to maintain flow
control, all required switch cache processing should be performed in parallel with the arbitration
and transmission delay of the switch (4 cycles).
Arbitration-Dependent Organization: This switch cache organization is based on link and
switch processing delays similar to those experienced in the Intel Cavallino [5]. The internal switch
core and link transmission operate at 200MHz. It takes 4 cycles for the crossbar switch to pass the
flit from its input to the output transmitter and 4 cycles for the link to transmit one flit from one
switch to another. Figure 10b shows the arbitration-dependent organization of the switch cache.
The organization is arbitration dependent because it performs the critical operations at the end
of the arbitration cycle and in parallel with the output link transmission. At the end of every
arbitration cycle, a maximum of four flits passed through the crossbar from input buffers to the
output link transmitters are also transmitted to the switch cache. Since the output transmission
takes four 200MHz cycles, the switch cache needs to process a maximum of four flits within 4 cycles.
Each organization has its advantages/disadvantages. In the arbitration-independent organization,
the cache operates at the switch core frequency and remains independent of the link speed. On the
other hand, this organization lacks arbitration information which could be useful for completing
operations in an orderly manner. While this issue does not affect 4 \Theta 4 switches, the drawback will
be evident when we study the design of larger crossbar switches. The arbitration-dependent organization
benefits from the completion of the arbitration phase and can use the resultant information
for timely completion of switch cache processing. However in this organization, the cache is required
to run at link transmission frequency in order to complete critical switch cache operations. As in
the Intel Cavallino, it is possible to run the switch core, the switch cache and the link transmission
at the same speed.
Finally, note that, in both cases the reply messages from the switch cache module are stored in
another input block as shown in Figure 10. With two virtual channels per input block, the crossbar
switch size expands from 4 \Theta 4 to 5 \Theta 4. Also, in both cases, the maximum number of switch cache inputs is
4 requests and the processing time is limited to 4 switch cache cycles.
[Figure 11: Cache Access Time (in FO4) versus cache size (in bytes) for direct mapped and set associative organizations. (a) 32-byte cache lines (b) 64-byte cache lines.]
4.2 Cache Design: Area and Access Time Issues
The access time and area occupied by an SRAM cache depends on several factors such as asso-
ciativity, cache size, number of wordlines and number of bitlines [25, 16]. In this section, we study
the impact of cache size and associativity on access time and area constraints. Our aim is to find the
appropriate design parameters for our crossbar switch cache.
Cache Access Time: The CACTI model [25] quantifies the relationship between cache size,
associativity and cache access time. We ran the CACTI model to measure the switch cache access
time for different cache sizes and set associativity values. In order to use the measurements in
a technology independent manner, we present the results using the delay measurement technique
known as the fan-out-of-four (FO4). One FO4 is the delay of a signal passing through an inverter
driving a load capacitance that is four times larger than its input. It is known that an 8 Kbyte data
cache has a cycle time of 25 FO4 [9].
Figure 11 shows the results obtained in FO4 units. In Figure 11, the x-axis denotes the size of the
cache, and each curve represents the access time for a particular set associativity. We find that
direct mapped caches have the lowest access time since a direct indexing method is used to locate
the line. However, a direct mapped cache may exhibit poor hit ratios due to mapping conflicts in
the switch cache. Set-associative caches can provide improved cache hit ratios, but have a longer
cycle time due to a higher data array decode delay. Most current processors employ multi-ported
set associative L1 caches operating within a single processor cycle. We have chosen a 2-way set
associative design for our base crossbar switch cache to maintain a balance between access time and
hit ratio. However, we also study the effect of varied set associativity on switch cache performance.
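For concreteness, a 2-way set associative lookup over a small SRAM array can be sketched as below. The geometry used here (1 KB capacity, 32-byte lines) and the names are illustrative assumptions, not the exact CAESAR parameters.

#include <stdint.h>
#include <stdbool.h>

#define LINE_BYTES  32
#define WAYS         2
#define CACHE_BYTES 1024
#define SETS (CACHE_BYTES / (LINE_BYTES * WAYS))   /* 16 sets */

typedef struct { bool valid; uint64_t tag; uint8_t data[LINE_BYTES]; } line_t;
static line_t cache[SETS][WAYS];

/* Returns the way index on a hit, or -1 on a miss. */
int sc_lookup(uint64_t addr)
{
    uint64_t set = (addr / LINE_BYTES) % SETS;
    uint64_t tag = addr / (LINE_BYTES * SETS);
    for (int w = 0; w < WAYS; w++)
        if (cache[set][w].valid && cache[set][w].tag == tag)
            return w;
    return -1;
}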
Cache output width is also an important issue that primarily affects the data read/write delay.
As studied by Wilton et al. [25], the increase in data array width increases the number of sense
amplifiers required. The organization of the cache can also make a significant difference in terms of chip area.
[Figure 12: Cache Relative Area. (a) 32-byte cache lines (b) 64-byte cache lines.]
Narrower caches provide data in multiple cycles, thus increasing the cache access time for
an average read request. For example, a cache with 32-byte blocks and a width of 64 bits decreases
the cache throughput to one read in four cycles. Within the range of 64 to 256 bits of data output
width, we know that 64 bits will provide the worst possible performance scenario. We designed our
switch cache using a 64-bit output width and show that the overall performance is not affected by
this parameter.
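The throughput argument above is simple arithmetic: with a 64-bit data output and 32-byte lines, streaming a full line takes four back-to-back transfers. A one-line helper (purely illustrative) makes the relationship explicit.

/* Cycles needed to stream one cache line through the data output port,
 * e.g. (32 * 8) / 64 = 4 cycles for the configuration discussed above. */
static inline int line_read_cycles(int line_bytes, int output_bits)
{
    return (line_bytes * 8 + output_bits - 1) / output_bits;  /* round up */
}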
Cache Relative Area: In order to determine the impact of cache size and set associativity on
the area occupied by an on-chip cache, we use the area model proposed by Mulder et al.[16]. The
area model incorporates overhead area such as drivers, sense amplifiers, tags and control logic to
compare data buffers of different organizations in a technology independent manner using register-
bit equivalent or rbe. One rbe equals the area of a bit storage cell. We used this area model and
obtained measurements for different cache sizes and associativities.
Figure 12 shows the results obtained in relative area. The x-axis denotes the size of the cache and
each curve represents different set associativity values. For small cache sizes ranging from 512 bytes
to 4KB, we find that the amount of area occupied by the direct mapped cache is much lower than
that for an 8-way set associative cache. From the figure, we find that an increase in associativity
from 1 to 2 has a lower impact on cache area than an increase from 2 to 4. From this observation,
we think that a 2-way set associative cache design would be the most suitable organization in terms
of cache area and cache access time, as measured earlier.
4.3 CAche Embedded Switch ARchitecture (CAESAR)
In this section, we present a hardware design for our crossbar switch cache called CAESAR, (CAche
Embedded Switch ARchitecture). A block diagram of the CAESAR implementation is shown in
Figure 13.
[Figure 13: Implementation of CAESAR: a flit processing unit handles the flits transmitted from the crossbar; read and invalidation headers enter the RI buffer and the snoop registers, write flits enter the WR buffers, requests access the cache tag and data units under the cache access control, a reply unit sends read replies to a switch cache input block, and blocking information is fed back to the input block.]
For a 4 \Theta 4 switch, a maximum of 4 flits are latched into switch cache registers at each
arbitration cycle. The operation of the CAESAR switch cache can be divided into (1) process
incoming flits, (2) switch cache access, (3) switch cache reply generation, and (4) switch cache
feedback. In this section, we cover the design and implementation issues for each of these operations
in detail.
Process Incoming Flits: Incoming flits stored into the registers can belong to different request
types. The request type of the flit is identified based on the 2 bits (R1R0) stored in the header.
Header flits of each request contain the relevant information including memory address required for
processing reads and invalidations. Subsequent flits belonging to these messages carry additional
information not essential for the switch cache processing. Write requests to the switch cache require
both the header flit for address information and the data flits to be written into the cache line.
Finally ignore requests need to be discarded since they do not require any switch cache processing.
An additional type of request that does not require processing is the marked read request. This
read request has the swc hit bit set in the header to inform switch caches that it has been served
at a previous switch cache. Having classified the types of flits entering the cache, the switch cache
processing can be broken into two basic operations.
The first operation performed by the flit processing unit is that of propagating the appropriate flits
to the switch cache. As mentioned earlier, the flits that need to enter the cache are read headers,
invalidation headers and all write flits. Thus, the processing unit masks out ignore flits, marked
read flits and the data flits of invalidation and read requests. This is done by reading the R1R0 bits
from the header vector and the swc hit bit. To utilize this header information for the subsequent
data flits of the message, the switch cache maintains a register that stores these bits.
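The masking step just described can be expressed as a small predicate over the header bits, with per-virtual-channel state so that data flits inherit the decision made for their header. The state layout and names below are assumptions, not the actual CAESAR logic.

#include <stdbool.h>

typedef enum { SC_READ, SC_WRITE, SC_INV, SC_IGNORE } sc_type_t;

typedef struct {
    sc_type_t saved_type;    /* R1R0 of the last header seen on this VC */
    bool      saved_marked;  /* swc_hit bit of that header              */
} vc_state_t;

/* Should this flit be forwarded to the request queue (RI/WR buffers)? */
bool flit_enters_cache(bool is_header, sc_type_t r1r0, bool swc_hit,
                       vc_state_t *vc)
{
    if (is_header) {                         /* remember bits for data flits */
        vc->saved_type   = r1r0;
        vc->saved_marked = swc_hit;
        if (r1r0 == SC_IGNORE) return false;
        if (r1r0 == SC_READ && swc_hit) return false;  /* marked read */
        return true;             /* unmarked read, invalidation, write header */
    }
    /* Data flits: only write data needs to enter the cache. */
    return vc->saved_type == SC_WRITE;
}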
Flits requiring cache processing are passed to the request queue one in every cycle. The request
queue is organized as two buffers, the RI buffer and the set of WR buffers shown in Figure 13.
The RI buffer holds the header flits of read and invalidation requests. The WR buffers store all
write flits and are organized as num_vc \Theta k/2 different buffers. Here multiple buffers are required
to associate data flits with the corresponding headers. When all data flits of a write request have
accumulated into a buffer, the request is ready to initiate the cache line fill operation.
The second operation to complete the processing of incoming flits is as follows. All unmarked read
header flits need to snoop the cache to gather hit/miss information. This information is needed
within the 4 cycles of switch delay or link transmission to be able to mark the header by setting the last
bit (swc hit). To perform this snooping operation on the cache tag, the read headers are also copied
to the snoop registers (shown in Figure 13). We require two snoop registers because a maximum of
two read requests can enter the cache in a single arbitration cycle.
Switch Cache Access: Figure 14 illustrates the design of the cache subsystem. The cache module
shown in the figure is that of a 2-way set associative SRAM cache. The cache operates at the same
frequency as the internal switch core. The set associative cache is organized using two sub-arrays
for tag and data. The cache output width is 64 bits, thus requiring 4 cycles of data transfer for
reading a line. The tag array is dual ported to allow two independent requests to
access the tag at the same time. We now describe the switch cache access operations and their
associated access delays. Requests to the switch cache can be broken into two types of requests:
snoop requests and regular requests.
Snoop Requests: Read requests are required to snoop the cache to determine hit or miss before
the outgoing flit is transmitted to the next switch. For the arbitration independent switch cache
organization (Figure 10a), it takes a minimum of four cycles for moving the flit from the switch
input to the output. Thus we need the snoop operation within the last cycle to mark the message
before link transmission. Similarly, for the arbitration dependent organization (Figure 10b), it takes
4 cycles to transmit a 64-bit header on a 16-bit output link after the header is loaded into the 64-bit
(4w) output register. From the message format in Figure 8, the phit containing the swc hit bit to
be marked is transmitted in the fourth cycle. Thus it is required that the cache access be completed
within a maximum of 3 cycles. From Figure 13, copying the first read to the snoop registers is
performed by the flit processing unit and is completed in one cycle. By dedicating one of the ports
of the tag array primarily for snoop requests, each snoop in the cache takes only an additional
cycle to complete. Since a maximum of 2 read headers can arrive to the switch cache in a single
arbitration cycle, we can complete the snoop operation in the cache within 3 cycles.
[Figure 14: Design of the CAESAR Cache Module: a dual-ported tag array and a data array, request and snoop address ports, hit/miss output, a reply unit with header and data buffers and queue-status generation, and blocking information sent to the input block.]
Note from Figure 14 that the snoop operation is done in parallel with the pending requests in the RI buffer
and the WR buffers. When the snoop operation completes, the hit/miss information is propagated
to the output transmitter to update the read header in the output register. If the snoop operation
results in a switch cache miss, the request is also dequeued from the RI buffer.
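A sketch of this snoop path follows: one tag port is reserved for snoops, at most two snoop registers are filled per arbitration cycle, and the hit/miss result is used both to mark the outgoing header and to drop misses from the RI buffer. The tag-probe interface and function names are assumptions.

#include <stdbool.h>

typedef struct { unsigned long addr; bool valid; } snoop_reg_t;

extern bool tag_probe_snoop_port(unsigned long addr);  /* hypothetical      */
extern void mark_outgoing_header(unsigned long addr);  /* sets swc_hit bit  */
extern void ri_buffer_drop(unsigned long addr);        /* hypothetical      */

/* Called once per switch cache cycle; serves at most one snoop register on
 * the dedicated tag port, so two pending snoops finish within the 3-cycle
 * budget (1 cycle to load the registers + 1 cycle per snoop). */
void snoop_step(snoop_reg_t reg[2])
{
    for (int i = 0; i < 2; i++) {
        if (!reg[i].valid) continue;
        if (tag_probe_snoop_port(reg[i].addr))
            mark_outgoing_header(reg[i].addr);
        else
            ri_buffer_drop(reg[i].addr);
        reg[i].valid = false;
        break;                    /* one snoop per cycle on this port */
    }
}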
Regular Requests: A regular request is a request chosen from either the RI buffer or the WR buffers.
Such a request is processed in a maximum of 4 cycles in the absence of contention. Requests from
the RI buffer are handled on a FCFS basis. This avoids any dependency violation between read
and invalidation requests in that buffer. However, we can have a candidate for cache operation from
the RI buffer as well as from one or more of the WR buffers. In the absence of address dependencies,
the requests from these buffers can progress in any order to the switch cache. When a dependency
exists between two requests, we need to make sure that cache state correctness is preserved. We
identify two types of dependencies between a request from the RI buffer and a request from the
WR buffer:
- An invalidation (from the RI buffer) to a cache line X and a write (from the WR buffer) to the
same cache line X . To preserve consistency, the simplest method is to discard the write to the cache
line, thus avoiding incorrectness in the cache state. Thus, when invalidations enter the switch cache,
write addresses of pending write requests in the WR buffer are compared and invalidated in parallel
with the cache line invalidation.
- A read (from the RI buffer) to a cache line X and a write (from the WR buffer) to a cache line Y
that map on to the same cache entry. If the write occurs first, then cache line X will be replaced.
In such an event, the read request cannot be served. Since such an occurrence is rare, the remedy is
to send the read request back to the home node destined to be satisfied as a typical remote memory
read request.
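The two dependency cases just described reduce to simple address comparisons made before a regular request is issued to the cache. The buffer layout and helper below are illustrative only.

#include <stdbool.h>

#define WR_BUFS 4
typedef struct { bool valid; unsigned long line_addr; } wr_buf_t;

/* Case 1: an invalidation entering the cache also squashes any pending
 * write to the same line in the WR buffers. */
void invalidate_vs_pending_writes(unsigned long inv_line, wr_buf_t wr[WR_BUFS])
{
    for (int i = 0; i < WR_BUFS; i++)
        if (wr[i].valid && wr[i].line_addr == inv_line)
            wr[i].valid = false;          /* discard the write */
}

/* Case 2: a read and a pending write to different lines that map to the same
 * cache set may conflict; the (rare) remedy is to let the read continue to
 * memory instead of being served by the switch cache. */
bool read_conflicts_with_write(unsigned long rd_line, unsigned long wr_line,
                               unsigned sets)
{
    return (rd_line % sets) == (wr_line % sets) && rd_line != wr_line;
}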
Switch Cache Reply Generation: While invalidations and writes to the cache do not generate
any replies from the switch cache, read requests need to be serviced by reading the cache line from
the cache and sending the reply message to the requesting processor. The read header contains all
the information required to send out the reply to the requester. The read header and cache line data
are directly fed into the reply unit shown in Figure 14. The reply unit gathers the header at the
beginning of the cache access and modifies the source/destination and request/reply information in
the header in parallel with the cache access. When the entire cache line has been read, the reply
packet is generated and sent to switch cache output block. The reply message from the switch
cache now acts as any other message entering a switch in the form of flits and gets arbitrated to
the appropriate output link and progresses using the backward path to the requesting processor.
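The reply unit only has to rewrite a few header fields while the data array is being read; a sketch of that header transformation is given below. The field layout and the choice of values are assumptions for illustration, since the paper only states that the source/destination and request/reply information are modified in parallel with the cache access.

#include <stdint.h>

typedef struct {
    uint16_t src, dst;        /* routing fields        */
    uint8_t  reqtype;         /* R1R0 bits             */
    uint8_t  is_reply;
    uint32_t addr_hi, addr_lo;
} header_t;

/* Build a read-reply header from the read-request header while the four
 * data transfers of the cache line are in progress. */
header_t make_switch_cache_reply(header_t req, uint16_t this_switch_id)
{
    header_t rep = req;
    rep.dst      = req.src;          /* send the reply back to the requester */
    rep.src      = this_switch_id;   /* assumed: reply originates here       */
    rep.is_reply = 1;
    rep.reqtype  = 3;                /* assumed: sc_ignore, no re-caching    */
    return rep;
}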
Switch Cache Feedback: Another design issue for the CAESAR switch is the selection of queue
sizes. In this section, we identify methods to preserve crossbar switch throughput by blocking only
those requests that violate correctness. As shown in Figure 13 and 14, finite size queues exist at
the input of the switch cache (RI buffer and WR buffer) and at the reply unit (virtual channel
queues in the switch cache output block). When any limited size buffer gets full, we have two
options for the processing of read/write requests. The first option is to block the requests until a
space is available in the buffer. The second option, probably the wiser one, is to allow the request
to continue on its path to memory. The performance of the switch cache will be dependent on the
chosen scheme only when buffer sizes are extremely limited. Finally, invalidate messages have to be
processed through the switch cache since they are required to maintain coherence. These messages
need to be blocked only when the RI buffer gets full. The modification required to the arbiter to
make this possible is quite simple. To implement the blocking of flits at the input, the switch cache
needs to inform the arbiter of the status of all its queues. At the end of each cycle, the switch
cache informs the crossbar about the status of its queues in the form of free space available in each
queue. The modification to the arbiter to perform the required blocking is minor. Depending on
the free space of each queue, appropriate requests (based on R1R0) will be blocked while others
will traverse through the switch in a normal fashion.
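The feedback to the arbiter thus reduces to a per-request-type predicate over the free space reported by the switch cache at the end of the previous cycle. The sketch below implements the second (wiser) option described above for reads and writes; the names and structure are assumptions.

#include <stdbool.h>

typedef struct { int ri_free; int wr_free; int reply_free; } qstatus_t;
typedef enum { SC_READ, SC_WRITE, SC_INV, SC_IGNORE } sc_type_t;

/* Decide, in the arbiter, whether a flit of the given type must be blocked
 * this cycle.  Reads and writes that cannot be queued simply continue on
 * their path to memory; only invalidations wait for RI buffer space. */
bool arbiter_blocks(sc_type_t t, const qstatus_t *q)
{
    switch (t) {
    case SC_INV:  return q->ri_free == 0;  /* must not be lost              */
    case SC_READ: return false;            /* miss the cache, go to memory  */
    case SC_WRITE:return false;            /* drop the fill if wr_free == 0 */
    default:      return false;
    }
}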
4.4 Design of a 8 \Theta 8 Crossbar Switch Cache
In the previous sections, we presented the design and implementation of a 4 \Theta 4 cache embedded
crossbar switch. Current commercial switches such as SGI Spider and Intel Cavallino have six
bidirectional inputs, while the IBM SP2 switch has 8 inputs and 8 outputs.
[Figure 15: Design of the interleaved CAESAR cache module: two cache banks with bank select, per-bank address/data pairs, way hit/miss logic, and a reply unit with queue-status generation.]
In this section, we
present extensions to the earlier design to incorporate a switch cache module in a 8 \Theta 8 switch. We
maintain the same base parameters for switch core frequency, core delay, link speed, link width and
flit size.
The main issue when expanding the switch is that the number of inputs to the switch cache module
doubles from 4 to 8. Thus, in each arbitration cycle, a maximum of four read requests can come
into the switch. They require snoop operation on the switch cache within 4 cycles of switch core
delay or link transmission depending on the switch cache organization shown in Figure 10. As
shown in Figure 13, it takes one cycle to move the flits to the snoop registers. Thus we need to
complete the snoop operation for 4 requests within 2 cycles and mark the header flit in the last
cycle depending on the snoop result.
In order to perform four snoops in two cycles, we propose to use a multiported CAESAR cache
module. Multiporting can be implemented either by duplicating the caches or interleaving it into
two independent banks. Since duplicating the cache consumes tremendous amount of on-chip area,
we propose to use a 2-way interleaved CAESAR (CAESAR as shown in Figure 15.
Interleaving splits the cache organization into two independent banks. Current processors such as
MIPS R10000 [26] use even and odd addressed banked caches. However, the problem remains that
four even addressed or four odd addressed requests will still require four cycles for snooping due
to bank conflicts. We propose to interleave the banks based on the destination memory by using
the outgoing link ids. In a 8 \Theta 8 switch there are four outgoing links that transmit flits from the
switch towards the destination memory and vice-versa. Each cache bank will serve requests flowing
through two of these links, thus partitioning the requests based on the destination memory. In an
arbitration-independent organization (Figure 10a), it is possible that four incoming read requests
are directed to the same memory module and result in bank conflicts. However, in the arbitration-
dependent organization (Figure 10b), the conflict gets resolved during the arbitration phase. This
guarantees that the arbitrated flits flow through different links. For 8 \Theta 8 switches, it would be
more advantageous to use an arbitration dependent organization, thus assuring a maximum of 2
requests per bank in each arbitration cycle. As a result, the snoop operation of four requests can
be completed in the required two cycles. Finally, note that only a few bits from the routing tag are
needed to identify the bank in the cache.
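Bank selection in this interleaved organization needs only a bit of the routing tag identifying the outgoing (memory-side) link. A sketch is shown below; which routing-tag bits encode the link id is implementation specific, so the link id is assumed to be already extracted.

/* Two-way interleaving by destination memory link: each bank serves the
 * requests flowing through two of the four memory-side links of the
 * 8 x 8 switch. */
static inline int caesar_bank(unsigned outgoing_link_id /* 0..3 */)
{
    return outgoing_link_id & 1;   /* links {0,2} -> bank 0, {1,3} -> bank 1 */
}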
Such an interleaved organization changes the aspect ratio of the cache [25], and may affect the cycle
time of the cache. Wilson et al. [24] showed that the increase in cycle time, measured in fan-out-of-four
(FO4) delays, of banked or interleaved caches over single ported caches was minimal.
The 2-way interleaved implementation also doubles the cache throughput. Since two requests can
simultaneously access the switch cache, the reply unit needs to provide twice the buffer space for
storing the data from the cache. Similarly the header flit of the two read requests also need to be
stored. As shown in Figure 15, the buffers are connected to outputs from different banks to gather
the cache line data.
Performance Evaluation
In this section, we present a detailed performance evaluation of the switch cache multiprocessor
based on execution-driven simulation.
5.1 Simulation Methodology
To evaluate the performance impact of switch caches on the application performance of CC-NUMA
multiprocessors, we use a modified version of the Rice Simulator for ILP Multiprocessors (RSIM)
[19]. RSIM is an execution driven simulator for shared memory multiprocessors with accurate
models of current processors that exploit instruction-level parallelism. In this section, we present
the various system configurations and the corresponding modifications to RSIM for conducting
simulation runs.
The base system configuration consists of 16 nodes. Each node consists of a 200MHz processor
Processor: Speed 200MHz, Issue 4-way.  Memory: Access time 40, Interleaving 4.
L1 Cache: 16KB, line size 32 bytes, set size 2, access time 1.
L2 Cache: 128KB, line size 32 bytes, set size 4, access time 8.
Network: Switch Size 8x8, Core delay 4, Core Freq 200MHz, Link width 16 bits, Xfer Freq 200MHz, Flit length 8 bytes, Virtual Chs. 2, Buf. Length 4 flits.
Switch/Network Caches: Switch Cache 128 bytes - 8KB, Network Cache 4KB.
Application Workload: FWA 128x128, GE 128x128, GS 96x128, MM 128x128.
Table 1: Simulation parameters (multiprocessor system with 16 processors).
capable of issuing 4 instructions per cycle, a 16KB L1 cache, a 128KB L2 cache, a portion of the
local memory, directory storage and a local bus interconnecting these components. The L1 cache
is 2-way set associative and has an access time of a single cycle. The L2 cache is 4-way set associative
and has an access time of 8 cycles. The raw memory access time is 40 cycles, but it takes more
than 50 cycles to submit the request to the memory subsystem and read the data over the memory
bus. The system employs the full-map three-state directory protocol [6] and the MSI cache protocol
to maintain cache coherence. The system uses a release consistency model. We modified RSIM to
employ a wormhole routed bidirectional MIN using 8 \Theta 8 switches organized in 2 stages as shown
earlier in Figure 6. Virtual channels were also added to the switching elements to simulate the
behavior of commercial switches like Cavallino and Spider. Each input link to the switch is provided
with 2 virtual channel buffers capable of storing a maximum of 4 flits from a single message. The
crossbar switch operation is similar to the description in Section 4.1. A detailed list of simulation
parameters is also shown in Table 1.
[Figure 16: Percentage Reduction in Memory Reads.]
To evaluate switch caches, we further modified the simulator to incorporate switch caches into each
switching element in the IN. The switch cache system improves on the base system in the following
respects. Each switching element of the bidirectional MIN employs a variable size cache that models
the functionality of the CAESAR switch cache presented in Section 4. Several parameters such as
cache size and set associativity are varied for evaluating the design space of the switch cache.
We have selected some numerical applications to investigate the potential performance benefits
of the switch cache interconnect. These applications are Floyd-Warshall's all-pair-shortest-path
algorithm (FWA), Gaussian elimination (GE), QR factorization using the Gram-Schmidt Algorithm (GS),
the multiplication of 2D matrices (MM), successive over-relaxation of a grid (SOR), and the
six-step 1D fast fourier transform (FFT) from SPLASH [20]. The input data sizes are shown in
Table
1 and the sharing characteristics were discussed in Section 2.1.
5.2 Base Simulation Results
In this subsection, we present and analyze the results obtained through extensive simulation runs to
compare three systems: the base system (Base), network cache (NC) and switch cache (SC). The
Base system does not employ any caching technique beyond the L1 and L2 caches. We simulate
a system with NC by enabling 4KB switch caches in all the switching elements of stage 0 in the
MIN. Note that stage 0 is the stage close to the processor, while stage 1 is the stage close to the
remote memory as shown in Figure 6. The SC system employs switch caches in all the switching
elements of the MIN.
The main purpose of switch caches in the interconnect is to serve read requests as they traverse
to memory. This enhances the performance by reducing the number of read misses served at the
remote memory. Figure 16 presents the reduction in the number of read misses to memory by
Appl: Hit Distribution (St0 %, St1 %); Sharing (St0, St1)
GS: 69.02, 30.98; 1.94, 2.37
GE: 59.55, 40.45; 1.58, 2.66
Table 2: Distribution of switch cache accesses
employing network caches (NC) and switch caches (SC) over the base system (Base). In order to
perform a fair comparison, here we compare a SC system with 2KB switch caches at both stages
(overall 4KB cache space) to a NC system with 4KB network caches. Figure 16 shows that network
caches reduce remote read misses by 6-20% for all the applications, except FFT. The multiple layers
of switch caches are capable of reducing the number of memory read requests by up to 45% for
FWA, GS and GE applications.
Table
2 shows the distribution of switch cache hits across the two stages (St0 and St1) of the
network. From the table, we note that a high percentage of requests get satisfied in the switch
caches present at the lowest stage in the interconnect. Note however, that for three of the six
applications, roughly 30-40% of the requests are switch cache hits in the stage close to the memory
(St1). It is also interesting to note the number of requests satisfied by storing each block in the
switch cache. Table 2 presents this data as sharing, which is defined as the number of different
processor requests served after a block is encached in the switch cache. We find that this sharing
degree ranges from 1.0 to 2.7 across all applications. For applications with high overall read sharing
degrees (FWA, GS and GE), we find that the degree of sharing is approximately 1.7 in the stage
closer to the processor. With only 4 of 16 processors connected to each switch, many read requests
do not find the block in the first stage but get satisfied in the subsequent stage. Thus we find a
higher (approximately 2.5) read sharing degree for the stage closer to the remote memory for these
applications. The MM application has an overall sharing degree of approximately 4 (see Figure 2).
The data is typically shared by four processors physically connected to the same switch in the first
stage of the network. Thus most of the requests (88.2%) get satisfied in the first stage and attain
a read sharing degree of 1.8. Finally, the SOR and FFT applications have very few read shared
requests, most of which are satisfied in the first stage of the network.
[Figure 17: Impact on average read latency.]
[Figure 18: Application execution time improvements.]
Figure 17 shows the improvement in average memory access latency for reads for each application
by using switch caching in the interconnect. For each application, the figure consists of three bars
corresponding to the Base, NC and SC systems. The average read latency comprises of processor
cache delay, bus delay, network data transfer delay, memory service time and queueing delays at
the network and memory module. As shown in the figure, by employing network caches, we can
improve the average read latency by at most 15% for most of the applications. With switch caches
in multiple stages of the interconnect, we find that the average read latency can be improved by
as high as 35% for FWA, GS and GE applications. The read latency reduces by about 7% for the
MATMUL application. Again, SOR and FFT are unaffected by network caches or switch caches
due to negligible read sharing.
The ultimate parameter for performance is execution time. Figure 18 shows the execution time
improvement. Each bar in the figure is divided into computation and synchronization time, read
stall time and write stall time. In a release consistent system, we find that the write stall time is
negligible. However, the read stall time in the base system comprises as high as 50% of the overall execution time.
[Figure 19: Impact of cache size on the number of memory reads.]
[Figure 20: Impact of cache size on execution time.]
Using network caches, we find that the read stall time reduces by a maximum of 20%
(for the FWA, GS and GE applications) and thus translates to an improvement in execution time
by up to 10%. Using switch caches over multiple stages in the interconnect, we observe execution
time improvements as high as 20% in the same three applications. The execution time of the MM
application is comparable to that with network caches. SOR and FFT are unaffected by switch
caches.
5.3 Sensitivity Studies
Sensitivity to Cache Size
In order to determine the effect of cache size on performance, we varied the switch cache size from
a mere 128 bytes to a large 8KB. Figures 19 & 20 show the impact of switch cache size on the
number of memory reads and the overall execution time. As the cache size is increased, we find that
a switch cache size of 512 bytes provides the maximum performance improvement (up to 45% reads and 20% execution time) for three of the six applications.
[Figure 21: Effect of cache size on the eviction rate (replacements versus invalidations).]
[Figure 22: Effect of cache size on switch cache hits across stages.]
The MM and SOR applications require
larger caches for additional improvement. The MM application attains a performance improvement
of 7% in execution time at a switch cache size of 2KB. Increasing the cache size further has negligible
impact on performance. For SOR, we found some reduction in the number of memory reads, contrary
to the negligible amount of sharing in the application (shown in Figure 2). Upon investigation, we
found that the switch cache hits come from replacements in the L2 caches. In other words, blocks
in the switch cache are accessed highly by the same processor whose initial request entered the
block into the switch cache. The switch cache acts as a victim cache for this application. The use
of switch caches does not affect the performance of the FFT application.
Figure 21 investigates the impact of cache size on the eviction rate and type in the switch cache for
the FWA application. The x-axis in the figure represents the size of the cache in bytes. A block in
the switch cache can be evicted either due to replacement or due to invalidation. Each bar in the
figure is divided into two portions to represent the amount of replacements versus invalidations in
the switch cache. The figures are normalized to the number of evictions for a system with 128 byte switch caches.
[Figure 23: Effect of line size on the number of memory reads.]
The first observation from the figure is the reduction in the number of evictions as
the cache size increases. Note that the number of evictions remains constant beyond a cache size
of 1KB. With small caches, we also observe that roughly 10-20% of the blocks in the switch cache
are invalidated while all others are replaced. In other words, for most blocks, invalidations are not
processed through the switch cache since they have already been evicted through replacements due
to small capacity. As the cache size increases, we find the fraction of invalidations increase, since
fewer replacements occur in larger caches. For the 8KB switch cache, we find that roughly 50% of
the blocks are invalidated from the cache.
We next look at the impact of cache size on the amount of sharing across stages. Figure 22 shows
the amount of hits obtained in each stage of the network for the FWA application. Each bar is
divided into two segments, representing each stage of switch caches, denoted by the stage number.
Note that Stage0 is the stage closest to the processor interface. From the figure, it is interesting to
note that for small caches, an equal amount of hits are obtained from each stage in the network.
On the other hand, as the cache size increases, we find that a higher fraction of the hits are due to
switch caches closer to the processor interface (60-70% from St0). This is beneficial, because fewer
hops are required in the network to access the data, thereby reducing the read latency considerably.
Sensitivity to Cache Line Size
In the earlier sections, we analyzed data with 32-byte cache lines. In this section, we vary the cache
line size to study its impact on switch cache performance. Figures 23 & 24 show the impact of
a larger cache line (64 bytes) on the switch cache performance for three applications (FWA, GS
and GE). We vary the cache size from 256 bytes to 16KB and compare the performance to the
base system with 32-byte cache lines and 64-byte cache lines. Note that the results are normalized
to the base system with 64 byte cache lines.
[Figure 24: Effect of line size on execution time.]
[Figure 25: Effect of associativity on switch cache hits.]
We found that the number of memory reads were
reduced by 37 to 45% when we increase the cache line size in the base system. However, the use
of switch caches still has significant impact on application performance. With 1KB switch caches,
we can reduce the number of read requests served at the remote memory by as high as 45% and
the execution time by as high as 20%. In summary, the switch cache performance does not depend
highly on cache line size for highly read shared applications with good spatial locality.
Sensitivity to Set Associativity
In this section, we study the impact of cache set associativity on application performance. Figure 25
shows the percentage of switch cache hits as the cache size and associativity are varied. We find that
set associativity has no impact on switch cache performance. We believe that frequently accessed
blocks need to reside in the switch cache only for a short amount of time, as we observed earlier
from our trace analysis. A higher degree of associativity tries to prolong the residence time by
reducing cache conflicts. Since we do not require a higher residence time in the switch cache, the
performance is neither improved nor hindered.
[Figure 26: Effect of application size on execution time.]
Sensitivity to Application Size
Another concern for the performance of switch caches is the relatively small data set that we have
used for faster simulation. In order to verify that the switch cache performance does not change
drastically for larger data sets, we used the FWA application and increased the number of vertices
from 128 to 192 and 256. Note that the data set size increases by a square of the number of vertices.
The base system execution time increases by a factor of 2.3 and 4.6 respectively. With 512 byte
switch caches, the execution time reduces by 17% for 128 vertices, 13% for 192 vertices and 10%
for 256 vertices. In summary, we believe that switch caches require small cache capacity and can
provide sufficient performance improvements for large applications with frequently accessed read
shared data.
6 Conclusions
In this paper, we presented a novel hardware caching technique, called switch cache, to improve
the remote memory access performance of CC-NUMA multiprocessors. A detailed trace analysis of
several applications showed that accesses to shared blocks have a great deal of temporal locality.
Thus remote memory access performance can be greatly improved by caching shared data in a global
cache. To make the global cache accessible to all the processors in the system, the interconnect
seems to be the best location since it has the ability to monitor all inter-node transactions in the
system in an efficient, yet distributed fashion.
By incorporating small caches within each switching element of the MIN, shared data was captured
as they flowed from the memory to the processor. In designing such a switch caching framework,
several issues were dealt with. The main hindrance to global caching techniques is that of maintaining
cache coherence. We organized the caching technique in a hierarchical fashion by utilizing
the inherent tree structure of the BMIN. By doing so, caches were kept coherent in a transparent
fashion by the regular processor invalidations sent by the home node and other such control infor-
mation. To maintain full-map directory information, read requests that hit in the switch cache were
marked and allowed to continue on their path to the memory for the sole purpose of updating the
directory. The caching technique was also kept non-inclusive and thus devoid of the size problem
in a multi-level inclusion property.
The most important issue while designing switch caches was that of incorporating a cache within
typical crossbar switches (such as SPIDER and CAVALLINO) in a manner such that requests are
not delayed at the switching elements. A detailed design of a cache embedded switch architecture
(CAESAR) was presented and analyzed. The size and organization of the cache depends heavily
on the switch transmission latency. We presented a dual-ported 2-way set associative SRAM cache
organization for a 4 \Theta 4 crossbar switch cache. We also proposed a link-based interleaved cache
organization to scale the size of the CAESAR module for 8 \Theta 8 crossbar switches. Our simulation
results indicate that a small cache of size 1 KB is sufficient to provide up to 45% reduction in
memory service and thus a 20% improvement in execution time for some applications. This relates
to the fact that applications have a lot of temporal locality in their shared accesses. Current switches
such as SPIDER maintain large buffers that are under-utilized in shared memory multiprocessors.
It seems that by organizing these buffers as a switch cache, the improvement in performance can
be realized.
In this paper, we studied the use of switch caches to store recently accessed data in the shared state
to be re-used by subsequent requests from any processor in the system. In addition to these requests,
applications also have a significant amount of accesses to blocks in the dirty state. To improve the
performance of such requests, directories have to be embedded within the switching elements. By
providing shared data through switch caches and ownership information through switch directories,
the performance of the CC-NUMA multiprocessor can be significantly improved. Latency hiding
techniques such as data prefetching or forwarding can also utilize the switch cache and reduce
the risk of processor cache pollution. The use of switch caches along with the above latency
hiding techniques can further improve the application performance on CC-NUMA multiprocessors
tremendously.
--R
"An Overview of the HP/Convex Exemplar Hardware,"
"Butterfly Parallel Processor Overview, version 1,"
"Performance of the Multistage Bus Networks for a Distributed Shared Memory Multiprocessor,"
"The Impact of Switch Design on the Application Performance of Shared Memory Multiprocessors,"
"Cavallino: The Teraflops Router and NIC,"
"A New Solution to Coherence Problems in Multicache Sys- tems,"
"Scalable Pipelined Interconnect for Distributed Endpoint Routing: The SGI SPIDER Chip,"
"Tutorial on Recent Trends in Processor Design: Reclimbing the Complexity Curve,"
"High Frequency Clock Distribution,"
"The SGI Origin: A ccNUMA Highly Scalable Server,"
"The Network Architecture of the Connection Machine CM-5,"
"The Stanford DASH Multiprocessor,"
"STiNG: A CC-NUMA Computer System for the Commercial Mar- ketplace.,"
"The Effectiveness of SRAM Network Caches on Clustered DSMs,"
"A Performance Model for Finite-Buffered Multistage Interconnection Networks,"
"An Area Model for On-Chip Memories and its Applica- tion,"
"Design and Analysis of Cache Coherent Multistage Interconnection Networks,"
"The Impact of Shared-Cache Clustering in Small-Scale Shared-Memory Mul- tiprocessors,"
"RSIM Reference Manual. Version 1.0,"
"SPLASH: Stanford Parallel Applications for Shared- Memory,"
"The SP2 High Performance Switch,"
"The Performance of the Cedar Multistage Switching Network,"
"Hierarchical cache/bus architecture for shared memory multiprocessors,"
"Designing High Bandwidth On-Chip Caches,"
"An Enhanced Access and Cycle Time Model for On-Chip Caches,"
"The MIPS R10000 Superscalar Microprocessor,"
"Reducing Remote Conflict Misses: NUMA with Remote Cache versus COMA,"
--TR
--CTR
Takashi Midorikawa , Daisuke Shiraishi , Masayoshi Shigeno , Yasuki Tanabe , Toshihiro Hanawa , Hideharu Amano, The performance of SNAIL-2 (a SSS-MIN connected multiprocessor with cache coherent mechanism), Parallel Computing, v.31 n.3+4, p.352-370, March/April 2005 | shared memory multiprocessors;wormhole routing;crossbar switches;scalable interconnects;cache architectures;execution-driven simulation |
354866 | Properties of Rescheduling Size Invariance for Dynamic Rescheduling-Based VLIW Cross-Generation Compatibility. | AbstractThe object-code compatibility problem in VLIW architectures stems from their statically scheduled nature. Dynamic rescheduling (DR) [1] is a technique to solve the compatibility problem in VLIWs. DR reschedules program code pages at first-time page faults, i.e., when the code pages are accessed for the first time during execution. Treating a page of code as the unit of rescheduling makes it susceptible to the hazards of changes in the page size during the process of rescheduling. This paper shows that the changes in the page size are only due to insertion and/or deletion of NOPs in the code. Further, it presents an ISA encoding, called list encoding, which does not require explicit encoding of the NOPs in the code. Algorithms to perform rescheduling on acyclic code and cyclic code are presented, followed by the discussion of the property of rescheduling-size invariance (RSI) satisfied by list encoding. | Introduction
The object-code compatibility problem in VLIW architectures stems from their statically
scheduled nature. The compiler for a VLIW machine schedules code for a specific machine
model (or a machine generation), for precise, cycle-by-cycle execution at run-time. The
machine model assumptions for a given code schedule are unique, and so are its semantics.
Thus, code scheduled for one VLIW is not guaranteed to execute correctly on a different
VLIW model. This is a characteristic of VLIWs often cited as an impediment to VLIWs
becoming a general-purpose computing paradigm [2]. An example to illustrate this is shown
in Figures 1, 2, and 3. Figure 1 shows an example VLIW schedule for a machine model which
has two IALUs, one Load unit, one Multiply unit, and one Store unit. Execution latencies
of these units are as indicated. Let this machine generation be known as generation X.
[Figure 1: Scheduled code for VLIW Generation X (two 1-cycle IALUs, a 3-cycle Multiply unit, a Load unit, and a Store unit).]
Figure 2 shows the next-generation machine (generation X+1), where the Multiply and Load
latencies have changed to 4 and 3 cycles respectively. The generation X schedule will not
execute correctly on this machine due to the flow dependence between operations B and C,
between D and H, and between E and F. Figure 3 shows the schedule for a generation X+n
machine which includes an additional multiplier. The latencies of all FUs remain as shown
in Figure 1. Code scheduled for this new machine will not execute correctly on the older
machines because the code has been moved in order to take advantage of the additional
multiplier. (In particular, E and F have been moved.) There is no trivial way to adapt
this schedule to the older machines. This is the case of downward incompatibility between
generations. In this situation, if different generations of machines share binaries (e.g., via a
file server), compatibility requires either a mechanism to adjust the schedule or a different
set of binaries for each generation. One way to avoid the compatibility problem would be
to maintain binary executables customized to run on each new VLIW generation. But this
would not only violate the copy-protection rules, but would also increase the disk-space usage.
Alternatively, program executables may be translated or rescheduled for the target machine
model to achieve compatibility. This can be done in hardware or in software. The hardware
approach adds superscalar-style run-time scheduling hardware to a VLIW [3], [4], [5], [6], [7].
The principal disadvantage of this approach is that it adds to the complexity of the hardware
and may potentially stretch cycle time of the machine if the rescheduling hardware falls in
the critical path. The software approach is to perform off-line compilation and scheduling
of the program from the source code or from decorated object modules (.o files). Code
rescheduled in this manner yields better relative speedups, but the technique is cumbersome
to use due to its off-line nature. It could also imply violation of copy protection. Dynamic
Rescheduling (DR) [1] is a third alternative to solve the compatibility problem.
[Figure 2: Generation incompatibility due to changes in functional unit latencies (shown by arrows). The old latencies are shown in parentheses. Operations now produce incorrect results because of the new latencies for operations B, D, and E.]
[Figure 3: Generation X+n schedule: downward incompatibility due to change in the VLIW machine organization. No trivial way to translate the new schedule to the older generation.]
Under dynamic rescheduling, a program binary compiled for a given VLIW generation machine model is allowed to run on a different VLIW generation. At each first-time page fault (a page-fault that occurs when a code page is accessed for the first time during program execution), the page fault handler invokes a module called the rescheduler, to reschedule the
page for that host. Rescheduled code pages are cached in a special area of the file system
for future use to avoid repeated translations.
Since the dynamic rescheduling technique translates the code on a per-page basis, it
is susceptible to the hazard of changes in the page-size due to the process of reschedul-
ing. If the changes in the machine model across the generation warrant addition and/or
deletion of NOPs to/from the page, it would lead to page overflow or an underflow. This
paper discusses a technique called list encoding for the ISA and proves the property of
rescheduling-size invariance (RSI), which guarantees that there is no code-size change due
to dynamic rescheduling. The organization of this paper is as follows. Section 2 presents
the terminology used in this paper. Section 3 briefly describes dynamic rescheduling and
demonstrates the problem of code-size change with an example. Section 4 introduces the
concept of rescheduling-size invariance (RSI), presents the list encoding, and then proves the
RSI properties of list encoding. Section 5 presents concluding remarks and directions for
future research.
2 Terminology
The terminology used in this paper is originally from Rau [3] [8], and is introduced here for
the discussion that follows. Each wide instruction-word, or MultiOp, in a VLIW schedule,
consists of several operations, or Ops. All Ops in a MultiOp are issued in the same cycle.
VLIW programs are latency-cognizant, meaning that they are scheduled with the knowledge
of functional unit latencies. An architecture which runs latency-cognizant programs is
termed a Non-Unit Assumed Latency (NUAL) architecture. A Unit Assumed Latency (UAL)
architecture assumes unit latencies for all functional units. Most superscalar architectures
are UAL, whereas most VLIWs are NUAL. The machine models discussed in this paper are
NUAL.
There are two scheduling models for latency-cognizant programs: the Equals model and
the Less-Than-or-Equals (LTE) model [9]. Under the equals model, schedules are built such
that each operation takes exactly as long as its specified execution latency. In contrast,
under the LTE model an operation may take less than or equal to its specified latency.
In general, the equals model produces slightly shorter schedules than the LTE model; this
is mainly due to register re-use possible in the equals model. However, the LTE model
simplifies the implementation of precise interrupts and provides binary compatibility when
latencies are reduced. Both the scheduler (in the back-end of the compiler) and the dynamic
rescheduler (in the page-fault handler) presented in this paper follow the LTE scheduling
model.
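Under the LTE model, a latency-cognizant schedule remains correct on a host whose latencies are less than or equal to those assumed when the schedule was built; only latency increases can force flow-dependent Ops apart. The sketch below checks a single flow-dependence edge under this rule. It is a simplification that ignores other constraints such as anti/output dependences and resource conflicts, and the names are our own.

#include <stdbool.h>

/* One flow-dependence edge: the consumer was scheduled 'separation' cycles
 * after the producer, which was assumed to take 'assumed_latency' cycles
 * (so separation >= assumed_latency in the original schedule). */
typedef struct { int separation; int assumed_latency; } dep_edge_t;

/* Under the LTE model the edge stays correct on the host machine as long as
 * the host latency does not exceed the scheduled separation.  For example,
 * an Op scheduled 3 cycles after a multiply remains correct if the multiply
 * latency drops to 2 cycles, but not if it grows to 4. */
bool edge_ok_on_host(dep_edge_t e, int host_latency)
{
    return host_latency <= e.separation;
}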
For the purposes of this paper, it is assumed that all program codes can be classified
into two broad categories: acyclic code and cyclic code. Cyclic code consists of short inner
loops in the program which typically are amenable to software pipelining [10]. On the other
hand, acyclic code contains a relatively large number of conditional branches, and typically
has large loop bodies. This makes the acyclic code un-amenable to software pipelining.
Instead, the body of the loop is treated as a piece of acyclic code, surrounded by the loop
control Ops. Examples of cyclic code are the inner loops like counted DO-loops found
in scientific code. Examples of acyclic code are non-numeric programs, and interactive
programs. This distinction between the types of code is made because the scheduling and
rescheduling algorithms for cyclic and acyclic code differ considerably, because of which the
dynamic rescheduling technique treats each separately.
It is also assumed that the program code is structured in the form of superblocks [11]
or the hyperblocks [12]. Hyperblocks are constructed by if-conversion of code using predication
[13], [14]. Support for predicated execution of Ops is also assumed. Both superblocks
and hyperblocks have a single entry point into the block (at the beginning of the block) and
may have multiple side-exits. This property is useful in bypassing the problems introduced
by speculative code motion in DR, discussion of which can be found elsewhere (see [15]).
3 Overview
[Figure 4: Dynamic Rescheduling: Sequence of events (first-time page fault, context switch and page fetch, reschedule, write to the buffer cache, resume execution).]
The technique of dynamic rescheduling performs translation of code pages at first-time page
faults and stores the translated pages for subsequent use. Figure 4 shows the sequence of
events that take place in Dynamic Rescheduling. Event 1 indicates a first-time page-fault.
On a page-fault, the OS switches context and fetches the requested page from the next level
of the memory hierarchy; this is shown as Events 2 & 3, respectively. Events 1, 2, 3
are standard in the case of every page fault encountered by the OS. What is different in the
case of DR is the invocation of a module called the rescheduler at each first-time page fault.
The rescheduler operates on the newly fetched page to reschedule it to execute correctly on
the host machine. This is shown as event 4. Event 5 shows that the rescheduled page is
written to an area of the file system for future use, and in event 6, the execution resumes.
To facilitate the detection of a VLIW generation mismatch at a first-time page fault,
each program binary holds a generation-id in its header. The machine model for which the
binary was originally scheduled and the boundaries to identify the pieces of cyclic code in
the program are also stored in the program binary. This information is made available to
the rescheduler while it performs rescheduling. A page of the rescheduled code remains in
the main memory until it is displaced (as any other page in the memory), at which time
it is written to a special area of the file system called text swap. All subsequent accesses
to the page during the lifetime of the program are fulfilled from the text swap. Text swap
may be allocated on a per-executable basis at compile time, or be allocated by the OS as
a system-wide global area shared by all the active processes. The overhead of rescheduling
can be quantitatively expressed in terms of the following factors: (1) the time spent at the
first-time page-faults to reschedule the page, (2) the time spent in writing the rescheduled
pages to the text swap area, and, (3) the amount of disk space used to store the translated
pages. Further discussion of the overhead introduced by DR and an investigation of trade-offs
involved in the design of the text-swap used to reduce the overhead are beyond the scope
of this paper (see [16] for more details).
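To make the sequence of events above concrete, the following C++ sketch shows the shape of
such a first-time-fault handler. The types and helper names (CodePage, reschedule_page, the
text_swap map) are illustrative assumptions and not the actual OS interface used by DR.

#include <cstdint>
#include <map>
#include <vector>

using CodePage = std::vector<std::uint8_t>;   // raw bytes of one page of code

struct BinaryHeader {
    int generation_id;   // VLIW generation the binary was scheduled for
    // the machine model and cyclic-region boundaries would also live here
};

// Stand-in for the rescheduler of Section 4; identity transform here.
CodePage reschedule_page(const CodePage& page, const BinaryHeader&, int /*host_gen*/) {
    return page;
}

std::map<std::uint64_t, CodePage> text_swap;  // rescheduled pages kept for reuse

CodePage handle_first_time_code_fault(const BinaryHeader& hdr, int host_gen,
                                      std::uint64_t page_id, const CodePage& page) {
    // Events 1-3: fault taken, context switched, page fetched (done by the caller).
    if (hdr.generation_id == host_gen)
        return page;                                  // no mismatch: run as-is
    auto hit = text_swap.find(page_id);               // translated on an earlier fault?
    if (hit != text_swap.end())
        return hit->second;
    CodePage rescheduled = reschedule_page(page, hdr, host_gen);   // Event 4
    text_swap[page_id] = rescheduled;                 // Event 5: write to the text swap
    return rescheduled;                               // Event 6: execution resumes
}

In this simplified model, later faults on the same page are satisfied from text_swap rather
than by re-invoking the rescheduler, which is the source of the low amortized overhead
claimed above.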
3.1 Insertion and deletion of NOPs
When the compiler schedules code for a VLIW, independent Ops which can start execution
in the same machine cycle are grouped together to form a single MultiOp; each Op in a
MultiOp is bound to execute on a specific functional unit. Often, however, the compiler
cannot find enough Ops to keep all the FUs busy in a given cycle. These empty slots in a
MultiOp are filled with NOPs. In some machine cycles the compiler cannot schedule even a
single Op for execution; NOPs are scheduled for all FUs in such a cycle, and the resulting
instruction is called an empty MultiOp.
Logically, the rescheduler in DR can be thought of as performing the following steps
to generate the new code 1 , no matter what the type of code. First, it breaks down each
MultiOp into individual Ops, to create an ordered set of Ops. Second, it discards the NOPs
from this set. The ordered set of Ops thus obtained is a UAL schedule. In the third step,
depending upon the resource constraints and the data dependence constraints, it re-arranges
the Ops in the UAL schedule to create the new, NUAL schedule. In the fourth and last
step, new NOPs and empty MultiOps are inserted as required to preserve the semantics of
the computation. Note that the number of NOPs and empty MultiOps that are newly
inserted may differ from the number in the old code, so the size of the code may change due
to rescheduling. It is important to note that any change in code size is due solely to the
NOPs and the empty MultiOps. An
example of changes in code-size is illustrated in Figure 5. In the left portion of the Figure,
the old code is shown. Assume that the execution latency of Ops A, D, E, F, G, and H is one
cycle each, that of Op B is three cycles, and that of the LOAD Op C is two cycles. Further,
Ops E and F depend on the result of Op C and hence must not begin execution before Op C
finishes execution. In the newer generation of the architecture, shown on the right, one IALU
is removed from the machine, while the latency of the LOADs is increased by one (to three
cycles). When the
old code is executed on the newer generation, DR invokes the rescheduler, which generates
the new code as shown. To account for the new, longer latency of the LOAD unit, it inserts
1 The terms "new code" and "old code" do not necessarily refer to the machine generation for which
the code was scheduled. "Old code" as used here means any code input to the rescheduler, and "new
code" means the code output by the rescheduler.
Figure 5: Example illustrating the insertion/deletion of NOPs and empty MultiOps due to
Dynamic Rescheduling. The old schedule is shown on the left; the code rescheduled for the
newer generation (336 bytes in total, with 10 extra NOPs) is shown on the right.
an empty MultiOp in the third cycle. Also, the old MultiOp consisting of operations E and
F is broken into two consecutive MultiOps due to the reduction in the number of IALUs.
Observe that the new code is bigger than the old code. Assuming all the Ops are 64 bits
each, the net increase in the size of the code is 80 bytes, corresponding to the 10 extra NOPs
inserted during rescheduling.
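The four logical steps of the rescheduler described above can be summarized in the following
C++ sketch; the Op and MultiOp types are simplified, and schedule_for_new_machine stands
in for the full constraint-driven algorithm given later in Section 4.2.

#include <vector>

struct Op      { bool is_nop; int fu_type; /* opcode, operands, latency, ... */ };
struct MultiOp { std::vector<Op> slots; };            // one issue group (one cycle)
using Schedule = std::vector<MultiOp>;

// Placeholder for steps 3-4: a real implementation applies the dependence and
// resource checks of Algorithm Reschedule_Acyclic_Code (Section 4.2); here each
// Op is naively issued in its own cycle just to keep the sketch self-contained.
Schedule schedule_for_new_machine(const std::vector<Op>& ual_ops) {
    Schedule s;
    for (const Op& o : ual_ops) s.push_back(MultiOp{{o}});
    return s;
}

Schedule reschedule(const Schedule& old_code) {
    // Step 1: break each MultiOp into individual Ops, preserving their order.
    std::vector<Op> ops;
    for (const MultiOp& m : old_code)
        for (const Op& o : m.slots)
            ops.push_back(o);
    // Step 2: discard the NOPs; the ordered set that remains is a UAL schedule.
    std::vector<Op> ual;
    for (const Op& o : ops)
        if (!o.is_nop) ual.push_back(o);
    // Steps 3-4: re-arrange the Ops under the new machine's resource and
    // dependence constraints, re-inserting NOPs / empty MultiOps as needed.
    return schedule_for_new_machine(ual);
}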
The page size with which a computer system operates is usually dictated by the hardware
or the OS or both. It is non-trivial for the OS to handle any changes in the page sizes at run-
time. Previous work in this area by Talluri and Hill attempts to support multiple page
sizes, where each page size is an integral multiple of a base page size [17], [18], [19]. Enhanced
VM hardware (in particular, the Translation-Lookaside Buffer, TLB) and an enhanced VM
management policy must be available in the system to support the proposed technique. It is
possible that, with the help of this extra hardware, multiple code page sizes could be used to
handle variations in page size due to DR, but this would lead to a multitude of problems. The first problem is
that of inefficient memory usage: if a new page is created to accommodate the "spill-over"
generated by the rescheduler, the remainder of the new page remains unused. On the other
hand, if the code in a page shrinks due to DR, that leads to a hole in the memory. The
second problem arises due to control restructuring: when a new page is inserted, it must be
placed at the end of the code address space. The last MultiOp in the original page must
then be modified to jump to the new page, and the last MultiOp on the new page must be
modified to jump to the page which lies after the original page. Now, if a code positioning
optimization was performed on the old code in order to optimize for I-Cache accesses, this
process could violate the ordering, potentially leading to performance degradation. Perhaps
the most serious problem is that the code movement within the old page or into the new
page could alter branch target addresses (merge points) in the old code, leading to incorrect
code. It may not even be possible to repair this code, because the code which jumps to the
altered branch targets may not be visible to the rescheduler at rescheduling time.
One solution to avoid the problem of code-size change is to use a specialized ISA encoding
which "hides" the NOPs and the empty MultiOps in the code. Since all code-size changes in
DR are due to the addition/deletion of NOPs, such an encoding circumvents the problem.
An encoding called the list encoding which has this ability is discussed in detail in Section 4,
along with the rescheduling algorithms for cyclic and acyclic code.
4 Rescheduling Size-Invariance
List encoding does not explicitly represent the NOPs and the empty MultiOps in the object
code, and hence it is a zero-NOP encoding. This property of List encoding is used to support
DR. This section presents a formal definition of list encoding, followed by an introduction to
the concept of Rescheduling Size-Invariance (RSI). It will also be shown that any list-encoded
schedule of code is rescheduling size-invariant.
4.1 List encoding and RSI
Definition 1 (VLIW Op) An Operation (Op) is defined by a 6-tuple {H, p_n, s_pred, FUtype,
opcode, operands}, where H ∈ {0, 1} is a 1-bit field called the header bit, p_n is an n-bit
field called the pause, s_pred is a stage predicate (discussed further in
Section 4.3), FUtype uniquely identifies the FU instance where the Op must execute, opcode
uniquely identifies the task of the Op, and operands is the set of valid operands defined for
the Op. All Ops have a constant width.
Definition 2 (Header Op) An Op, O, is a Header Op iff the value of the header-bit field
in O is 1.
Definition 3 (VLIW MultiOp) A VLIW MultiOp, M, is defined as an unordered sequence
of Ops {O_1, O_2, ..., O_k}, where k is the number of hardware
resources to which the Ops are issued concurrently, and O_1 is a Header Op.
Definition 4 (VLIW Schedule) A VLIW schedule, S, is defined as an ordered sequence
of MultiOps {M_1, M_2, ..., M_i}.
A discussion of the list encoding is now in order. All Ops in this scheme of encoding are
fixed-width. In a given VLIW schedule, a new MultiOp begins at a Header Op and ends
exactly at the Op before the next Header Op; the MultiOp fetch hardware uses this rule to
identify and fetch the next MultiOp. The value of the p n field in an Op is referred to as
the pause, because it is used by the fetch hardware to stop MultiOp fetch for the number
of machine cycles indicated by p n . This is a mechanism devised to eliminate the explicit
encoding of empty MultiOps in the schedule. The FUtype field indicates the functional unit
where the Op will execute. The FUtype field allows the elimination of NOPs inserted by
the compiler in an arbitrary MultiOp. Prior to the execution of a MultiOp, its member Ops
are routed to their appropriate functional units based on the value of their FUtype field.
This scheme of encoding the components of a VLIW schedule is termed List encoding.
Since the size of every Op is the same, the size of a given list encoded schedule, S, can be
expressed in terms of the number of Ops in it:

    sizeof(S) = Σ_i Σ_j O_j ,    (1)

where i is the number of MultiOps in S, O_j is an Op, and j is the number of Ops in a given
MultiOp.
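A small C++ sketch of this encoding and of the header-bit-driven MultiOp fetch rule is given
below; the field widths and the decode loop are illustrative assumptions, not the actual
hardware or ISA definition.

#include <cstddef>
#include <cstdint>
#include <vector>

// One fixed-width list-encoded Op (field widths here are illustrative only).
struct ListOp {
    unsigned header : 1;   // H: 1 marks the first Op of a MultiOp
    unsigned pause  : 3;   // p_n: number of empty MultiOps implied after this MultiOp
    unsigned futype : 4;   // functional-unit instance used to route the Op
    unsigned opcode : 8;
    // stage predicate and operands omitted for brevity
};

// Group a flat list-encoded schedule back into MultiOps, using the rule that a
// MultiOp starts at a Header Op and ends just before the next Header Op.
std::vector<std::vector<ListOp>> fetch_multiops(const std::vector<ListOp>& code) {
    std::vector<std::vector<ListOp>> multiops;
    for (const ListOp& op : code) {
        if (op.header || multiops.empty())
            multiops.emplace_back();        // a Header Op opens a new MultiOp
        multiops.back().push_back(op);      // later routed by op.futype
    }
    return multiops;
}

// Size of a list-encoded schedule: simply the number of fixed-width Ops in it.
std::size_t sizeof_schedule(const std::vector<ListOp>& code) { return code.size(); }

Because every Op has the same width, the size of a list encoded schedule reduces to counting
its Ops, as in Equation 1.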
Definition 5 (VLIW Generation) A VLIW generation G is defined by the set {R, L},
where R is the set of hardware resources in G, and L is the set of execution
latencies of all the Ops in the operation set of G. R itself is a set consisting of pairs (r, n_r),
where r is a resource type and n_r is the number of instances of r.
This definition of a VLIW generation does not model complex resource usage patterns
for each Op, as used in [20], [21], and [22]. Instead, each member of the set of machine
resources R presents a higher-level abstraction of the "functional units" found in modern
processors. Under this abstraction, the low-level machine resources such as the register-file
ports and operand/result busses required for the execution of an Op on each functional unit
are bundled with the resource itself. All the resources indicated in this manner are assumed
to be busy through the period of time equal to the latency of the executing Op, indicated
by the appropriate member of set L.
Definition 6 (Rescheduling Size-Invariance (RSI)) A VLIW schedule S is said to satisfy
the RSI property iff sizeof(S_Gn) = sizeof(S_Gm), where S_Gn and S_Gm are the versions of the
original schedule S prepared for execution on arbitrary machine generations G_n and G_m, re-
spectively. Further, schedule S is said to be rescheduling size-invariant iff it satisfies the RSI
property.
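Since list encoded Ops are fixed-width, checking the RSI property for two versions of a
schedule amounts to comparing their Op counts; a trivial, generic C++ check (under the same
simplified representation as the earlier sketch) is:

// RSI check: two versions of a schedule are size-invariant iff they contain
// the same number of fixed-width Ops (equivalently, the same number of bytes).
template <class Schedule>
bool satisfies_rsi(const Schedule& s_gn, const Schedule& s_gm) {
    return s_gn.size() == s_gm.size();
}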
The proof that list encoding is RSI will be presented in two parts. First, it will be shown that
acyclic code in the program is RSI when list encoded, followed by the proof that the cyclic
code is RSI when list encoded. Since all code is assumed to be either acyclic or cyclic, the
result that any list-encoded schedule is RSI will follow. In the remainder of this section,
algorithms to reschedule each of these types of code are presented, followed by the proofs
themselves.
4.2 Rescheduling Size-Invariant Acyclic Code
The algorithm to reschedule acyclic code from VLIW generation G old to generation G new
is shown in Algorithm Reschedule Acyclic Code. It is assumed that both the old and new
schedules are LTE schedules (see Section 2), and that both have the same register file architecture
and compiler register usage convention.
Algorithm Reschedule_Acyclic_Code
input
    S_old, the old schedule (assumed to be no more than n_old cycles long);
    G_old = {R_old, L_old}, the machine model description for the old VLIW;
    G_new = {R_new, L_new}, the machine model description for the new VLIW;
output
    S_new, the new schedule;
var
    n_old, the length of S_old;
    n_new, the length of S_new;
    Scoreboard[number of registers], to flag the registers "in-use" in S_old;
    RU[n_new][Σ_r n_r], the resource usage matrix, where:
        r represents all the resource types in G_new, and
        n_r is the number of instances of each resource type r in G_new;
    UseInfo[n_new][number of registers], to mark the register usage in S_new;
    T_δ, the earliest cycle in which an Op can be scheduled while
        satisfying the data dependence constraints;
    T_rc+δ, the earliest cycle in which an Op can be scheduled while satisfying
        the data dependence and resource constraints;
functions
    RU_lookup(O(T_δ)) returns the earliest cycle, later than cycle T_δ, in which Op O
        can be scheduled after satisfying the data dependence and resource constraints;
    RU_update(σ, O) marks the resources used by Op O in cycle σ of S_new;
    dest_register(O) returns the destination register(s) of Op O;
    source_register(O) returns a list of all source registers of Op O;
    latest_use_time(φ) returns the latest cycle in S_new in which register φ was used;
    most_recent_writer(ρ) returns the id of the Op which modified register ρ latest in S_old;
begin
    for each MultiOp M_old[c] ∈ S_old, 0 ≤ c ≤ n_old, do
    begin
        - resource constraint check:
        for each Op O_w ∈ S_old that completes in cycle c do
        begin
            O_w(T_rc+δ) ← RU_lookup(O_w(T_δ));
            M_new[O_w(T_rc+δ)] ← M_new[O_w(T_rc+δ)] | O_w;
            RU_update(O_w(T_rc+δ), O_w);
        end
        - update the scoreboard:
        for each Op O_i ∈ S_old which is unfinished in cycle c do
        begin
            Scoreboard[dest_register(O_i)] ← reserved;
        end
        - do the data dependence checks:
        for each Op O_r ∈ M_old[c] do
        begin
            O_r(T_δ) ← 0;
            - anti-dependence:
            for each φ ∈ dest_register(O_r) do
                O_r(T_δ) ← MAX(O_r(T_δ), latest_use_time(φ));
            - pure dependence:
            for each φ ∈ source_register(O_r) do
                O_r(T_δ) ← MAX(O_r(T_δ), completion cycle in S_new of most_recent_writer(φ));
            - output dependence:
            for each φ ∈ dest_register(O_r) do
                O_r(T_δ) ← MAX(O_r(T_δ), completion cycle in S_new of most_recent_writer(φ));
        end
    end
end
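As a concrete (and simplified) rendering of the data-dependence part of this algorithm, the
following C++ fragment computes the earliest cycle at which an Op may be placed in S_new.
The per-register tables stand in for the Scoreboard and UseInfo structures, and the resource
check performed by RU_lookup is omitted.

#include <algorithm>
#include <map>
#include <vector>

struct AOp {
    std::vector<int> srcs;    // source registers
    std::vector<int> dests;   // destination registers
};

// Per-register bookkeeping gathered while S_new is being emitted.
struct RegState {
    std::map<int, int> latest_use;        // register -> last cycle it was read in S_new
    std::map<int, int> writer_completes;  // register -> cycle in S_new at which its
                                          //   most recent writer (from S_old) completes
};

// Earliest cycle satisfying anti-, pure (flow), and output dependences; the
// resource check (RU_lookup) would then push the Op further if its FU is busy.
int earliest_dep_cycle(const AOp& op, const RegState& rs) {
    int t = 0;
    for (int d : op.dests) {                            // anti- and output dependences
        auto u = rs.latest_use.find(d);
        if (u != rs.latest_use.end()) t = std::max(t, u->second);
        auto w = rs.writer_completes.find(d);
        if (w != rs.writer_completes.end()) t = std::max(t, w->second);
    }
    for (int s : op.srcs) {                             // pure (flow) dependences
        auto w = rs.writer_completes.find(s);
        if (w != rs.writer_completes.end()) t = std::max(t, w->second);
    }
    return t;
}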
The RSI property for list-encoded acyclic code schedules will now be proved.
Theorem 1 An arbitrary list encoded schedule of acyclic code is RSI.
Proof: The proof will be presented using induction over the number of Ops in an
arbitrary list encoded schedule. Let L i be an arbitrary, ordered sequence of i Ops (i - 1)
that occur in a piece of acyclic code. Let F i denote a directed dependence graph for the Ops
in L i , i.e. each Op in L i is a node in F i , and the data- and control-dependences between the
Ops are indicated by directed arcs in F_i. Let S_Gn be the list encoded schedule for L_i generated
using the dependence graph F_i and designed to execute on a certain VLIW generation G_n.
Also, let Gm denote another VLIW generation which is the target of rescheduling under DR.
Induction Basis. L_1 is an Op sequence of length 1. In this case, sizeof(S_Gn) = 1, and
the dependence graph has a single node. It is trivial in this case that S_Gn is RSI, because
after rescheduling to generation G_m, the number of Ops in the schedule will remain 1, or,
sizeof(S_Gm) = 1.
Induction Step. L_p is an Op sequence of length p, where p > 1. Assume that S_Gn is RSI.
In other words,

    sizeof(S_Gn) = sizeof(S_Gm).    (2)

Now consider the Op sequence L_p+1, which is of length p + 1, such that L_p+1 was obtained
from L p by adding one Op from the original program fragment. Let this additional Op be
denoted by z. Op z can be thought of as borrowed from the original program, such that
the correctness of the computation is not compromised. L p is an ordered sequence of Ops,
and Op z must then be either a prefix of L_p or a suffix to it. Also, let T_Gn denote the list
encoded schedule for sequence L_p+1, which means sizeof(T_Gn) = p + 1. In order to prove
the current theorem, it must now be proved that T_Gn is RSI if S_Gn is RSI.
The addition of Op z to L_p affects the structure of the dependence graph F_p in one of two
ways: (1) Op z adds one or more data dependence arcs to F_p, or (2) Op z does
not add any data dependence arcs to F_p.
• Op z adds dependence(s):
This case corresponds to the fact that Op z is control- and/or data-dependent on one
or more of the Ops in L_p, or vice versa. There are two sub-cases in which a
schedule that includes Op z is constructed: (1) construction of T_Gn using the
dependence graph F_p+1, and (2) rescheduling of T_Gn to T_Gm. In both these cases, all the
dependences introduced by Op z must be honored. Further, any resource constraints
must be satisfied as well. This is done using the well-known list scheduling algorithm
(in the first sub-case), and the Reschedule Acyclic Code algorithm (in the second sub-
case). Appropriate NOPs and empty MultiOps will be inserted in the schedule by
both these algorithms. However, when the schedules TGn and TGm are list encoded,
the empty MultiOps will be made implicit using the pause field in the Header Op of the
previous MultiOp, and the NOPs in a MultiOp will be made implicit via the FUtype
field in the Ops. Thus, the only source of size increase in schedules TGn and TGm is
due to the newly added Op z.
• Op z does not add any dependences:
In this case, only the resource constraints, if any, would warrant the insertion of empty
MultiOps. By an argument similar to that in the previous case, it is trivial to see that
the only source of size increase in schedules TGn and TGm is the newly added Op z.
Thus, in both cases, sizeof(T_Gn) = sizeof(S_Gn) + 1, from which and from Equation 2 it
follows that

    sizeof(T_Gn) = sizeof(S_Gm) + 1.    (3)

Similarly, for both cases,

    sizeof(T_Gm) = sizeof(S_Gm) + 1.    (4)

From Equations 3 and 4, and by induction, it is proved that an arbitrary list encoded schedule
of acyclic code is RSI.
An example of the transition of the code previously shown in Figure 5, by application of
algorithm Reschedule Acyclic Code is shown in Figure 6 (assuming that the original schedule
belonged to the acyclic category). It can be observed that the size of the original code (on the
left) is the same as that of the rescheduled code (on the right). The NOPs and the empty
MultiOps have been eliminated in the list encoded schedules; the rescheduling algorithm
merely re-arranged the Ops and adjusted the values of the H (header) and p_n (pause) fields
within the Ops to ensure the correctness of execution on G_new.
4.3 Rescheduling Size-Invariant Cyclic Code
Most programs spend a great deal of time executing the inner loops, and hence the study of
scheduling strategies for inner loops has attracted great attention in literature [23], [24], [25],
[8], [26], [27], [28], [29]. Inner loops typically have small bodies (relatively fewer Ops) which
Figure 6: Example: a list encoded schedule of acyclic code is RSI.
makes it hard to find ILP within these loop-bodies. Software pipelining is a well-understood
scheduling strategy used to expose the ILP across multiple iterations of the loop [30], [25].
There are two ways to perform software pipelining. The first one uses loop unrolling , in
which the loop body is unrolled a fixed number of times before scheduling. Loop bodies
scheduled via unrolling can be subjected to rescheduling via the Reschedule Acyclic Code
algorithm described in Section 4.2. The code expansion introduced due to unrolling, however,
is often unacceptable, and hence the second technique, Modulo Scheduling [30], is employed.
Modulo-scheduled loops have very little code expansion (limited to the prologue and epilogue
of the loop), which makes the technique very attractive. In this paper, only modulo-scheduled
loops are examined for the RSI property; unrolled-and-scheduled loops are covered by the
acyclic RSI techniques presented previously. First, some discussion of the structure of
modulo-scheduled loops is presented, followed by an algorithm to reschedule modulo scheduled code. The
section ends with a formal treatment to show the list-encoded modulo-scheduled cyclic code
is RSI. Concepts from Rau [29] are used as a vehicle for the discussion in this section.
Assumptions about the hardware support for execution of modulo scheduled loops are
as follows. In some loops, a datum generated in one iteration of the loop is consumed
in one of the successive iterations (an inter-iteration data dependence). Also, if there is
any conditional code in the loop body, multiple, data-dependent paths of execution exist.
Modulo-scheduling such loops is non-trivial 2 . This paper assumes three forms of hardware
support to circumvent these problems. First, register renaming via rotating registers [29] in
order to handle the inter-iteration data dependencies in loops is assumed. Second, to convert
the control dependencies within a loop body to data dependencies, support for predicated
2 See [31] and [32] for some of the work in this area.
execution [14] is assumed. Third, support for sentinel scheduling [33] to ensure correct
handling of exceptions in speculative execution is assumed. Also, the pre-conditioning [29]
of counted-DO loops is presumed to have been performed by the modulo scheduler when
necessary.
A modulo scheduled loop, Ω_Gn, consists of three parts: a prologue, a kernel, and an
epilogue, where G_n is the machine generation for which the loop was scheduled.
The prologue initiates a new iteration every II cycles, where II is known as the initiation
interval. Each slice of II cycles during the execution of the loop is called a stage. In the
last stage of the first iteration, execution of the kernel begins. More iterations are in various
stages of their execution at this point in time. Once inside the kernel, the loop executes
in a steady state (so called because the kernel code branches back to itself). In the kernel,
multiple iterations are simultaneously in progress, each in a different stage of execution. A
single iteration completes at the end of each stage. The branch Ops used to support the
modulo scheduling of loops have special semantics, by which the branch updates the loop
counts and enables/disables the execution of further iterations. When the loop condition
becomes false, the kernel falls through to the epilogue, which allows for the completion of the
stages of the unfinished iterations. Figure 7 shows an example modulo schedule for a loop
and identifies the prologue, kernel, and the epilogue. Each row in the schedule describes a
cycle of execution. Each box represents a set of Ops that execute in a same resource (e.g.
functional unit) in one stage. The height of the box is the II of the loop. All stages belonging
to a given iteration are marked with a unique letter ∈ {A, B, C, D, E, F}.
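As a concrete illustration (the numbers are ours, not taken from the figure): if a single
iteration of a loop is scheduled in 6 cycles and II = 2, each iteration spans 6/2 = 3 stages;
a new iteration is initiated every 2 cycles, so in the steady state 3 iterations are
simultaneously in flight, and one iteration completes at the end of every stage.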
Figure 7 also shows the loop in a different form: the kernel-only (KO) loop [26], [29].
In a kernel-only loop, the prologue and the epilogue of the loop "collapse" into the kernel,
without changing the semantics of execution of the loop. This is achieved by predicating the
execution of each distinct stage in a modulo scheduled loop on a distinct predicate called a
stage predicate. A new stage predicate is asserted by the loop-back branch. Execution of
the stage predicated on the newly asserted predicate is enabled in the future executions of
the kernel. When the loop execution begins, stages are incrementally enabled, accounting
for the loop prologue. When all the stages are enabled, the loop kernel is in execution and
the loop is in the steady state. When the loop condition becomes false, the predicates for
the stages are reset, thus disabling the stages one by one. This accounts for the execution
of the epilogue of the loop. A modulo scheduled loop can be represented in the KO form,
if adequate hardware (predicated execution) and software (a modulo scheduler to predicate
the stages of the loop) support is assumed. Further discussion of KO loop schedules can be
found in [29]. All modulo-scheduled loops can be represented in the KO form. The KO form
thus has the potential to encode modulo schedules for all classes of loops, a property which
is useful in the study of dynamic rescheduling of loops, as will be shown shortly.
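The way stage predicates collapse the prologue and epilogue into the kernel can be pictured
with the following C++ sketch of a kernel-only execution loop; the shift-register model of the
predicates is a deliberate simplification of the rotating-predicate hardware assumed from [29].

#include <algorithm>
#include <cstdio>
#include <vector>

// Kernel-only execution with stage predicates: stage s executes only when
// pred[s] is true. The loop-back branch shifts the predicates every kernel
// iteration, enabling stages one by one (prologue), keeping them all enabled
// in the steady state, and draining them one by one at the end (epilogue).
void run_kernel_only(int num_stages, int trip_count) {
    std::vector<bool> pred(num_stages, false);
    pred[0] = trip_count > 0;            // the first iteration enters stage 0
    int not_yet_started = trip_count - 1;
    while (std::any_of(pred.begin(), pred.end(), [](bool p) { return p; })) {
        for (int s = 0; s < num_stages; ++s)
            if (pred[s])                 // stage s of some in-flight iteration
                std::printf("stage %d executes\n", s);
        // Loop-back branch: shift the stage predicates; a new predicate is
        // asserted while iterations remain, otherwise a 0 is shifted in.
        for (int s = num_stages - 1; s > 0; --s) pred[s] = pred[s - 1];
        pred[0] = (not_yet_started-- > 0);
    }
}

Running the sketch with num_stages = 3 and trip_count = 3 executes the kernel 3 + 3 - 1 = 5
times: stages are enabled one by one (prologue), all three are active in the steady state, and
they drain one by one once no new iteration is started (epilogue).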
The size of a modulo scheduled loop is larger than the original size of the loop, if the
modulo schedule has an explicit prologue, a kernel, and an epilogue. In contrast, a KO loop
schedule has exactly one copy of each stage in the original loop body, and hence has the same
size as the original loop body, provided the original loop was completely if-converted 3 . This
property of the KO loops is useful in performing dynamic rescheduling of modulo scheduled
3 For any pre-conditioned counted DO-loops, the size is the same as the size of the loop body after pre-conditioning.
Figure 7: A modulo scheduled loop: on the left, a modulo scheduled loop (with the prologue,
kernel, and epilogue marked) is shown. The same schedule is shown on the right, but in the
"collapsed" kernel-only (KO) form. Stage predicates (p1, ..., p6) are used to turn the execution
of the Ops ON or OFF in a given stage. The table shows the values that the stage predicates
would take for this loop.
loops. Algorithm Reschedule KO Loop details the steps. The input to the algorithm is the
modulo scheduled KO loop, and the machine models for the old and the new generations
G old and G new . Briefly, the algorithm works as follows: identification of the predicates that
enable individual stages is performed first. An order is imposed on them, which then allows
for the derivation of the order of execution of stages in a single iteration. The ordering on
the predicates may be implicit in the predicate-id used for a given stage (increasing order
of predicate-ids). Alternatively, the order information could be stored in the object file and
made available at the time DR is invoked, without substantial overhead. Once the order of
execution of the stages of the loop is obtained, the reconstruction of the loop in its original,
unscheduled form is performed. At this time, the modulo scheduler is invoked on it to arrive
at the new KO schedule for the new generation.
Algorithm Reschedule_KO_Loop
input
    Ω_old, the KO (kernel-only) modulo schedule, such that:
        n_old = the number of stages in Ω_old;
    G_old = {R_old, L_old}, the machine model description for the old VLIW;
    G_new = {R_new, L_new}, the machine model description for the new VLIW;
output
    Ω_new, the KO modulo schedule for G_new;
var
    B[n_old], the table of n_old buckets, each holding the Ops from a unique stage, such that
        the relative ordering of Ops in the bucket is retained;
functions
    FindStagePred(O) returns the stage predicate on which Op O is enabled or disabled;
    AddToBucket(B[p], O) puts the Op O into the bucket B[p];
    OrderBuckets(B, func) sorts the table of buckets B according to
        the ordering function func;
    StagePredOrdering() describes the statically imposed order on the stage predicates;
begin
    - unscramble the old modulo schedule:
    for all MultiOps M ∈ Ω_old do
        for each Op O ∈ M do
        begin
            p ← FindStagePred(O);
            AddToBucket(B[p], O);
        end
    - order the buckets:
    OrderBuckets(B, StagePredOrdering());
    - perform modulo scheduling:
    perform modulo scheduling on the sorted table of buckets B, using the algorithm
        described by Rau in [30], to generate the KO schedule Ω_new;
end
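A compact C++ rendering of the unscrambling step of this algorithm (bucketing the kernel's
Ops by stage predicate and concatenating the buckets in predicate order) is sketched below.
The Op representation is an assumption, the increasing-predicate-id ordering is the one
mentioned above, and the final call into a modulo scheduler for G_new is omitted.

#include <map>
#include <vector>

struct KOp {
    int stage_pred;   // id of the stage predicate that guards this Op
    // opcode, operands, FUtype, ... omitted
};

// Rebuild one loop iteration from a kernel-only schedule: bucket the Ops by
// their stage predicate, then concatenate the buckets in increasing predicate
// order. The result is the unscheduled loop body that would be handed to the
// modulo scheduler targeting G_new.
std::vector<KOp> unscramble_kernel(const std::vector<KOp>& kernel) {
    std::map<int, std::vector<KOp>> buckets;    // predicate id -> Ops; relative
    for (const KOp& op : kernel)                // order within a bucket preserved
        buckets[op.stage_pred].push_back(op);
    std::vector<KOp> loop_body;
    for (const auto& bucket : buckets)          // std::map iterates in key order
        loop_body.insert(loop_body.end(), bucket.second.begin(), bucket.second.end());
    return loop_body;
}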
The RSI nature of a list encoded modulo scheduled KO loop will now be proved.
Theorem 2 An arbitrary List encoded Kernel-Only modulo schedule of a loop is RSI.
Proof: Let L i be an arbitrary, ordered sequence of i Ops (i - 1) that represents the
loop body. Let F i denote a directed dependence graph for the Ops in L i , i.e. each Op in
L i is a node in F i , and the data- and control-dependences between the Ops are indicated by
directed arcs in F i . Note that the inter-iteration data dependences are also indicated in F i .
Let Ω_Gn denote a list encoded KO modulo schedule for generation G_n. Also, let G_m denote
the VLIW generation for which rescheduling is performed.
Induction Basis. L_1 is a loop body of length 1. In this case, sizeof(Ω_Gn) = 1, and the
dependence graph has a single node. It is trivial in this case that Ω_Gn is RSI, because after
rescheduling to generation G_m, the number of Ops in the schedule will remain 1, or,
sizeof(Ω_Gm) = 1. (A loop whose body consists of a single Op is the degenerate case.)
Induction Step. L_p is a loop body of length p, where p > 1. Assume that Ω_Gn is RSI. In
other words,

    sizeof(Ω_Gn) = sizeof(Ω_Gm).    (6)
Now consider another loop body L_p+1, which is of length p + 1. Let the (p + 1)-st Op be
denoted by z. Also, let Θ_Gn denote the list encoded KO modulo schedule for L_p+1, which
means sizeof(Θ_Gn) = p + 1. In order to prove the theorem at hand, it must now be proved
that Θ_Gn is RSI if Ω_Gn is RSI.
Due to Op z, the graph F_p+1 can differ from the graph F_p in two ways: (1) Op z is data
dependent on one or more Ops in L_p+1, or vice versa, or (2) Op z is independent of all the
other Ops in L_p+1. In both of these cases,
the data dependences and the resource constraints are honored by the modulo scheduling
algorithm via appropriate use of NOPs and/or empty MultiOps within the schedule. When
this schedule is list encoded, the NOPs and the empty MultiOps are made implicit via the
use of the pause and FUtype fields within the Ops. Hence, the only source of size increase in
Θ_Gn and Θ_Gm is the newly added Op z. In other words,

    sizeof(Θ_Gn) = sizeof(Ω_Gn) + 1.

From this result and from Equation 6, it follows that:

    sizeof(Θ_Gn) = sizeof(Ω_Gm) + 1.    (9)

Similarly, for both cases,

    sizeof(Θ_Gm) = sizeof(Ω_Gm) + 1.    (10)

From Equations 9 and 10, and by induction, it is proved that an arbitrary list encoded KO
modulo schedule is RSI.
Corollary 1 A List encoded schedule is RSI.
Proof: All program code can be divided into two categories, acyclic code and cyclic
code, as defined in Section 2. Hence, it follows from Theorem 1 and Theorem 2 that
a list encoded schedule is RSI.
Conclusions
This paper has presented the highlights of a solution for the cross-generation compatibility
problem in VLIW architectures. The solution, called Dynamic Rescheduling, performs
rescheduling of program code pages at first-time page faults. Assistance from the compiler,
the ISA, and the OS is required for dynamic rescheduling. During the process of reschedul-
ing, NOPs must be added to/deleted from the page to ensure the correctness of the schedule.
Such additions/deletions could lead to changes in the page size. The code-size changes are
hard to handle at run-time and would require extra support in hardware (TLB extensions)
and software (VM management extensions).
An ISA encoding called List Encoding, which encodes the NOPs in the program implicitly,
was presented. The list encoded ISA has fixed-width Ops. The Header Op (first Op) in a
MultiOp indicates the number of empty MultiOps (if any) following it. This information
eliminates the need to explicitly encode the empty MultiOps in the schedule. The FUtype
field encoded in each Op eliminates the need to explicitly encode the NOPs within a
MultiOp, because the decode hardware can use this information to expand and route the Op
to the appropriate execution resource. A property of the list encoding called Rescheduling-Size
Invariance (RSI) was proved for acyclic code and for cyclic (kernel-only, modulo-scheduled)
code. A schedule of code is RSI iff the code size remains constant across the dynamic
rescheduling transformation.
The instruction fetch hardware and I-Cache organizations required to support
the list encoding have previously been studied [34]. The work presented in this paper can
be extended with a study of other encoding techniques which may not be rescheduling-size
invariant (non-RSI encodings). Also, a study of rescheduling algorithms which operate on
non-RSI encodings can be conducted. These topics are currently being investigated by the
authors.
--R
"Dynamic rescheduling: A technique for object code compatibility in VLIW architectures,"
"Dynamically scheduled VLIW processors,"
"Hardware support for large atomic units in dynamically scheduled machines,"
"An architectural framework for supporting heterege- neous instruction-set architectures,"
"A fill-unit approach to multiple instruction issue,"
"An architectural framework for migration from CISC to higher performance platforms,"
"The Cydra 5 departmental supercomputer,"
"HPL PlayDoh architecture specification: version 1.0,"
"An approach to scientific array processing: the architectural design of the AP-120B/FPS-164 family,"
"The Superblock: An effective structure for VLIW and superscalar compilation,"
"Effective compiler support for predicated execution using the Hyperblock,"
"Conversion of control dependence to data dependence,"
"On predicated execution,"
"Optimization of VLIW compatibility systems employing dynamic rescheduling."
"A Persistent Rescheduled-Page Cache for low-overhead object-code compatibility in VLIW architectures,"
"Tradeoffs in supporting two page sizes,"
"Virtual memory support for multiple page sizes,"
"Surpassing the TLB performance of superpages with less operating system support,"
"A reduced multipipeline machine description that preserves scheduling constraints,"
"Optimization of machine descriptions for efficient use,"
"Efficient instruction scheduling using finite state automata,"
"Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing,"
"Efficient code generation for horizontal architec- tures: Compiler techniques and architectural support,"
"Software pipelining: An effective scheduling technique for VLIW machines,"
"Overlapped loop support in the Cydra 5,"
"Realistic scheduling: compaction for pipelined architec- tures,"
"A new global software pipelining algorithm,"
"Code generation schemas for modulo scheduled DO-loop and WHILE-loops,"
"Iterative modulo scheduling: An algorithm for software pipelining loops,"
Modulo Scheduling with Isomorphic Control Transformations.
"Software pipelining of loops with conditional branches,"
"Sentinel scheduling: A model for compiler-controlled speculative execution,"
"Instruction fetch mechanisms for VLIW architectures with compressed encodings,"
--TR
--CTR
Masahiro Sowa , Ben A. Abderazek , Tsutomu Yoshinaga, Parallel Queue Processor Architecture Based on Produced Order Computation Model, The Journal of Supercomputing, v.32 n.3, p.217-229, June 2005
Jun Yan , Wei Zhang, Hybrid multi-core architecture for boosting single-threaded performance, ACM SIGARCH Computer Architecture News, v.35 n.1, p.141-148, March 2007 | list encoding;instruction cache;VLIW;microarchitecture;processor architecture;instruction-set encoding |
354867 | Computing Orthogonal Drawings with the Minimum Number of Bends. | AbstractWe describe a branch-and-bound algorithm for computing an orthogonal grid drawing with the minimum number of bends of a biconnected planar graph. Such an algorithm is based on an efficient enumeration schema of the embeddings of a planar graph and on several new methods for computing lower bounds of the number of bends. We experiment with such algorithm on a large test suite and compare the results with the state-of-the-art. The experiments show the feasibility of the approach and also its limitations. Further, the experiments show how minimizing the number of bends has positive effects on other quality measures of the effectiveness of the drawing. We also present a new method for dealing with vertices of degree larger than four. | Introduction
Various graphic standards have been proposed to draw graphs, each one devoted
to a specific class of applications. An extensive literature on the subject can be
found in [4, 23, 2]. In particular, an orthogonal drawing maps each edge into a
chain of horizontal and vertical segments and an orthogonal grid drawing is an
orthogonal drawing such that vertices and bends along the edges have integer
coordinates. Orthogonal grid drawings are widely used for graph visualization
in many applications including database systems (entity-relationship diagrams),
software engineering (data-flow diagrams), and circuit design (circuit schemat-
ics).
Research supported in part by the ESPRIT LTR Project no. 20244 - ALCOM-IT.
y IASI, CNR, viale Manzoni 30, 00185 Roma Italy.
z Dipartimento di Informatica e Automazione, Universit'a di Roma Tre, via della Vasca
Navale 84, 00146 Roma, Italy.
x Dipartimento di Informatica e Automazione, Universit'a di Roma Tre, via della Vasca
Navale 84, 00146 Roma, Italy.
Many algorithms for constructing orthogonal grid drawings have been proposed
in the literature and implemented into industrial tools. They can be
roughly classified according to two main approaches: the topology-shape-metrics
approach determines the final drawing through an intermediate step in which
a planar embedding of the graph is constructed (see, e.g., [21, 19, 22, 20]); the
draw-and-adjust approach reaches the final drawing by working directly on its
geometry (see, e.g., [1, 18]). Since a planar graph has an orthogonal grid drawing
iff its vertices have degree at most 4, both the approaches assume that vertices
have degree at most 4. Such a limitation is usually removed by "expanding"
higher degree vertices into two or more vertices. Examples of expansion techniques
can be found in [20].
In the topology-shape-metrics approach, the drawing is incrementally specified
in three phases. The first phase, planarization, determines a planar embedding
of the graph, by possibly adding ficticious vertices that represent crossings.
The second phase, orthogonalization, receives as input a planar embedding and
computes an orthogonal drawing. The third phase, compaction, produces the
final orthogonal grid drawing trying to minimize the area.
The orthogonalization step is crucial for the effectiveness of the drawing and
has been extensively investigated. A very elegant O(n 2 log n) time algorithm
for constructing an orthogonal drawing with the minimum number of bends of
an n-vertex embedded planar graph has been presented by Tamassia in [19]. It
is based on a minimum cost network flow problem that considers bends along
edges as units of flow.
However, the algorithm in [19] minimizes the number of bends only within the
given planar embedding. Observe that a planar graph can have an exponential
number of planar embeddings and it has been shown [6] that the choice of the
embedding can deeply affect the number of bends of the drawing. Namely,
there exist graphs that, for a certain embedding have a linear number of bends,
and for another embedding have only a constant number. Unfortunately, the
problem of minimizing the number of bends in a variable embedding setting
is NP-complete [10, 11]. Optimal polynomial-time algorithms for subclasses of
graphs are shown in [6].
Because of the tight interaction between the graph drawing area and applica-
tions, the attention to experimental work on graph drawing is rapidly increasing.
In [5] it is presented an experimental study comparing topology-shape-metrics
and draw-and-adjust algorithms for orthogonal grid drawings. The test graphs
were generated from a core set of 112 graphs used in "real-life" software engineering
and database applications with number of vertices ranging from 10 to 100.
The experiments provide a detailed quantitative evaluation of the performance
of seven algorithms, and show that they exhibit trade-offs between "aesthetic"
properties (e.g., crossings, bends, edge length) and running time. The algorithm
GIOTTO (topology-shape-metrics with the Tamassia's algorithm in the orthogonalization
step) performs better than the others at the expenses of a worst time
performance.
Other examples of experimental work in graph drawing follow. The performance
of four planar straight-line drawing algorithms is compared in [14].
Himsolt [12] presents a comparative study of twelve graph drawings algorithms;
the algorithms selected are based on various approaches (e.g., force-directed, lay-
ering, and planarization) and use a variety of graphic standards (e.g., orthogonal,
straight-line, polyline). The experiments are conducted with the graph drawing
system GraphEd [12]. The test suite consists of about 100 graphs. Brandenburg
and Rohrer [3] compare five "force-directed" methods for constructing straight-line
drawings of general undirected graphs. Juenger and Mutzel [15] investigate
crossing minization strategies for straight-line drawings of 2-layer graphs, and
compare the performance of popular heuristics for this problem.
In this paper, we present the following results. Let G be a biconnected planar
graph such that each vertex has degree at most 4 (4-planar graph).
ffl We describe a branch-and-bound algorithm called BB-ORTH that computes
an orthogonal grid drawing of G with the minimum number of bends in the
variable embedding setting. The algorithm is based on: several new methods
for computing lower bounds on the number of bends of a planar graph
(Section 3). Such methods give new insights on the relationships between
the structure of the triconnected components and the number of bends;
a new enumeration schema that allows to enumerate without repetitions
all the planar embeddings of G (Section 4). Such enumeration schema
exploits the capability of SPQR-trees [7, 8] in implicitely representing the
embeddings of a planar graph.
ffl We present a system that implements BB-ORTH. Such a system is provided
with a graphical interface that animates all the phases of the algorithm
and displays partial results, exploiting existing graph drawing algorithms
to represent all the graphs involved in the computation. The interaction
with the system allows to stop the computation once a sufficiently good
orthogonal drawing is displayed. (Section 5).
ffl We test BB-ORTH against a large test suite of randomly generated graphs
with up to 60 vertices and compare the experimental results with the best
state-of-the-art results (algoritm GIOTTO) (Section 6). Our experiments
show: an improvement in the number of bends of 20-30%; an improvement
of several other quality measures of the drawing that are affected from the
number of bends. For example the length of the longest edge of the drawings
obtained with BB-ORTH is about 50% smaller than the length of the
longest edge of the drawings obtained with GIOTTO; a sensible increasing of
the CPU-time that however is perfectly affordable within the typical size
of graphs of real-life applications [5].
ffl Also, BB-ORTH can be easily applied on all biconnected components of
graphs. This yields a powerful heuristic for reducing the number of bends
in connected graphs. Further, the limitation on the degree of the vertices
can be easily removed by using the expansion techniques cited above.
Preliminaries
We assume familiarity with planarity and connectivity of graphs [9, 17]. Since
we consider only planar graphs, we use the term embedding instead of planar
embedding.
An orthogonal drawing of a 4-planar graph G is optimal if it has the minimum
number of bends among all the possible orthogonal drawings of G. When
this is not ambiguous, given an embedding OE of G, we also call optimal the
porthogonal drawing of G with the minimum number of bends that preserves
the embedding OE.
Let G be a biconnected graph. A split pair of G is either a separation-pair
or a pair of adjacent vertices. A split component of a split pair fu; vg is either
an edge (u; v) or a maximal subgraph C of G such that C contains u and v,
and fu; vg is not a split pair of C. A vertex w distinct from u and v belongs to
exactly one split component of fu,vg.
are some pairwise edge disjoint split components of G
with split pairs respectively. The graph G 0 obtained by substituting
each of G
partial graph of G. We denote E virt (E nonvirt ) the set of (non-)virtual edges of
G 0 . We say that G i is the pertinent graph of e
is the representative edge of G i .
Let OE be an embedding of G and let OE 0 be an embedding of G 0 . We say that
OE preserves OE 0 if G 0
can be obtained from G OE by substituting each component
G i with its representative edge.
In the following we summarize SPQR-trees. For more details, see [7, 8].
SPQR-trees are closely related to the classical decomposition of biconnected
graphs into triconnected components [13].
Let fs; tg be a split pair of G. A maximal split pair fu; vg of G with respect
to fs; tg is a split pair of G distinct from fs; tg such that for any other split pair
of G, there exists a split component of fu containing vertices u, v,
s, and t.
be an edge of G, called reference edge. The SPQR-tree T of G
with respect to e describes a recursive decomposition of G induced by its split
pairs. Tree T is a rooted ordered tree whose nodes are of four types: S, P, Q,
and R. Each node of T has an associated biconnected multigraph, called the
skeleton of , and denoted by skeleton(). Also, it is associated with an edge
of the skeleton of the parent of , called the virtual edge of in skeleton().
Tree T is recursively defined as follows.
If G consists of exactly two parallel edges between s and t, then T consists
of a single Q-node whose skeleton is G itself.
If the split pair fs; tg has at least three split components G
the root of T is a P-node . Graph skeleton() consists of k parallel edges
between s and t, denoted e
Otherwise, the split pair fs; tg has exactly two split components, one of them
is the reference edge e, and we denote with G 0 the other split component. If G 0
has cutvertices c that partition G into its blocks G
in this order from s to t, the root of T is an S-node . Graph skeleton() is the
cycle t, and e i connects c i\Gamma1 with c i
If none of the above cases applies, let fs be the maximal
split pairs of G with respect to fs; tg (k 1), and for be the
union of all the split components of fs but the one containing the reference
edge e. The root of T is an R-node . Graph skeleton() is obtained from G
by replacing each subgraph G i with the edge e i between s i and t i .
Except for the trivial case, has children k in this order, such that
i is the root of the SPQR-tree of graph G i [ e i with respect to reference edge
said to be the virtual edge of node i in skeleton()
and of node in skeleton( i ). Graph G i is called the pertinent graph of node
and of edge e i .
The tree T so obtained has a Q-node associated with each edge of G, except
the reference edge e. We complete the SPQR-tree by adding another Q-node,
representing the reference edge e, and making it the parent of so that it
becomes the root.
Let be a node of T . We have: if is an R-node, then skeleton() is a triconnected
if is an S-node, then skeleton() is a cycle; if is a P-node,
then skeleton() is a triconnected multigraph consisting of a bundle of multiple
edges; and if is a Q-node, then skeleton() is a biconnected multigraph
consisting of two multiple edges.
The skeletons of the nodes of T are homeomorphic to subgraphs of G. The
SPQR-trees of G with respect to different reference edges are isomorphic and are
obtained one from the other by selecting a different Q-node as the root. Hence,
we can define the unrooted SPQR-tree of G without ambiguity.
The SPQR-tree T of a graph G with n vertices and m edges has m Q-nodes
and O(n) S-, P-, and R-nodes. Also, the total number of vertices of the skeletons
stored at the nodes of T is O(n).
A graph G is planar if and only if the skeletons of all the nodes of the SPQR-
tree T of G are planar. An SPQR-tree T rooted at a given Q-node represents all
the planar embeddings of G having the reference edge (associated to the Q-node
at the root) on the external face.
3 Lower Bounds for Orthogonal Drawings
In this section we propose some new lower bounds on the number of bends of a
biconnected planar graph. The proofs of the theorems are omitted due to space
limitation.
E) be a biconnected 4-planar graph and H be an orthogonal
drawing of G, we denote by b(H) the total number of bends of H and by b E 0
the number of bends along the edges of E 0
subgraphs of G such that
be an optimal orthogonal drawing
of G i and let H be an orthogonal drawing of G. Then, b(H)
be an embedded partial graph of G and let H 0
be an orthogonal
drawing of G 0
. Suppose H 0
is such that b E nonvirt (H 0
an embedding OE of G that preserves OE 0 and an optimal orthogonal drawing H OE
of G OE .
From Property 1 and Lemma 1 it follows a first lower bound.
Theorem 1 Let E) be a biconnected 4-planar graph and G 0
an embedded
partial graph of G. For each virtual edge e i of G 0 ,
a lower bound on the number of bends of the pertinent graph G i of e i . Consider
an orthogonal drawing H 0
of G 0
such that b E nonvirt (H 0
be an embedding of G that preserves OE 0 , consider any orthogonal drawing H OE of
G OE . We have that:
Remark 1 An orthogonal drawing H 0
of G 0
such that b E nonvirt(HOE 0
can be easily obtained by using the Tamassia's algorithm [19]. Namely,
when two faces f and g share a virtual edge, the corresponding edge of the dual
graph in the minimum cost flow problem of [19] is set to zero.
A second lower bound is described in the following theorem.
Theorem 2 Let G OE be an embedded biconnected 4-planar graph and G 0
an
embedded partial graph of G, such that OE preserves OE 0 . Derive from G 0
the
embedded graph G OE 0 by substituting each virtual edge e i of G 0 ,
any simple path from u i to v i in G i . Consider an optimal orthogonal drawing
and an orthogonal drawing H OE of G OE . Then we have that:
The next corollary allows us to combine the above lower bounds into a hybrid
technique.
Corollary 1 Let G OE be an embedded biconnected 4-planar graph and G 0
an
embedded partial graph of G. Consider a subset F virt of the set of the virtual
edges of G 0
and derive from G 0
the graph G 00
by substituting each edge e i 2
F virt with any path from u i to v i in G i . Denote by E p the set of edges of G
introduced by such substitution. For each e be a lower
bound on the number of bends of the pertinent graph G j of e j . Consider an
orthogonal drawing H 00
of G 00
, such that b E nonvirt(HOE 0
Let H OE be an orthogonal drawing of G OE , we have that:
4 A Branch and Bound Strategy
Let G be a biconnected 4-planar graph. In this section we describe a technique
for enumerating all the possible orthogonal drawings of G and rules to avoid
examining all of them in the computation of the optimum.
The enumeration exploits the SPQR-tree T of G. Namely, we enumerate all
the orthogonal drawings of G with edge e on the external face by rooting T at e
and exploiting the capacity of T rooted at e in representing all the embeddings
having e on the external face. A complete enumeration is done by rooting T at
all the possible edges.
We encode all the possible embeddings of G, implicitely represented by T
rooted at e, as follows.
We visit the SPQR-tree such that a node is visited after its parent, e.g. depth
first or breadth first. This induces a numbering r of the P- and R-nodes
of T . We define a r-uple of variables r that are in one-to-one
correspondence with the P- and R-nodes of T . Each variable x i of X
corresponding to a R-node i can be set to three values corresponding to two
swaps of the pertinent graph of i plus one unknown value. Each variable x j of
X corresponding to a P-node j can be set to up to seven values corresponding
to the possible permutations of the pertinent graphs of the children of j plus
one unknown value. Unknown values represent portions of the embedding that
are not yet specified.
A search tree B is defined as follows. Each node fi of B corresponds to a
different setting X fi of X . Such setting is partitioned into two contiguous (one
of them possibly empty) subsets x Elements of the
first subset contain values specifying embeddings while elements in the second
subset contain unknown values. Leaves of B are in correspondence with settings
of X without unknown values. Internal nodes of B are in correspondence with
settings of X with at least one unknown value. The setting of the root of B
consists of unknown values only. The children of fi (with subsets x
and x child for each
possible value of x h+1 .
Observe that there is a mapping between embedded partial graphs of G and
nodes of B. Namely, the embedded partial graph G fi of G associated to node fi
of B with subsets x obtained as follows. First, set
G fi to skeleton( 1 ) embedded according to x 1 . Second, substitute each virtual
edge e i of 1 with the skeleton of the child i of 1 , embedded according to x i ,
only for 2 i h. Then, recursively substitute virtual edges with embedded
skeletons until all the skeletons in fskeleton( 1 are used.
We visit B breadth-first starting from the root. At each node fi of B with
setting X fi we compute a lower bound and an upper bound of the number of
bends of any orthogonal drawing of G such that its embedding is (partially)
specified by X fi . The current optimal solution is updated accordingly. Children
are not visited if the lower bound is greater than the current optimum.
For each fi, lower bounds and upper bounds are computed as follows by using
the results presented in Section 3.
1. We construct G fi using an array of pointers to the nodes of T . Observe
that at each substitution of a virtual edge with a skeleton a reverse of the
adjacency lists might be needed.
2. We compute lower bounds. G fi is a partial graph of G, with embedding
derived from X fi . Let E virt be the set of virtual edges of G fi . For each
consider the pertinent graph G i and compute a lower bound b i
on the number of bends of G i using Theorem 1. For each e i such that the
value of such lower bound is zero we substitute in G fi edge e i with any
simple path from u i to v i in G i , so deriving a new graph G 0
fi . Denote by
F virt the set of such edges and by E p the set of edges of G introduced by
the substitution (observe that E p may be empty). By Remark 1 we apply
Tamassia's algorithm on G 0
, assigning zero costs to the edges of the dual
graph associated to the virtual edges. We obtain an orthogonal drawing
of G 0
with minimum number of bends on the set
) be such a number of bends. Then, by Corollary 1 we compute the
lower bound L fi at node fi, as
Lower bounds (b i ) can be pre-computed with a suitable pre-processing by
visiting T bottom-up. The pre-computation consists of two phases. We
apply Tamassia's algorithm on the skeleton of each R- and P-node, with
zero cost for virtual edges; in this way we associate a lower bound to each
R- and P-node of T . Note that these pre-computed bounds do not depend
on the choice of the reference edge, so they are computed only once, at
the beginning of BB-ORTH. We visit T bottom-up summing for each Rand
P-node the lower bounds of the children of to the lower bound
of . Note that these pre-computed bounds depend on the choice of the
reference edge, so they are re-computed at any choice of the reference edge.
3. We compute upper bounds. Namely, we consider the embedded partial
graph G fi and complete it to a pertinent embedded graph G OE . The embedding
of G OE is obtained by substituting the unknown values of X fi with
embedding values in a random way. Then we apply Tamassia's algorithm
to G OE so obtaining the upper bound. We also avoid multiple generations
of the same embedded graph in completing the partial graph.
We compute the optimal solution over all possible choices of the reference
edge. To do so, our implementation of SPQR-trees supports efficient evert oper-
ations. Also, to expedite the process we: reuse upper bounds computed during
computations referring to different choices of the virtual edge; avoid to compute
solutions referring to embeddings that have been already explored. If an edge e
that has already been reference edge appears on the external face of some G fi ,
since we have already explored all the embeddings with e on the external face,
we cut from the search tree all the descendants of fi.
One of the consequences of the above discussion is summarized in the following
theorem:
Theorem 3 Let G be a biconnected planar graph. The enumeration schema
adopted by BB-ORTH examines each planar embedding of G exactly once.
5 Implementation
The system has been developed with the C++ language and has full Leda [16]
compatibility. Namely, several classes of Leda have been refined into new classes:
Embedded Graphs, preserve the embedding for each update operation. We have
taken care of a new efficient implementation of several basic methods like DFS,
BFS, st-numbering, topological sorting, etc; Directed Embedded Graphs, with
several network flow facilities. In particular we implemented the minimum cost
flow algorithm that searches for minimum cost augmenting paths with the Dijkstra
algorithm, used in the original paper of Tamassia [19]. We implemented the
corresponding priority queues with the Fibonacci-heaps of Leda; Planar Embedded
Graphs, with faces and dual graph; Orthogonal planar embedded graphs, to
describe orthogonal drawings; Drawn Graphs, with several graphical attributes;
SPQR-trees, with several supporting classes, like split-components, skeletons,
etc. As far as we know, this is the first implementation of the SPQR-trees. Fur-
ther, to support experiments, we have developed a new Leda graph editor, based
on the previous Leda editor. The graphical interface shows an animation of the
algorithm in which the SPQR-tree is rooted at different reference edges. Also,
it displays the relationships between the SPQR-tree and the search tree and the
evolution of the search tree. Clicking at SPQR-tree nodes shows skeletons of
nodes. Of course, to show the animation, the graphical interface exploits several
graph drawing algorithms.
Fig. 1 shows a 17-verticed graph drawn by the GIOTTO, and by BB-ORTH,
respectively. Fig. 2 shows a 100-vertices graph drawn by the GIOTTO, and by
BB-ORTH, respectively.
Finally, we observe that a careful cases analysis of each P-node allows
faster computations of preprocessing lower bounds; such analysis is based on the
number of virtual edges of the skeleton of , on the values of the precomputed
lower bounds of the children of and on length of the shortest paths in the
pertinent graphs of the children of .
6 Experimental Setting and Computational Result
We have tested our algorithm against two randomly generated test suites
overall consisting of about 200 graphs. Such graphs are available on the web
(www.inf.uniroma3.it/people/gdb/wp12/LOG.html) and have been generated
as follows. It is well known that any embedded planar biconnected graph can
be generated from the triangle graph by means of a sequence of insert-vertex
and insert-edge operations. Insert-vertex subdivides an existing edge into two
new edges separated by a new vertex. Insert-edge inserts a new edge between
two existing vertices that are on the same face. Thus, we have implemented
a generation mechanism that at each step randomly chooses which operation
to apply and where to apply it. The two test suites differ in the probability
distributions we have given for insert-vertex and insert-edge. Also, to stress the
performance of the algorithm we have discarded graphs with "a small number"
of triconnected components. Observe that we decided to define a new test suite
instead of using the one of [5] because the biconnected components of those
graphs do not have a sufficiently large number of triconnected components (and
thus a sufficient number of different embeddings) to stress enough BB-ORTH.
For each drawing obtained by the algorithm, we have measured the following
quality parameters. bends: total number of edge bends. maxedgebends: max
number of bends on an edge. unifbends: standard deviation of the number of
edge bends. area: area of the smallest rectangle with horizontal and vertical sides
covering the drawing. totaledgelen: total edge length. maxedgelen: maximum
length of an edge. uniflen: standard deviation of the edge length. screenratio:
deviation of the aspect ratio of the drawing (width/height or height/width) from
the optimal one (4/3). Measured performance parameters are: cpu time, total
number of search tree nodes, number of visited search tree nodes.
Due to space limitations, we show only a subset of the experimental results;
they are summarized in the graphics of Fig. 3.
On the x-axis we either have the number of vertices or the total number (RP)
of R- and P-nodes. All values in the graphics represent average values.
Fig. 3.A, 3.C and 3.D compare respectively the number of bends, the area and
the maxedgelen of the drawing of BB-ORTH with the ones of GIOTTO. Graphics
with white points represent the behavior of BB-ORTH. Graphics with black points
represent the behavior of GIOTTO. The improvement on the number of bends
positively affects all the quality measures. In Fig. 3.B, 3.D and 3.F we show the
percentages of the improvements.
Concerning performance, Fig. 3.G shows the CPU time of BB-ORTH on an
IBM RISC6000. The algorithm can take about one hour on the largest graphs.
Finally, Fig. 3.H compares the number of nodes of the search tree and
the number of nodes that are actually visited. It shows that for high values of
RP the percentage of visited nodes is dramatically low (below 10 %).
7 Conclusions and Open Problems
Most graph drawing problems involve the solution of optimization problems that
are computationally hard. Thus, for several years the research in graph drawing
has focused on the development of efficient heuristics. However, the graphs to be
drawn are frequently of small size (it is unusual to draw an Entity-Relationship
or Data Flow diagram with more than 70-80 vertices) and the available workstations
allow faster and faster computations. Also, the requirements of the users
in terms of aesthetic features of the interfaces are constantly growing.
Hence, recently there has been increasing attention towards the development
of graph drawing tools that privilege the effectiveness of the drawing
against the efficiency of the computation (see e.g., [15]).
In this paper we presented an algorithm for computing an orthogonal drawing
with the minimum number of bends of a graph. The computational results that
we obtained are encouraging both in terms of the quality of the drawings and in
terms of the time performance.
Several problems are still open. For example: is it possible to apply variations
of the techniques presented in this paper to solve the upward drawability testing
problem? A directed graph is upward planar if it is planar and can be drawn
with all the edges following the same direction. The problem to test a directed
graph for upward drawability is NP-complete [10]. How difficult is it to find
an enumeration schema and lower bounds for the problem of computing the
orthogonal drawing in 3-space with the minimum number of bends?
Acknowledgements
We are grateful to Antonio Leonforte for implementing part of the system. We
are also grateful to Sandra Follaro, Armando Parise, and Maurizio Patrignani
for their support.
--R
A better heuristic for orthogonal graph drawings.
Graph Drawing
An experimental comparison of force-directed and randomized graph drawing algorithms
Algorithms for drawing graphs: an annotated bibliography.
An experimental comparison of three graph drawing algorithms.
Spirality of orthogonal representations and optimal drawings of series-parallel graphs and 3-planar graphs
Graph Algorithms.
On the computational complexity of upward and rectilinear planarity testing.
On the computational complexity of upward and rectilinear planarity testing.
Comparing and evaluating layout algorithms within GraphEd.
Dividing a graph into triconnected compo- nents
A note on planar graph drawing algorithms.
Exact and heuristic algorithms for 2-layer straightline crossing minimization
LEDA: a platform for combinatorial and geometric computing.
Planar graphs: Theory and algorithms.
Improved algorithms and bounds for orthogonal drawings.
On embedding a graph in the grid with the minimum number of bends.
Automatic graph drawing and readability of diagrams.
A unified approach to visibility representations of planar graphs.
Efficient embedding of planar graphs in linear time.
Graph Drawing
--TR
--CTR
Markus Eiglsperger , Michael Kaufmann , Martin Siebenhaller, A topology-shape-metrics approach for the automatic layout of UML class diagrams, Proceedings of the ACM symposium on Software visualization, June 11-13, 2003, San Diego, California
Giuseppe Di Battista , Walter Didimo , Maurizio Patrignani , Maurizio Pizzonia, Drawing database schemas, Software: Practice & Experience, v.32 n.11, p.1065-1098, September 2002
Markus Eiglsperger , Carsten Gutwenger , Michael Kaufmann , Joachim Kupke , Michael Jnger , Sebastian Leipert , Karsten Klein , Petra Mutzel , Martin Siebenhaller, Automatic layout of UML class diagrams in orthogonal style, Information Visualization, v.3 n.3, p.189-208, September 2004 | planar embedding;orthogonal drawings;bends;branch and bound;planar graphs;graph drawing |
354877 | Xor-trees for efficient anonymous multicast and reception. | We examine the problem of efficient anonymous multicast and reception in general communication networks. We present algorithms that achieve anonymous communication, are protected against traffic analysis, and require O(1) amortized communication complexity on each link and low computational complexity. The algorithms support sender anonymity, receiver(s) anonymity, or sender-receiver anonymity. |
Introduction
One of the primary objectives of an adversary is to locate and to destroy command-and-control
centers - that is, sites that send commands and data to various stations/agents. Hence, one
of the crucial ingredients in almost any network with command centers is to conceal and
to confuse the adversary regarding which stations issue the commands. This paper shows
how to use standard off-the-shelf cryptographic tools in a novel way in order to conceal the
command-and-control centers, while still assuring easy communication between the centers and
the recipients.
Specifically, we show efficient solutions that hide who is the sender and the receiver (or
both) of the message/directive in a variety of threat models. The proposed solutions are
efficient in terms of communication overhead (i.e., how much additional information must be
transmitted in order to confuse the adversary) and in terms of computation efficiency (i.e.,
how much computation must be performed for concealment). Moreover, we establish rigorous
guarantees about the proposed solutions.
1.1 The problem considered
Modern cryptographic techniques are extremely good in hiding all the contents of data, by
means of encrypting the messages. However, hiding the contents of the message does not hide
the fact that some message was sent from or received by a particular site. Thus, if some
location (or network node) is sending and/or receiving a lot of messages, and if an adversary
can monitor this fact, then even if an adversary does not understand what these messages
are, just the fact that there are a lot of outgoing (or incoming) messages reveals that this
site (or a network node) is sufficiently active to make it a likely target. The objective of
this paper is to address this problem - that is, the problem of how to hide, in an efficient
manner, which site (i.e. command-and-control center) transmits (or receives) a lot of data to
(or from, respectively) other sites in the network. This question was addressed previously in
the literature [Ch81, RS93] at the price of polynomial communication overhead for each bit
of transmission per edge. We show an amortized solution which, after a fixed pre-processing
stage, can transmit an arbitrary polynomial-size message in an anonymous fashion using only
O(1) bits over each link (of a spanning tree) for every data bit transmitted across a link.
1.2 General setting and threat model
We consider a network of processors/stations where each processor/station has a list of other
stations with which it can communicate (we do not restrict here the means of communication,
i.e. it could be computer networks, radio/satellite connections, etc.) Moreover, we do not
restrict the topology of the network - our general methodology will work for an arbitrary
network topology. One (or several) of the network nodes is a command-and-control center
that wishes to send commands (i.e. messages) to other nodes in the network. To reiterate,
the question we are addressing in this paper is how we can hide which site is broadcasting (or
multicasting) data to (a subset of) other processors in the network. Before we explore this
question further, we must specify what kind of attack we are defending against.
A simple attack to defend against is of a restricted adversary (called outside adversary) who
is allowed only to monitor communication channels, but is not allowed to infiltrate/monitor
the internal contents of any processor of the network. (As a side remark, such a weak attack
is very easy to defend against: all processors simply transmit either noise or encrypted messages
on each communication channel; if noise is indistinguishable from encrypted traffic, this
completely hides the communication pattern.) Of course, a more realistic adversary (and the
one that we are considering in this paper) is the (internal) adversary that can monitor all the
communication between stations and which in addition is also trying to infiltrate the internal
nodes of the network.
That is, we consider the adversary that may mount a more sophisticated attack, where he
manages to compromise the security of one or several internal nodes of the network, whereby
he is now not only capable of monitoring the external traffic pattern but is also capable of
examining every message and all the data which passes through (or stored at) this infiltrated
node. Thus, we define an internal k-listening adversary, an adversary that can monitor all the
communication lines between sites and also manages to monitor (the internal contents of) up
to k sites of the network. (This, and similar definitions were considered before in the literature,
see, for example [RS93, CKOR97] and references therein). We remark, though, that in this
paper we restrict our attention to a listening adversary that only monitors traffic but does
not try to sabotage it, similar to [FGY93, KMO94], but with different objectives.
1.3 Comparison with Previous Work
One of the first works (if not the first one) to consider the problem of hiding the communication
pattern in the network is the work of Chaum [Ch81] where he introduced the concept of a mix:
A single processor in the network, called a mix, serves as a relay. A processor P that wants
to send a message m to a processor Q encrypts m using Q's public key to obtain m'. Then
P encrypts the pair (m', Q) using the public key of the mix. The doubly encrypted message is
sent to the mix. The mix decrypts the message (to get the pair (m', Q)) and forwards m' to
Q. Further work in this direction appears in [Pf85, PPW91, SGR97]. The single mix processor
is not secure when this single processor cooperates with the (outside) adversary; if the
processor that serves as a mix is compromised, it can inform the adversary where the messages
are forwarded to. Hence, as Chaum pointed out, a sequence of "mixes" must be employed,
at the price of additional communication and computation. Moreover, the single mix scheme
operates under some statistical assumption on the pattern of communication. In case a single
message is sent to the mix, an adversary that monitors the communication channels can
observe the sender and the receiver of that particular message.
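The structure of this basic mix (though not, of course, its cryptography) can be sketched as follows; the seal/open_ helpers below are placeholders that merely tag a payload with the name of the intended key owner, and are assumptions of this illustration rather than part of [Ch81].

```python
# Structural sketch of Chaum's single-mix relay (placeholder crypto, not secure).
def seal(public_key_owner, payload):
    """Stand-in for public-key encryption: tags the payload with the key owner."""
    return ("enc", public_key_owner, payload)

def open_(owner, ciphertext):
    """Stand-in for decryption: only the intended owner may remove the layer."""
    tag, key_owner, payload = ciphertext
    assert tag == "enc" and key_owner == owner, "wrong key"
    return payload

def sender(m, receiver, mix):
    m_prime = seal(receiver, m)            # encrypt m under Q's public key -> m'
    return seal(mix, (m_prime, receiver))  # encrypt the pair (m', Q) for the mix

def mix_forward(mix, ciphertext):
    m_prime, receiver = open_(mix, ciphertext)  # the mix recovers (m', Q)
    return receiver, m_prime                    # ... and forwards m' to Q

if __name__ == "__main__":
    wrapped = sender("attack at dawn", receiver="Q", mix="MIX")
    dest, m_prime = mix_forward("MIX", wrapped)
    print(dest, open_("Q", m_prime))            # Q decrypts m' to recover m
```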
An extension of the mix scheme is presented by Rackoff and Simon [RS93] who embedded
an n-element sorting network of depth polynomial in log(n) that mixes incoming messages
and requires only polynomially many (in log(n)) synchronous steps. In each such step every
message is sent from one site of the network to another site of the network. Thus, the message
delay may be proportional to log(n) times the diameter of the network. The statistical
assumptions on the pattern of communication are somewhat relaxed in [RS93] by introducing
dummy communication: every processor sends a message simultaneously. However, the
number of (real and dummy) messages arriving at each destination is available to the traffic
analyzer. Rackoff and Simon also presented in [RS93] a scheme that copes with passive internal
adversaries by the use of randomly chosen committees and multi-party computation (e.g.,
[GMW87, BGW88, CCD88, CFGN96, CKOR97].)
More generally, secure multi-party computation can be used to hide the communication
pattern in the network (see, for example, [GMW87, Ch88, WP90, BGW88, CCD88, CFGN96,
CKOR97]) via secure function evaluation. However, anonymous communication is a very restricted
form of hiding the participants' inputs and hence may benefit from less sophisticated and
more efficient algorithms.
In particular, Chaum suggested in [Ch88] to use the dc-net approach in order to achieve
anonymous communication. Our approach is similar to the dining cryptographers solution in
[Ch88], where a graph characterization of the random bits distribution is given. We present a
specific choice (an efficient instance that satisfies Chaum's graph characterization) of selecting
(a small number of) keys for each processor, and a procedure to securely distribute the keys and
use O(1) amortized communication complexity on each link. Our algorithm is proven correct
by a new argument showing that each communicated bit has an equal probability of being 0
or 1 from the adversary's point of view. In [Ch88] only the case of an anonymous sender is considered; in this
work we also suggest schemes for the cases in which the receiver is (receivers are, respectively)
anonymous, and in which the sender and the receiver are anonymous to each other.
In [Ch88] it is assumed that the underlying communication network is a ring or that a
back-off mechanism is repeatedly used to send data. In this work we consider the problem
of anonymous communication on a spanning tree of a general-graph communication network.
We note that solutions for star and tree networks are briefly mentioned in [Pf85, PW87], with
no details on the way communication starts and terminates for these specific networks. Our
contribution is a detailed design for a (spanning) tree communication network. The details
include: a new scheme for seed selection that ensures anonymity in the presence of an outside
adversary and a k-listening internal dynamic adversary; schemes for an anonymous receiver, as well
as for an anonymous sender and receiver; and the initialization (including seed distribution),
communication, and termination procedures that preserve anonymity for the case of
a (spanning) tree communication network. In addition, we use an extra random sequence
(produced by a pseudo-random generator) shared by the sender (and the receiver(s)) to
encrypt (respectively, decrypt) the message, avoiding the use of an additional, different scheme for
encryption and decryption during the transmission of the (long) messages. This new approach
fits the transmission of a very long sequence of bits, such as video information, to several recipients.
Thus, it can be used for anonymous multicast, such as multicast by cable TV.
Our initialization scheme is designed to cope with the problem of the information revealed
by the back-off mechanism (see [BB89]) by using a predefined order of transmission.
We note that in this work we do not concern ourselves with an active adversary that can
corrupt the program or forge messages on the links, as assumed in [Wa89]. The extension
suggested in [Wa89] is the design of a fail-stop broadcast instead of assuming reliable broadcast.
In a network of n processors our algorithm (after a pre-processing stage) sends O(1) bits
on each tree link in order to transmit a clear-text bit of data, and each processor computes
O(k) pseudo-random bits for the transmission of a clear-text bit. Multiple anonymous transmissions
are possible by executing several instances of our algorithm in parallel. Each instance
uses part of the bandwidth of the communication links. Our algorithm is secure against both
an outside adversary and a k-listening internal dynamic adversary. (We remark, though, that we
only consider an eavesdropping "listening" adversary, similar to [FGY93, KMO94], and do
not consider a Byzantine adversary which tries to actively disrupt the communication, as in
[GMW87].)
1.4 A simple example
In this subsection, we examine a very simple special case, in order to illustrate the issues being
considered and a solution to this special case. We stress, though, that we develop a general
framework that works for the general case (e.g. the case of general communication graph,
unknown receiver, etc.) as well.
Suppose we are dealing with a network having 9 nodes:
P_1 -> P_2 -> P_3 -> P_4 -> P_5 -> P_6 -> P_7 -> P_8 -> R
where R is the "receiver" node and one of the P_i is the command-and-control center which
must broadcast commands to R. The other P_j's, for j != i, are "decoys" which are used for
transmission purposes from P_i to R and also are used to "hide" which particular P_i is the real
command and control center. That is, in this simplified example, we only wish to hide from an
adversary which of the P i is the real command and control center which sends messages to R.
Before we explain our solution, we examine several inefficient but natural simple
strategies, and then explain their drawbacks.
Communication-inefficient solution: One simple (but inefficient!) way to hide which P i is
the command-and-control center is for every P i to broadcast an (encrypted) stream of messages
to R. Thus, R receives 8 different streams of messages, ignores all the messages except those
from the real command-and-control center, and decrypts that one. Every processor P i forwards
messages of all the smaller-numbered processors and in addition sends its own message. Clearly,
an adversary who is monitoring all the communication channels and which can also monitor the
internal memory of one of the P i 's (which is not the actual command-and-control center) does
not know which P j is broadcasting the actual message. Drawback: Notice that instead of one
incoming message, R must receive 8 messages, thus the throughput of how much information
the real command-and-control center can send to R is only 1/8 of the total capacity! As the
network becomes larger this solution becomes even more costly. Note that this solution enables
the receiver to identify the sender.
Computation-inefficient solution: In the previous example, the drawback was that the
messages from decoy command-and-control nodes were taking up the bandwidth of the channel.
In the following solution, we show how this difficulty can be avoided. In order to explain this
solution, we shall use pseudo-random generators 1 [BM84, Ha90, ILL89]. We first pick 8 seeds
for the pseudo-random generator, and give to processor P_i the seed s_i. Processor P_1
stretches its seed s_1 into a long pseudo-random sequence and sends, at each time step, the next
bit of its sequence to processor P_2. Processor P_2 takes the bit it got from processor P_1,
"xors" it with its own next bit from its pseudo-random sequence G(s_2), and sends it to P_3, and
so forth. The processor P_j which is the real command-and-control center additionally "xors"
into each bit it sends out a bit of the actual message m i . Processor R is given all the 8 seeds
so it can take the incoming message (which is the message from the command-and-control
center "xored" with 8 different pseudo-random sequences). Hence, R can compute all
the 8 pseudo-random sequences, subtract (i.e. xor) the incoming message with all the 8 pseudo-random
sequences and get the original command-and-control message m. The advantage of
this solution is that any P j which is not a command-and-control center (and not R), clearly
can not deduce which other processor is the real center. Moreover, the entire bandwidth
of the channel between command-and-control processor and the receiver is used to send the
messages from the center to the receiver. Drawback: The receiver must compute 8 different
pseudo-random sequences in order to recover the actual message. As the network size grows,
this becomes prohibitively expensive in terms of the computation that the receiver needs to
perform in order to compute the actual message m.
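A minimal sketch of this computation-inefficient variant is given below, using a hash-based stand-in for the pseudo-random generator; the prg_bit helper, the toy seeds, and the choice of P_4 as the hidden center are assumptions of the sketch.

```python
import hashlib

def prg_bit(seed: bytes, i: int) -> int:
    """i'th bit of the pseudo-random sequence G(seed); hash-based stand-in for a real PRG."""
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[0] & 1

n, center = 8, 4                                     # P_4 plays the hidden center here
seeds = [bytes([j]) * 16 for j in range(1, n + 1)]   # toy seeds s_1 .. s_8
message = [1, 0, 1, 1, 0, 1, 0, 0]

# Bit i travels P_1 -> P_2 -> ... -> P_8 -> R; every P_j xors in G(s_j)[i],
# and the center additionally xors in the data bit d_i.
received_by_R = []
for i, d in enumerate(message):
    bit = 0
    for j in range(1, n + 1):
        bit ^= prg_bit(seeds[j - 1], i)
        if j == center:
            bit ^= d
    received_by_R.append(bit)

# R knows all eight seeds, so it strips every pseudo-random sequence.
recovered = [b ^ (sum(prg_bit(s, i) for s in seeds) % 2)
             for i, b in enumerate(received_by_R)]
assert recovered == message
print("recovered:", recovered)
```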
Our solution for this simple example: Here, we present a solution that is both computation-
efficient and communication-efficient and is secure against an adversary that can monitor all
the communication lines and additionally can learn internal memory contents of any one of
the intermediate processors. The seed distribution (for a particular communication session) is
as follows:
• Pick 9 random seeds, s_0, s_1, ..., s_8, for the pseudo-random generator. 1
• Give to the real command-and-control processor the seed s_0.
• Additionally, give to processor P_1 the seeds {s_1, s_2}, to processor P_2 the seeds {s_2, s_3},
to processor P_3 the two seeds {s_3, s_4}, and so on. That is, we give to each processor P_i
the seeds {s_i, s_{i+1}}.
• Give to the receiver, R, one seed: s_0.
1 A pseudo-random generator takes an initial "seed" of truly random bits, and deterministically
expands it into a long sequence of pseudo-random bits. There are many such commercially available
pseudo-random generators, and any such "off-the-shelf" generator that is sufficiently secure and efficient will
suffice.
Suppose processor P_4 is the real command-and-control center. Then the distribution of seeds
is as above, with P_4 additionally holding s_0.
Now, the transmission of the message is performed in the same fashion as in the previous
solution - that is, each processor receives a bit-stream from its predecessor, "xors" a single
bit from each pseudo-random sequence that it has, and sends it to the next processor. The
command-and-control center "xors" bits of the message into each bit that it sends out.
Notice that adjacent processors "cancel" one of the pseudo-random sequences, by xoring
it twice, but introduce a new sequence. For example, processor P_2 cancels s_2 but "introduces"
s_3. Moreover, each processor must now only compute the output of at most three seeds. Yet,
it can easily be verified that if the adversary monitors all the communication lines and in
addition can learn the seeds of any single processor P_i which is not the command-and-control center,
then it cannot gain any information as to which other P_i is the real command-and-control
center, even after learning the two seeds that belong to processor P_i.
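The following sketch simulates this transmission on the chain of eight processors. Since the table with the exact seed assignment for this example did not survive in this version of the text, the sketch assumes one concrete assignment consistent with the description: seeds s_0, ..., s_8, processor P_i holding {s_i, s_{i+1}} with the chain seeds taken cyclically (so P_8 holds {s_8, s_1}), the sender P_4 additionally holding s_0, and R holding only s_0.

```python
import hashlib

def prg_bit(seed: bytes, i: int) -> int:
    """i'th bit of G(seed); hash-based stand-in for a real pseudo-random generator."""
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[0] & 1

n, center = 8, 4
s = [bytes([j]) * 16 for j in range(9)]          # toy seeds s_0 .. s_8
# Assumed assignment: P_i holds {s_i, s_{i+1}}, cyclically, so P_8 holds {s_8, s_1}.
holds = {j: [s[j], s[j % n + 1]] for j in range(1, n + 1)}
holds[center].append(s[0])                       # the center also holds s_0
message = [0, 1, 1, 0, 1, 0, 0, 1]

received_by_R = []
for i, d in enumerate(message):
    bit = 0
    for j in range(1, n + 1):                    # the bit travels P_1 -> ... -> P_8
        for seed in holds[j]:
            bit ^= prg_bit(seed, i)
        if j == center:
            bit ^= d                             # the center xors in the data bit
    received_by_R.append(bit)

# Every chain seed is held by exactly two processors, so it cancels; only
# G(s_0) and the data survive, and R strips G(s_0) with its single seed.
recovered = [b ^ prg_bit(s[0], i) for i, b in enumerate(received_by_R)]
assert recovered == message
print("recovered:", recovered)
```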
Of course, the simplified example that we presented works only provided that the adversary
can monitor neither the actual command-and-control center nor the memory
contents of the receiver. (We note that these and other restrictions can be resolved - we address
this further in the paper.) Moreover, it should be stressed that the restricted solution presented
above does not work if the adversary is allowed to monitor more than one decoy processor.
Note that our solution requires that the command-and-control center and the receiver share a special
common seed s_0; one obvious extension is to ensure that every two processors have a distinct
additional seed that is used for communication between themselves. We should point out that
in the rest of the paper we show how the above scheme can be extended to one that is robust
against adversaries that can monitor up to k stations, where in our solution every processor
is required to compute a number of different pseudo-random sequences proportional to k
only (in particular, at most 2k + 3). Moreover, we also show how to generalize the method
to arbitrary-topology networks/infrastructures. Additionally, we show how initial distribution
of seeds can be done without revealing the command-and-control center and how the actual
location of the command-and-control center can be hidden from the recipients of the messages
as well. Finally, we show how communication from stations back to the command-and-control
center could be achieved without the stations knowing at which node of the network the center
is located and how totally anonymous communication can be achieved.
1.5 Private-key solutions vs. public-key solutions
The above simple solution is a private-key solution, that is, we assume that before the protocol
begins, a set of seeds for pseudo-random function must be distributed in a private and
anonymous manner. Thus, we combine this solution with a preprocessing stage in which we
distribute these seeds using a public-key solution, that is, a solution where we assume that all
users/nodes only have corresponding public and private keys and do not share any information
a-priori. Thus, our overall solution is a public-key solution, where before communication
begins, we do not assume that users share any private data. As usual in many such cryptographic
settings, our overall efficiency comes from the fact that we switch from a public-key to
a private-key solution and then show (1) how to make an efficient private-key implementation and
(2) how to set up the private keys in a pre-processing stage by using public keys in an anonymous
and private manner.
The rest of the paper is organized as follows. The problem statement appears in Section
2. The anonymous communication scheme (our Xor-Tree Algorithm), which is the heart of our approach,
appears in Section 3. Sections 4 and 5 sketch the anonymous seed transmission and the initialization
and termination schemes, respectively. Extensions and concluding remarks appear in Section
6.
2 Problem Statement
A communication network is described by a communication graph G = (V, E). The nodes,
P_1, ..., P_n, are the processors of the network. The edges of the graph represent
bidirectional communication channels between the processors. Let us first define the assumptions
and requirements used, starting with the adversary models. The adversary is a passive
listening adversary that does not intervene in the computation; in particular, it neither
forges messages on the links nor corrupts the programs of the processors.
ffl An outside adversary is an adversary that can monitor all the communication links but
not the contents of the processors memory.
ffl An internal dynamic k-listening adversary (inside adversary, in short) is an adversary
that can choose to "bug" (i.e., listen to) the memory of up to k processors. The targeted
processors are called corrupted, compromised, or colluding processors. Corrupted processors
reveal all the information they know to the adversary; however, they still behave
according to the protocol. The adversary does not have to choose the k faulty processors
in advance: while the adversary has corrupted fewer than k processors, it can
choose the next processor to be corrupted using the information it has gained so
far from the processors that are already corrupted.
The following assumptions are used in the first phase of our algorithm which is responsible
for the seeds distribution. Each of the n processors has a public-key/private-key pair. The
public key of a processor, P , is known to all the processors while the private key of P is known
only to P .
The anonymity of the communicating parties can be categorized into four cases:
ffl Anonymous to the non participating processors: A processor P wishes to send a message
to processor Q without revealing to the rest of the processors and to the inside and
outside adversary the fact that P is communicating with Q.
ffl Anonymous to the sender and the non participating processors: P wishes to receive a
message from Q without revealing its identity to any processor including Q as well as to
an inside and outside adversary.
ffl Anonymous to the receiver(s) and the non participating processors: P wishes to send
(or multicast) a message without revealing its identity to any processor as well as to an
inside and an outside adversary.
ffl Anonymous to the sender, to the receiver, and the non participating processors: A processor
P wishes to communicate with some other processor, without knowing the identity
of the processor, and without revealing its identity to any processor including the one it
is communicating with, as well as to an inside and outside adversary. (This is similar to
the "chat-room" world-wide-web applications, where two processors wish to communicate
with one another totally anonymously, without revealing to each other or anybody
else their identity.)
The efficiency of a solution is measured by the communication overhead which is the number
of bits sent over each link in order to send a bit of clear-text data. The efficiency is also
measured by the computation overhead which is the maximal number of computation steps
performed by each processor in order to transfer a bit of clear-text data.
The algorithm is a combination of anonymous seeds transmission, initialization, communication
and termination. In the anonymous seeds transmission phase, processors that would like
to transmit, anonymously send seeds for a pseudo-random sequence generator to the rest of
the processors. The anonymous seed transmission phase also resolves conflicts between multiple requests
for transmission by an anonymous back-off mechanism. Once the seeds are distributed,
the communication can start. Careful communication initialization (and termination)
procedures that hide the identity of the sender must be performed.
We first describe the core of our algorithm which is the communication phase. During
the communication phase seeds are used for the production of pseudo-random sequences. The
anonymous seeds distribution is presented following the description of the anonymous communication
phase.
3 Anonymous Communication
3.1 Computation-inefficient O(n) solution
The communication algorithm is designed for a spanning tree T of a general communication
graph, where the parent-child relation is naturally defined by the election of a root. We start
with a simple but inefficient algorithm which requires O(n) computation steps per processor.
(This algorithm is similar to the computation-inefficient solution presented in Section 1, but for
a general-topology graph. We then show how to make it computation-efficient as well.) In
this (computation-inefficient) solution the sender chooses a distinct seed for each processor.
Then the sender can encrypt each bit of information using the seeds of all the processors,
including its own seeds. Each such seed is used for producing a pseudo-random sequence. The
details of the algorithm appear in Figure 1. The symbol ⊕ is used to denote the binary xor
operation.
Note that the i'th bit produced by the root is the result of xoring twice each of the i'th bits
of the pseudo-random sequences, except for the sequence obtained from the sender's additional seed:
every other sequence is xored once by the sender and once more during the upward communication.
Each encrypted bit of data is xored by the receiver(s)
with the corresponding bit of the sequence obtained from the sender's additional seed to reveal the clear-text.
Note that the scheme is resilient to any number
of colluding processors as long as the sender and the receiver(s) are non-faulty. This simple
scheme requires a single node (the sender) to compute O(n) pseudo-random bits for each bit
of data. (We remark that, in contrast, our Xor-Tree Algorithm requires the computation of
only O(k) pseudo-random bits to cope with an outside adversary and an internal dynamic k-listening
adversary.) The next lemma states the communication and computation complexities
of the algorithm presented in Figure 1.
Lemma 3.1 The next two assertions hold for every bit of data to be transmitted over each
edge of the spanning tree:
ffl The communication overhead of the algorithm is O(1) per edge.
ffl The computation overhead of our algorithm is O(n) pseudo-random bits to be computed
by each processor per each bit of data.
Proof: In each time unit two bits are sent in each link: one upwards and the other downwards.
Since a bit of data is sent every time unit (possibly except during the first and last h time units, where
h is the depth of the tree), the number of bits sent over a link to transmit a bit of data
is O(1). The second assertion follows from the fact that the sender computes the greatest
number of pseudo-random bits in every time unit, namely O(n) pseudo-random bits per
time unit.
Seeds Distribution -
• Assign (anonymously) a distinct seed s_i to each processor P_i.
• Assign to the sender all the seeds s_1, ..., s_n of all the processors and an
additional seed s_0.
• Assign the receiver(s) the additional seed s_0.
Upwards Communication:
P_j is the sender -
• Let d_i be the i'th bit of data.
• Let c_1, ..., c_l be the i'th bits received from the children (if any) of P_j.
• Let b_0 be the i'th bit of the pseudo-random sequence obtained from the
additional seed s_0 of P_j.
• Let b_1, ..., b_n be the i'th bits of the pseudo-random sequences obtained
from the seeds s_1, ..., s_n.
• The i'th bit P_j sends to its parent (if any) is d_i ⊕ b_0 ⊕ b_1 ⊕ ... ⊕ b_n ⊕ c_1 ⊕ ... ⊕ c_l.
P_j is not the sender -
• Let c_1, ..., c_l be the i'th bits received from the children (if any) of P_j.
• Let b_j be the i'th bit of the pseudo-random sequence obtained from the seed s_j.
• The i'th bit that P_j communicates to its parent (if any) is b_j ⊕ c_1 ⊕ ... ⊕ c_l.
Downwards Communication -
• The root processor calculates an output as if it had a parent and sends the result
to every one of its children.
• Every processor which is not the root sends to its children every bit received
from its parent.
• The receiver(s) decrypt the downward communication by xoring the i'th bit
that arrives from the parent with the i'th bit of the pseudo-random sequence
obtained from s_0.
Figure 1: The O(n)-computation-steps algorithm, for a processor P_j.
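To make Figure 1 concrete, the sketch below runs the upward and downward communication on a small assumed tree, again with a hash-based stand-in for the pseudo-random generator. So that every per-processor sequence cancels at the root exactly as described above, the sketch lets the sender xor the bits of s_0, the data, and the seeds of all the other processors; this reading of the sender's rule is an assumption of the sketch.

```python
import hashlib

def prg_bit(seed: bytes, i: int) -> int:
    """i'th bit of G(seed); hash-based stand-in for a real pseudo-random generator."""
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[0] & 1

# Assumed example: a spanning tree on 7 processors rooted at 1.
children = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}
n, sender = 7, 5
seeds = {j: bytes([j]) * 16 for j in range(1, n + 1)}   # s_1 .. s_n, one per processor
s0 = b"\xaa" * 16                                       # the sender's additional seed s_0

def upward_bit(v: int, i: int, d: int) -> int:
    """The i'th bit processor v sends to its parent (Figure 1)."""
    bit = 0
    for c in children[v]:
        bit ^= upward_bit(c, i, d)                      # bits received from the children
    if v == sender:
        bit ^= d ^ prg_bit(s0, i)                       # data bit and the bit from s_0
        for j in range(1, n + 1):
            if j != sender:                             # seeds of all the other processors,
                bit ^= prg_bit(seeds[j], i)             # so each s_j is xored an even
    else:                                               # number of times overall
        bit ^= prg_bit(seeds[v], i)                     # only its own seed's bit
    return bit

message = [1, 0, 0, 1, 1, 0]
broadcast = [upward_bit(1, i, d) for i, d in enumerate(message)]  # the root's downward bits
decoded = [b ^ prg_bit(s0, i) for i, b in enumerate(broadcast)]   # the receiver uses s_0
assert decoded == message
print("decoded:", decoded)
```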
3.2 Towards our O(k) solution: The choice of seeds
For the realization of the communication phase of our O(k) solution we use n(k + 1) seeds,
plus one additional seed for the sender, where k is less than ⌊n/2 - 1⌋. Each processor receives
2k + 2 seeds. To describe the seed distribution decisions of the sender we use k + 1 levels of seeds,
each of which consists of two layers of seeds. We order the processors by their (arbitrarily assigned)
indices P_1, ..., P_n, and we use the relation follows in a straightforward manner.
The first level - Let L^1_1 = s^1_1, s^1_2, ..., s^1_n be the seeds that the sender (randomly)
chooses for the first level. The sender uses the sequence of seeds L^1_1
for the first layer of the first level and L^1_2 = s^1_2, s^1_3, ..., s^1_n, s^1_1 for the second layer. Note
that L^1_2 is obtained by rotating L^1_1 once, so that processor P_i receives the
seeds s^1_i and s^1_{i+1}.
The l'th level - Similarly, for the l'th level, 1 ≤ l ≤ k + 1, the sender (randomly) chooses n
distinct seeds for this level, L^l_1 = s^l_1, ..., s^l_n, to be the seeds of the l'th level, and
uses two sequences: L^l_1 and L^l_2 = s^l_{l+1}, ..., s^l_n, s^l_1, ..., s^l_l, where L^l_2 is obtained by
rotating L^l_1 l times. Processor P_i receives the seeds s^l_i and s^l_{i+l}, where indices are taken
cyclically, so that the last l processors receive, as their second seed of this level, seeds
shared with the first l processors.
Thus, at the end of this procedure every processor is assigned 2k+2 distinct seeds.
Figure 2: The choice of seeds.
The seeds distribution procedure appears in Figure 2. An example for the choice of seeds
for the processors appears in Figure 3.
Figure 3: An example of the distribution of seeds, where k = 2.
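One concrete reading of the rotation scheme of Figure 2 is sketched below; the naming of the seeds as pairs (l, j) standing for s^l_j and the fully cyclic treatment of the indices are assumptions of the sketch. The check_properties routine verifies the two properties stated next.

```python
from collections import defaultdict

def assign_seeds(n: int, k: int):
    """held[i] = seeds of processor P_i (1-based) under the two-layer,
    (k+1)-level rotation scheme of Figure 2; seed (l, j) stands for s^l_j."""
    assert 0 < k < n // 2 - 1, "the paper assumes k < floor(n/2 - 1)"
    held = defaultdict(set)
    for l in range(1, k + 2):                       # levels 1 .. k+1
        for i in range(1, n + 1):
            held[i].add((l, i))                     # first layer: s^l_i
            held[i].add((l, (i - 1 + l) % n + 1))   # second layer: s^l_{i+l}, cyclically
    return held

def check_properties(n: int, k: int):
    held = assign_seeds(n, k)
    owners = defaultdict(list)
    for i, seeds in held.items():
        for seed in seeds:
            owners[seed].append(i)
    assert all(len(v) == 2 for v in owners.values())    # each seed shared by exactly two
    assert all(len(held[i]) == 2 * k + 2 for i in held)  # 2k+2 seeds per processor
    for i in range(1, n + 1):
        followers = {(i - 1 + d) % n + 1 for d in range(1, k + 2)}
        shared_with = {j for seed in held[i] for j in owners[seed] if j != i}
        assert followers <= shared_with              # shares a seed with each of the
    print("n =", n, "k =", k, ": all properties hold")  # k+1 processors that follow it

if __name__ == "__main__":
    check_properties(9, 2)
```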
The choice of seeds made by the sender has the following properties:
• Each seed is shared by exactly two processors.
• For every processor P, P shares a (distinct) seed with each of the k+1 processors that
immediately follow P (if there are at least k+1 such processors), or with each of the
remaining processors, up to and including P_n, otherwise.
3.3 The Xor-Tree Algorithm
Here, we present our main algorithm, the Xor-Tree Algorithm. The Xor-Tree Algorithm appears
in Figure 4.
3.4 An abstract game
In this subsection we describe an abstract game that will serve us in analyzing and proving
the correctness of the Xor-Tree Algorithm presented in the previous subsection.
The adversary gets to see the outputs of all the players. The adversary can pick k of
the players and see their seeds. We claim, and later prove, that when the adversary does not
pick the sender, then every one of the remaining (n - k) processors that are not picked by the
adversary is equally likely to be the sender for any poly-bounded adversary 2 .
We proceed by showing that the above assignment of seeds yields a special seed ds P for
each processor P . We choose ds P out of the seeds assigned to each non-faulty processor P .
We order the processors by their index in a cyclic fashion, such that the processor that follows
the i'th processor, i != n, is the processor with index i + 1, and the processor that follows
the n'th processor is the first processor. Then we assign a new index to each processor such
that the sender has the index one, the processor that follows the sender has the index two and
so on and so forth. These new indices are used for the interpretation of next, follows, prior
and last in the description of the choice of special seeds that appears in Figure 6. Recall that
with overwhelming probability every two processors share at most one seed.
Note that by our special seeds selection, described in Figure 6, the special seeds are not
known to the k faulty processors.
Theorem 3.2 In the abstract game, any of the (n - k) non-faulty processors is equally likely
to be the sender for any poly-bounded internal k-listening adversary.
Proof: We prove that the i'th bit produced by any non-faulty processor is equally likely
to be 0 or 1 (for any poly-bounded adversary). Let P be the first non-faulty processor that
follows the sender (P is among the first k + 1 processors that follow the sender). Let ds_{P_1} be
the special seed of the sender, which is shared only with (the non-faulty processor) P. The i'th
bit that the sender outputs is the result of a xor operation with the i'th bit of the pseudo-random
2 If the adversary can predict who is the sender then we can use this adversary to break a pseudo-random
generator.
Seeds Distribution -
• Assign seeds to the processors as described in Figure 2.
• Assign the sender one additional seed, s_0.
• Assign the receiver(s) an additional seed: the seed s_0 of the sender.
Upwards Communication:
P_j is the sender -
• Let d_i be the i'th bit of data.
• Let c_1, ..., c_l be the i'th bits received from the children (if any) of P_j.
• Let b_1, ..., b_{2k+2} be the i'th bits of the pseudo-random sequences obtained
from the seeds of P_j.
• Let b_{2k+3} be the i'th bit of the pseudo-random sequence obtained from the
additional seed s_0 of P_j.
• The i'th bit P_j sends to its parent (if any) is d_i ⊕ b_1 ⊕ ... ⊕ b_{2k+3} ⊕ c_1 ⊕ ... ⊕ c_l.
P_j is not the sender -
• Let c_1, ..., c_l and b_1, ..., b_{2k+2} be defined as above.
• The i'th bit that P_j communicates to its parent (if any) is b_1 ⊕ ... ⊕ b_{2k+2} ⊕ c_1 ⊕ ... ⊕ c_l.
Downwards Communication -
• The root processor calculates an output as if it had a parent and sends the result
to every one of its children.
• Every non-root processor sends to its children every bit received from its parent.
• The receiver(s) decrypt the downward communication by xoring the i'th bit
that arrives from the parent with the i'th bit of the pseudo-random sequence
obtained from s_0.
Figure 4: The Xor-Tree Algorithm, for a processor P_j.
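Combining the seed assignment of Figure 2 with the tree communication of Figure 4 gives the following end-to-end simulation. The example tree, the deterministic toy seeds, and the hash-based prg_bit are assumptions of the sketch; a real execution would, of course, use fresh random seeds distributed as in Section 4.

```python
import hashlib

def prg_bit(seed: bytes, i: int) -> int:
    """i'th bit of G(seed); hash-based stand-in for a real pseudo-random generator."""
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[0] & 1

def seed_bytes(name) -> bytes:
    """Toy deterministic seed material; a real run would use fresh random seeds."""
    return hashlib.sha256(repr(name).encode()).digest()

n, k = 9, 2
sender = 5
children = {1: [2, 3], 2: [4, 5, 6], 3: [7, 8], 4: [], 5: [9], 6: [], 7: [], 8: [], 9: []}
s0 = seed_bytes("s0")

# Seeds of Figure 2: processor P_i holds s^l_i and s^l_{i+l} (cyclically) for l = 1..k+1.
held = {i: [seed_bytes((l, i)) for l in range(1, k + 2)] +
           [seed_bytes((l, (i - 1 + l) % n + 1)) for l in range(1, k + 2)]
        for i in range(1, n + 1)}

def upward_bit(v: int, i: int, d: int) -> int:
    """The i'th bit v sends to its parent: the children's bits, its own 2k+2 PRG bits,
    and (for the sender) the data bit and the bit obtained from s_0 (Figure 4)."""
    bit = 0
    for c in children[v]:
        bit ^= upward_bit(c, i, d)
    for seed in held[v]:
        bit ^= prg_bit(seed, i)
    if v == sender:
        bit ^= d ^ prg_bit(s0, i)
    return bit

message = [1, 1, 0, 1, 0, 0, 1, 0]
broadcast = [upward_bit(1, i, d) for i, d in enumerate(message)]   # root's downward bits
decoded = [b ^ prg_bit(s0, i) for i, b in enumerate(broadcast)]    # the receiver uses s_0
assert decoded == message
print("decoded:", decoded)
```

Since every level seed is held by exactly two processors and all processors participate in the upward xor, only the data bit and the bit derived from s_0 survive at the root, which is what the receiver strips with its copy of s_0.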
Seeds Assignment - Assign seeds to the processors as described in Figure 2. Assign
the sender one additional seed.
Computation - Each processor, P, uses its seeds to compute pseudo-random sequences.
At the i'th time unit the sender S computes the i'th bit of each of its pseudo-random
sequences, xors these bits together with the i'th bit of data, and outputs the result. In the same
time unit, every other processor P computes the i'th bit of each of its pseudo-random
sequences, xors these bits, and outputs the result.
Figure 5: The abstract game.
The sender P_1 - Each of the k+1 processors that immediately follow the sender shares
exactly one seed with the sender. Since there are at most k colluding processors, one of
these k+1 processors must be non-faulty. Pick P, the first such non-faulty processor,
and assign ds_{P_1}, the special seed of the sender, to be the seed that the sender shares with P.
A processor P that is not among the k last processors - If P is not among
the k last processors, then P is assigned 2k+2 seeds; k+1 of these seeds
are from the first layers of the k+1 seed levels. These k+1 seeds are new - they do not
appear in any processor prior to P. Since there are at most k colluding processors,
one of the next k+1 processors is non-faulty. Let Q be the first such non-faulty
processor, and assign ds_P to be the seed that P shares with Q. Repeat the procedure
until you reach a non-faulty processor that is among the last k processors.
A processor Q that is among the k last processors - Note that Q does not
introduce new seeds, since some of its seeds are assigned to the first processors
(at least the one in the (k+1)'th level). Fortunately, Q shares a single new seed
with each of the last processors. This fact allows us to continue the special seed
selection procedure by choosing the seed shared with the next non-faulty processor.
Figure 6: The choice of special seeds.
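The selection of Figure 6 can be phrased as a small search: going around the ring from the sender, each non-faulty processor takes as its special seed the (unique) seed it shares with the next non-faulty processor. The sketch below uses the seed assignment of Figure 2 and assumes the set of faulty processors is given explicitly.

```python
from collections import defaultdict

def assign_seeds(n, k):
    """held[i] = seeds of P_i under Figure 2; seed (l, j) stands for s^l_j."""
    held = defaultdict(set)
    for l in range(1, k + 2):
        for i in range(1, n + 1):
            held[i].update({(l, i), (l, (i - 1 + l) % n + 1)})
    return held

def special_seeds(n, k, sender, faulty):
    """For each non-faulty processor (taken in ring order from the sender), the seed
    it shares with the next non-faulty processor that follows it, as in Figure 6."""
    assert len(faulty) <= k and sender not in faulty
    held = assign_seeds(n, k)
    order = [(sender - 1 + d) % n + 1 for d in range(n)]   # sender first, then followers
    non_faulty = [p for p in order if p not in faulty]
    ds = {}
    for a, b in zip(non_faulty, non_faulty[1:] + non_faulty[:1]):
        shared = held[a] & held[b]
        assert shared, "consecutive non-faulty processors are at most k+1 apart"
        ds[a] = min(shared)          # with this assignment they share exactly one seed
    return ds

if __name__ == "__main__":
    ds = special_seeds(n=9, k=2, sender=4, faulty={5, 6})
    for p, seed in ds.items():
        print("ds of P_%d = s^%d_%d" % (p, seed[0], seed[1]))
    # None of these special seeds is held by a faulty processor:
    held = assign_seeds(9, 2)
    assert all(seed not in held[f] for seed in ds.values() for f in {5, 6})
```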
sequence (among other pseudo-random sequences) obtained from ds_{P_1}. Since only P (which is
a non-faulty processor) shares ds_{P_1} with the sender, it holds that the i'th bit output by the
sender is equally likely to be 0 or 1 (for any poly-bounded internal k-listening adversary). A
similar argument holds for the output of P, since there exists a special seed shared with the
next non-faulty processor Q. In general, it holds for the output of every non-faulty processor.
The same argument holds if any of the other non-faulty processors is the sender. Thus, for
any polynomially bounded k-internal and external adversary, the distribution of the outputs is
computationally indistinguishable for the different possible identities of the sender.
The fact that the adversary can be a dynamic adversary is implied by Corollary 3.3.
The proof of the corollary is similar to the proof of Theorem 3.2.
Corollary 3.3 For any k' ≤ k, after the adversary chooses k' faulty processors, any of the
remaining (n - k') non-faulty processors is equally likely to be the sender for any poly-bounded internal
k'-listening adversary.
3.5 Reduction to the abstract game
In this subsection we prove that if there is an algorithm that reveals information on the identity
of the sender in the tree then there exists an algorithm that reveals information on the identity
of the sender in the abstract game. The above reduction together with Theorem 3.2 yields the
proof of correctness for the Xor-Tree algorithm.
Assume that the adversary reveals information on the sender in a tree T of n processors.
Then an abstract game of n nodes is mapped to the tree as follows:
1. Each processor of the abstract game is assigned to a node of the tree T .
2. The output of every processor to its parent is computed as follows. Let the height of
a processor P in T be the number of edges in the longest path from P to a leaf
such that the path does not traverse the root. We start with the processors that are at height
0, i.e., the leaves. The output of the processors that were assigned to the leaves of
the tree is not changed, i.e., it is identical to their output in the abstract game. Once
we have computed the output of the processors at height h, we use these computed outputs to
compute the outputs of the processors at height h + 1. Let Q be a processor at height h + 1,
let b_1, ..., b_l be the i'th computed bits that are output by the children of Q, and
let b_Q be the original i'th output bit of Q in the abstract game. The computed output
of Q is b_1 ⊕ b_2 ⊕ ... ⊕ b_l ⊕ b_Q.
Figure 7: The reduction.
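A small sketch of the reduction is given below: the abstract-game output bits are recomputed bottom-up on the tree, so that the computed bit of a node is the xor of its own game bit with the already computed bits of its children. The example tree and bit values are arbitrary.

```python
def reduce_to_tree(children, game_bits, root=1):
    """Map one round of abstract-game output bits onto the tree of Figure 7:
    computed[v] = game_bits[v] xor the computed bits of v's children."""
    computed = {}

    def visit(v):
        bit = game_bits[v]
        for c in children[v]:
            bit ^= visit(c)
        computed[v] = bit
        return bit

    visit(root)
    return computed

if __name__ == "__main__":
    children = {1: [2, 3], 2: [4, 5], 3: [], 4: [], 5: []}
    game_bits = {1: 1, 2: 0, 3: 1, 4: 1, 5: 0}     # one round of abstract-game outputs
    computed = reduce_to_tree(children, game_bits)
    print(computed)
    # The root's computed bit is the xor of all game bits, exactly the value produced
    # by the upward communication of the Xor-Tree Algorithm.
    assert computed[1] == (1 ^ 0 ^ 1 ^ 1 ^ 0)
```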
Theorem 3.4 In the Xor-Tree Algorithm, any of the (n - k) non-faulty processors is equally
likely to be the sender for any poly-bounded internal k-listening adversary.
Proof: If there exists an adversary A that reveals information on the identity of the sender
in a tree T then there exists an abstract game with the same number of processors and the
same seeds distribution, such that the application of the reduction in Figure 7 yields the
communication pattern on T and reveals information on the sender identity in the abstract
game. This contradicts Theorem 3.2 and thus contradicts the existence of A.
The next Lemma states the communication and computation overheads of the anonymous
communication algorithm.
Lemma 3.5 The next two assertions hold for every bit of data to be transmitted over each
edge of the spanning tree:
ffl The communication overhead of the algorithm is O(1) per edge.
ffl The computation overhead of our algorithm is O(k) pseudo-random bits to be computed
by each processor per each bit of data.
Proof: In each time unit two bits are sent on each link: one upwards and the other downwards.
Since a bit of data is sent every time unit (possibly except during the first and last h time units, where
h is the depth of the tree), the number of bits sent over each link to transmit a bit of
data is O(1). The second assertion follows from the fact that in each time unit each processor
generates at most 2k + 3 pseudo-random bits.
4 Anonymous Seeds Transmission
We first outline the main ideas of the seed transmission scheme and then give full details.
Every processor has a public encryption key, known to all the other processors. A virtual ring
defined by an Euler tour on the tree is used for the seed transmission. Note that the indices
of the processors used in this description are related to their location on the virtual ring. First,
all processors send messages to P_1 over the (virtual) ring. Those processors that wish to
broadcast send a collection of seeds, and those processors that do not wish to broadcast send
dummy messages of equal length. To do so in an anonymous fashion (so that P_1 does not know
which message is from which processor), k + 1 of Chaum's mixes [Ch81] are used, where the k
(real) processors just before P_1 in the Euler tour are used as mixes (together with P_1 itself,
which removes the innermost encryption layer). Hence, P_1 can identify
the number of non-dummy arriving messages but not their origin. In case more than one non-dummy
message reaches P_1, a standard back-off algorithm is initiated by P_1. Once exactly
one message (containing a collection of seeds) arrives at P_1, the seed distribution procedure
described above (for sending a collection of seeds to P_1) is used to send the seeds to P_2, and
so on. (At this point the processors know that only one processor wishes to broadcast.) This
procedure is repeated n times in order to allow the anonymous sender to transmit a collection
of seeds to every processor. Notice that this process is quadratic in the size of the ring, the
number of colluding processors k, and the length of the security parameter (i.e., let g be a
security parameter and k as before; then we send O((gkn)^2) bits per edge). Thus, as long as the
message size p to be broadcast is greater than O((gkn)^2) we achieve O(1) overall amortized
cost per edge, and otherwise we get O((gkn)^2/p) amortized cost.
The details follow. The seed transmission procedure uses a virtual ring R defined by an
Euler tour of the tree T. Note that each edge of T appears exactly twice in R, and therefore
the number of edges and nodes in R is 2n - 2. The seed transmission procedure starts with
the transmission of seeds to the first processor P_1. Let L_1 be the list of
processors in R in clockwise order starting with P_1; the indices 2 to 2n - 2 are implied by the
Euler tour and not by the indices of the processors in T. Note that a single processor of T
may appear more than once in L_1. We use the term instance for each such appearance. Define
the reduced list RL_1 to be a list of processors that is obtained from L_1 by removing all but
the first instance of each processor. Thus, in RL_1 every processor of T appears exactly once.
The communication of seeds uses the anti-clockwise direction. Define the last l real processors
to be the first l processors in RL_1. When transmitting seeds to P_i, the list L_i, the reduced list RL_i,
and the last l processors are defined analogously.
In the first stage, every processor that wants to communicate with another processor sends
an encrypted message with the seeds to be used by P_1. Note that P_1 can be a faulty processor;
thus a careful transmission must be carried out. Let L'_1 be the list of
processors in R in anti-clockwise order, i.e., L_1 in reversed order. Again, L'_1 includes more than
one instance of each processor P of T. Define the active instance of a processor P of T in L'_1
to be the last appearance of P in L'_1. Define an active message to be a message that arrives
at an active instance of a processor. The details of the anonymous seed transmission to P_1
appear in Figure 8.
As we prove in the sequel, no information concerning the identity of the requesting processors
is revealed during the anonymous seed transmission to P_1, except the information that can
be deduced from the value of n_t - the number of processors that would like to transmit.
Once n_t = 1, the processors start sending messages to P_2 in a fashion similar to the one
used to send seeds to P_1. Then the processors send seeds to P_3, and so on and so forth, until the
processors send messages to P_n. Note that when n_t = 1 there is exactly one sender for the
next communication session, and at the end of the seed distribution procedure every processor
holds the seeds distributed by the sender.
Lemma 4.1 A coalition of k colluding processors cannot reveal the identity of the seed distributors.
Proof: We prove the lemma for the transmission of the seeds from the sender to P 1 . Note that
one of the last k+1 real processors must be non-faulty. If P 1 is non-faulty then no information
The protocol starts - The first processor to send a message m_n to P_1 is P_n. If P_n wants to transmit
data, then m_n contains seeds to be used by P_1; otherwise m_n is an empty message,
i.e., a message that can be identified by P_1 as a null message. P_n uses the public
keys pu_1, pu_2, ..., pu_{k+1} of the last k+1 real processors to encrypt
m_n in a nested fashion: first encrypting m_n with pu_1, then encrypting the resulting
message with pu_2, and so on. P_n sends the (k+1)-nested encrypted message m^{k+1}_n to
the processor that is next to the active instance of P_n in L'_1.
Non-active message - When a processor P_i receives a non-active message, it forwards
the message to the next processor according to L'_1.
Active message - We now proceed by describing the actions taken by a processor upon
the arrival of an active message.
• We first describe the action taken by a processor P_i that is not among the last k+1
real processors. When an active message with the set {m^{k+1}_{i+1}, ..., m^{k+1}_n} arrives
at (the active instance of) P_i, P_i adds its own (k+1)-nested encrypted message
(again, containing seeds to be used by P_1 or a null message) to the set received,
and sends the resulting set to the next processor according to L'_1.
• We now turn to consider a processor P_i that is among the last k+1 real
processors. When an active message arrives at P_i, P_i removes one layer of
encryption from every message in the set, using its private key. P_i adds its own message to P_1,
encrypted in a nested fashion by the public keys of the last real processors that have not yet been
traversed (so that its message carries the same number of remaining encryption layers as the others).
P_i then randomly reorders the resulting set of messages and sends the reordered set to the processor that
is next according to L'_1. (Note that, following the first such reordering, the position
of a message in the set is no longer related to the identity of its sender.)
Arrival at P_1 - P_1 receives an active message, decrypts every message with its private key,
and finds out the number n_t of the processors that would like
to transmit. If n_t != 1, then P_1 sends a message with the value of n_t that traverses the
virtual ring. Upon receiving such a message, each processor that wants to transmit
randomly chooses a waiting time in the range of, say, 1 to 2n_t. The procedure of
sending seeds to P_1 is repeated until n_t = 1.
Figure 8: Anonymous seed transmission to P_1.
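The nested encryption and the random reordering of Figure 8 are illustrated below with placeholder public-key operations (seal/open_ merely tag a payload with the key owner's name); the routing along the Euler tour, the addition of each mix's own message along the way, and the back-off handling are omitted, and the toy instance with k = 2 is an assumption of the sketch.

```python
import random

def seal(owner, payload):
    """Placeholder for encryption under `owner`'s public key (not real cryptography)."""
    return ("enc", owner, payload)

def open_(owner, ciphertext):
    tag, key_owner, payload = ciphertext
    assert tag == "enc" and key_owner == owner, "wrong private key"
    return payload

def nest(payload, key_owners):
    """Encrypt in a nested fashion, innermost layer first (pu_1, then pu_2, ...)."""
    for owner in key_owners:
        payload = seal(owner, payload)
    return payload

# Assumed toy instance: k = 2, so the last k+1 = 3 real processors "Mix3", "Mix2", "P1"
# (in travel order) peel the layers; P1's key is the innermost one.
mixes_in_travel_order = ["Mix3", "Mix2", "P1"]
layers_inner_to_outer = ["P1", "Mix2", "Mix3"]

requests = {                     # what each ordinary processor submits (seeds or null)
    "P7": ["seed-A", "seed-B"],
    "P6": None,
    "P5": None,
}
batch = [nest(m, layers_inner_to_outer) for m in requests.values()]

for hop in mixes_in_travel_order:
    batch = [open_(hop, c) for c in batch]   # peel one encryption layer
    random.shuffle(batch)                    # random reordering hides the origin
    # (a real processor among the last k+1 would also add its own nested message here)

n_t = sum(m is not None for m in batch)      # P_1 counts the non-dummy requests
print("P1 sees", n_t, "request(s):", [m for m in batch if m is not None])
```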
concerning the identity of the seed distributors is revealed to the adversary. Otherwise,
when P_1 is faulty, let P_i be the non-faulty processor that is the last to reorder the set of
messages upon their arrival. Since every message arriving at P_i is
encrypted with P_i's public key, no set of k faulty processors can decrypt it (unless it
was originated by a faulty processor). P_i randomly reorders the set of messages before
forwarding it; hence it holds that a coalition of k processors cannot reveal the identity of the
sender of any message in the reordered set.
5 Initialization and Termination
When the seed distribution procedure is over, the transmission of data may start. P_n
broadcasts a signal on the tree that notifies the leaves that they can start transmitting data.
The leaves start sending data in a way that ensures that every non-leaf processor receives
the i'th bit from all its children simultaneously. Thus, the delay in starting the transmission of a
particular leaf l is proportional to the difference between the length of the longest path from a leaf to the
root and the distance of l from the root. Each non-leaf processor waits to receive the i'th
bit from each of its children, uses these bits and its seeds to compute its own i'th bit, and
sends the output to its parent. Note that buffers can be used in case the processors are not
completely synchronized.
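The synchronization rule above amounts to delaying each leaf by the difference between the depth of the deepest leaf and its own depth; a small sketch on an arbitrary example tree:

```python
def leaf_start_delays(children, root=1):
    """Delay each leaf so that the i'th bits of all children reach every internal node
    (and ultimately the root) in the same time unit."""
    depth = {root: 0}
    stack = [root]
    while stack:
        v = stack.pop()
        for c in children[v]:
            depth[c] = depth[v] + 1
            stack.append(c)
    leaves = [v for v in children if not children[v]]
    deepest = max(depth[v] for v in leaves)
    return {v: deepest - depth[v] for v in leaves}

if __name__ == "__main__":
    children = {1: [2, 3], 2: [4], 3: [5, 6], 4: [7, 8], 5: [], 6: [], 7: [], 8: []}
    print(leaf_start_delays(children))   # leaves 7, 8 start immediately; 5, 6 wait one unit
```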
The sender can terminate the session by sending a termination message that is not encrypted
by its additional seed. This message will be decrypted by the root that will broadcast
it to the rest of the processors to notify the beginning of a new anonymous seeds transmission.
6 Extensions and Concluding Remarks
Our treatment so far considered the anonymous sender case, which is also anonymous to
the non-participating processors. A simple modification of the algorithm can support the
anonymous receiver case: the receiver plays the role of a sender in the previous solution in order
to communicate, in an anonymous fashion, an additional seed to the sender. Then the sender
uses the same scheme as in the anonymous sender case with the seed it got from the
receiver.
To achieve anonymous communication in which both the sender and the receiver are
anonymous, do the following. The two participants, P and Q, that would like to communicate each
send distinct seeds anonymously to P_1. It is possible that more than two participants
will anonymously send distinct seeds to P_1. In such a case, P_1 will broadcast to the processors
that more than two processors tried to chat anonymously, and a back-off mechanism will be
used until exactly two participants, P and Q, send seeds to P_1. Then P_1 will encrypt and
broadcast the two seeds it got, each seed encrypted (using distinct intervals of the pseudo-random
expansions of the two seeds) by the other seed. Hence, each of the two processors will
use its seed to reveal the seed of the other processor. At this stage P and Q will continue
and anonymously send seeds to the next processor, and the same procedure continues for the
subsequent processors. Now P has a set of seeds that are used for the encryption of messages sent to Q, and
Q has a set of seeds used for the encryption of messages sent to P. They both act as senders, using the
bit resulting from xoring the bits produced by their set of seeds as the bit of the special
seed known to the receiver in our anonymous sender scheme. The back-off mechanism ensures
that one of P and Q starts the communication, and then the other can reply (when the first
allows it to, i.e., stops transmitting data). We remark that it is possible to have more than
two participants by a similar scheme.
The security of the above algorithm is derived from the fact that there must be a non-faulty
processor among the processors used to exchange the seeds; therefore the adversary does not know at
least one key used to encrypt and decrypt messages by the sender and the receiver.
Acknowledgment
We thank Oded Goldreich, Ron Rivest and the anonymous referees for
helpful remarks.
--R
"Detection of disrupters in the DC protocol"
"An efficient probabilistic public-key encryption scheme which hides all partial information"
"How to Generate Cryptographically Strong Sequences of Pseudo-Random Bits"
"Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation"
"Adaptively Secure Multi-Party Computation"
"Randomness vs. Fault- Tolerance"
"Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms"
"Multiparty Unconditionally Secure Pro- tocols"
"The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability"
"Achieving Electronic Privacy"
"Eavesdropping Games: A Graph-Theoretic Approach to Privacy in Distributed Systems,"
"How To Play Any Mental Game"
"Pseudo-Random Generators under Uniform Assumptions"
"Pseudo-Random Generation from One-Way Functions,"
"Reducibility and Completeness in Multi-Party Private Computations"
"How to Implement ISDNs Without User Observability - Some Re- marks"
"Network without User Observability,"
"ISDN-MIXes - Untraceable Communication with Very Small Bandwidth Overhead,"
"Anonymous Connections and Onion Routing"
"Unconditional Sender and Recipient Untraceability in spite of active attacks"
"Cryptographic Defense Against Traffic Analysis"
--TR
How to generate cryptographically strong sequences of pseudo-random bits
Networks without user observability
How to play ANY mental game
The dining cryptographers problem: unconditional sender and recipient untraceability
Completeness theorems for non-cryptographic fault-tolerant distributed computation
Multiparty unconditionally secure protocols
Unconditional sender and recipient untraceability in spite of active attacks
Detection of disrupters in the DC protocol
The dining cryptographers in the disco
Cryptographic defense against traffic analysis
Adaptively secure multi-party computation
Randomness vs. fault-tolerance
The art of computer programming, volume 1 (3rd ed.)
A Pseudorandom Generator from any One-way Function
Untraceable electronic mail, return addresses, and digital pseudonyms
ISDN-MIXes
Anonymous Connections and Onion Routing
--CTR
Chin-Chen Chang , Chi-Yien Chung, An efficient protocol for anonymous multicast and reception, Information Processing Letters, v.85 n.2, p.99-103, 31 January
Nicholas Hopper , Eugene Y. Vasserman, On the effectiveness of k-anonymity against traffic analysis and surveillance, Proceedings of the 5th ACM workshop on Privacy in electronic society, October 30-30, 2006, Alexandria, Virginia, USA
Steven S. Seiden , Peter P. Chen , R. F. Lax , J. Chen , Guoli Ding, New bounds for randomized busing, Theoretical Computer Science, v.332 n.1-3, p.63-81, 28 February 2005
Jiejun Kong , Dapeng Wu , Xiaoyan Hong , Mario Gerla, Mobile traffic sensor network versus motion-MIX: tracing and protecting mobile wireless nodes, Proceedings of the 3rd ACM workshop on Security of ad hoc and sensor networks, November 07-07, 2005, Alexandria, VA, USA | anonymous communication;anonymous multicast |
354878 | Configuring role-based access control to enforce mandatory and discretionary access control policies. | Access control models have traditionally included mandatory access control (or lattice-based access control) and discretionary access control. Subsequently, role-based access control has been introduced, along with claims that its mechanisms are general enough to simulate the traditional methods. In this paper we provide systematic constructions for various common forms of both of the traditional access control paradigms using the role-based access control (RBAC) models of Sandhu et al., commonly called RBAC96. We see that all of the features of the RBAC96 model are required, and that although for the mandatory access control simulation, only one administrative role needs to be assumed, for the discretionary access control simulations, a complex set of administrative roles is required. | INTRODUCTION
Role-based access control (RBAC) has recently received considerable attention as
a promising alternative to traditional discretionary and mandatory access controls
(see, for example, Proceedings of the ACM Workshop on Role-Based Access Con-
trol, 1995-2000). In RBAC, permissions are associated with roles, and users are
made members of appropriate roles thereby acquiring the roles' permissions. This
greatly simplifies management of permissions. Roles can be created for the various job functions in an organization and users then assigned roles based on their responsibilities and qualifications. Users can be easily reassigned from one role to
another. Roles can be granted new permissions as new applications and systems
are incorporated, and permissions can be revoked from roles as needed.
An important characteristic of RBAC is that by itself it is policy neutral. RBAC
is a means for articulating policy rather than embodying a particular security policy
(such as one-directional information flow in a lattice). The policy enforced in a particular system is the net result of the precise configuration and interactions of
various RBAC components as directed by the system owner. Moreover, the access
control policy can evolve incrementally over the system life cycle, and in large
systems it is almost certain to do so. The ability to modify policy to meet the
changing needs of an organization is an important benefit of RBAC.
Traditional access control models include mandatory access control (MAC), which
we shall call lattice-based access control (LBAC) here [Denning 1976; Sandhu 1993],
and discretionary access control (DAC) [Lampson 1971; Sandhu and Samarati 1994;
Sandhu and Samarati 1997]. Since the introduction of RBAC, several authors have
discussed the relationship between RBAC and these traditional models [Sandhu
1996; Sandhu and Munawer 1998; Munawer 2000; Nyanchama and Osborn 1996;
Nyanchama and Osborn 1994]. The claim that RBAC is more general than all of
these traditional models has often been made. The purpose of this paper is to show
how RBAC can be configured to enforce these traditional models.
Classic LBAC models are specifically constructed to incorporate the policy of one-directional information flow in a lattice. This one-directional information flow can be applied for confidentiality, integrity, confidentiality and integrity together, or
for aggregation policies such as Chinese Walls [Sandhu 1993]. There is nonetheless
strong similarity between the concept of a security label and a role. In particular,
the same user cleared to, for example, Secret can on different occasions login to a system at Secret and Unclassified levels. In a sense the user determines what role (Secret or Unclassified) should be activated in a particular session.
This leads us naturally to ask whether or not LBAC can be simulated using
RBAC. If RBAC is policy neutral and has adequate generality it should indeed be
able to do so, particularly since the notion of a role and the level of a login session
are so similar. This question is theoretically significant because a positive answer would establish that LBAC is just one instance of RBAC, thereby relating two distinct access control models that have been developed with different motivations. A positive answer is also practically significant, because it implies that the same Trusted Computing Base can be configured to enforce RBAC in general and LBAC
in particular. This addresses the long held desire of multi-level security advocates
that technology which meets needs of the larger commercial marketplace be applicable
to LBAC. The classical approach to fulfilling this desire has been to argue that LBAC has applications in the commercial sector. So far this argument has not been terribly productive. RBAC, on the other hand, is specifically motivated
by needs of the commercial sector. Its customization to LBAC might be a more
productive approach to dual-use technology.
In this paper we answer this question positively by demonstrating that several
variations of LBAC can be easily accommodated in RBAC by configuring a few
components. 1 We use the family of RBAC models recently developed by
Sandhu et al [Sandhu et al. 1996; Sandhu et al. 1999] for this purpose. This family
is commonly called the RBAC96 model. Our constructions show that the concepts
of role hierarchies and constraints are critical to achieving this result.
Changes in the role hierarchy and constraints lead to different variations of LBAC. A simulation of LBAC in RBAC was first given by Nyanchama and Osborn [Nyan-
chama and Osborn 1994]; however, they do not exploit role hierarchies and constraints
and cannot handle variations so easily as the constructions of this paper.
Discretionary access control (DAC) has been used extensively in commercial ap-
plications, particularly in operating systems and relational database systems. The
central idea of DAC is that the owner of an object, who is usually its creator, has
discretionary authority over who else can access that object. DAC, in other words,
involves owner-based administration of access rights. Whereas for LBAC, we do
not need to discuss a complex administration of access rights, we will see that for
DAC, the administrative roles developed in Sandhu, Bhamidipati, and Munawer
[1999] are crucial. Because each object could potentially be owned by a unique
owner, the number of administrative roles can be quite large. However, we will
show that the role administration facilities in the RBAC96 model are adequate to
build a simulation of these sometimes administratively complex systems.
The rest of this paper is organized as follows. We review the family of RBAC96
models due to Sandhu, Coyne, Feinstein, and Youman [1996] in section 2. This is
followed by a quick review of LBAC in section 3. The simulation of several LBAC
variations in RBAC96 is described in section 4. This is followed by a brief discussion
in Section 5 of other RBAC96 configurations which also satisfy LBAC properties.
Section 6 introduces several major variations of DAC. In Section 7 we show how
each of these variations can be simulated in RBAC96. Section 8 summarizes the
results. Preliminary versions of some of these results have appeared in Sandhu
[1996], Sandhu and Munawer [1998], Nyanchama and Osborn [1996] and Osborn
[1997].
2. RBAC MODELS
A general RBAC model including administrative roles was defined by Sandhu et al. [Sandhu et al. 1996]. It is summarized in Figure 1. The model is based on
three sets of entities called users (U ), roles (R), and permissions (P ). Intuitively, a
1 It should be noted that RBAC will only prevent overt flows of information. This is true of any access control model, including LBAC. Information flow contrary to the one-directional requirement
in a lattice by means of so-called covert channels is outside the purview of access control
per se. Neither LBAC nor RBAC addresses the covert channel issue directly. Techniques used to
deal with covert channels in LBAC can be used for the same purpose in RBAC.
user is a human being or an autonomous agent, a role is a job function or job title
within the organization with some associated semantics regarding the authority and
responsibility conferred on a member of the role, and a permission is an approval
of a particular mode of access to one or more objects in the system.
(The figure depicts users (U), regular and administrative roles (R, AR), regular and administrative permissions (P, AP), sessions (S), the assignment relations UA, PA, AUA and APA, the role hierarchies RH and ARH, and the constraints that apply to all of these components.)
Fig. 1. The RBAC96 Model
The user assignment (UA) and permission assignment (PA) relations of Figure 1
are both many-to-many relationships (indicated by the double-headed arrows). A
user can be a member of many roles, and a role can have many users. Similarly,
a role can have many permissions, and the same permission can be assigned to
many roles. There is a partially ordered role hierarchy RH, also written as ≥, where x ≥ y signifies that role x inherits the permissions assigned to role y. In the
work of Nyanchama and Osborn [Nyanchama and Osborn 1994; Nyanchama and
Osborn 1999; Nyanchama and Osborn 1996], the role hierarchy is presented as an
acyclic directed graph, and direct relationships in the role hierarchy are referred to
as edges. Inheritance along the role hierarchy is transitive; multiple inheritance is
allowed in partial orders.
Figure
1 shows a set of sessions S. Each session relates one user to possibly
many roles. Intuitively, a user establishes a session during which the user activates
some subset of roles that he or she is a member of (directly or indirectly by means
of the role hierarchy). The double-headed arrow from a session to R indicates
that multiple roles can be simultaneously activated. The permissions available to
the user are the union of permissions from all roles activated in that session. Each
session is associated with a single user, as indicated by the single-headed arrow from
the session to U . This association remains constant for the life of a session. A user
may have multiple sessions open at the same time, each in a dierent window on the
workstation screen for instance. Each session may have a dierent combination of
active roles. The concept of a session equates to the traditional notion of a subject
in access control. A subject (or session) is a unit of access control, and a user may
have multiple subjects (or sessions) with dierent permissions active at the same
time.
The bottom half of Figure 1 shows administrative roles and permissions. RBAC96
distinguishes roles and permissions from administrative roles and permissions re-
spectively, where the latter are used to manage the former. Administration of
administrative roles and permissions is under control of the chief security officer or delegated in part to administrative roles. The administrative aspects of RBAC96 elaborated in [Sandhu et al. 1999] are relevant for the DAC discussion in Section 6. For the purposes of the LBAC discussion, we assume a single security officer is the only one who can configure various components of RBAC96.
Finally, Figure 1 shows a collection of constraints. Constraints can apply to any
of the preceding components. An example of constraints is mutually disjoint roles,
such as purchasing manager and accounts payable manager, where the same user
is not permitted to be a member of both roles.
The following definition formalizes the above discussion.
Definition 1. The RBAC96 model has the following components:
|U, a set of users
|R and AR, disjoint sets of (regular) roles and administrative roles
|P and AP, disjoint sets of (regular) permissions and administrative permissions
|S, a set of sessions
|PA ⊆ P × R, a many-to-many permission to role assignment relation
|APA ⊆ AP × AR, a many-to-many permission to administrative role assignment relation
|UA ⊆ U × R, a many-to-many user to role assignment relation
|AUA ⊆ U × AR, a many-to-many user to administrative role assignment relation
|RH ⊆ R × R, a partially ordered role hierarchy
|ARH ⊆ AR × AR, a partially ordered administrative role hierarchy (both hierarchies are written as ≥ in infix notation)
|user : S → U, a function mapping each session s_i to the single user user(s_i) (constant for the session's lifetime)
|roles : S → 2^(R ∪ AR), a function mapping each session s_i to a set of roles and administrative roles roles(s_i) ⊆ {r | (∃ r' ≥ r)[(user(s_i), r') ∈ UA ∪ AUA]} (which can change with time); session s_i has the permissions ∪_{r ∈ roles(s_i)} {p | (∃ r'' ≤ r)[(p, r'') ∈ PA ∪ APA]}
|there is a collection of constraints stipulating which values of the various components enumerated above are allowed or forbidden.
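To make Definition 1 concrete, here is a minimal Python sketch of the core (non-administrative) RBAC96 relations and of how a session's permissions follow from the role hierarchy; the class, the method names and the set-based encoding are ours, and administrative components, sessions and constraints are omitted.

from dataclasses import dataclass, field

@dataclass
class RBAC96Core:
    # UA: (user, role) pairs; PA: (permission, role) pairs; RH: (senior, junior) pairs.
    UA: set = field(default_factory=set)
    PA: set = field(default_factory=set)
    RH: set = field(default_factory=set)

    def juniors(self, role):
        # All roles r'' with role >= r'' (including role itself).
        result, frontier = {role}, {role}
        while frontier:
            frontier = {j for (s, j) in self.RH if s in frontier} - result
            result |= frontier
        return result

    def authorized_roles(self, user):
        # A user may activate any role junior to one of the user's assigned roles.
        assigned = {r for (u, r) in self.UA if u == user}
        roles = set()
        for r in assigned:
            roles |= self.juniors(r)
        return roles

    def session_permissions(self, active_roles):
        # Permissions of a session: union over active roles of the permissions of
        # the role and of all its juniors.
        perms = set()
        for r in active_roles:
            for j in self.juniors(r):
                perms |= {p for (p, rr) in self.PA if rr == j}
        return perms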
3. LBAC (OR MAC) MODELS
Lattice-based access control is concerned with enforcing one-directional information flow in a lattice of security labels. It is typically applied in addition to classical discretionary access controls, but in this section we will focus only on the MAC component. A simulation of DAC in RBAC96 is found in Section 7. Depending upon the nature of the lattice, the one-directional information flow enforced by LBAC can be applied for confidentiality, integrity, confidentiality and integrity together, or for aggregation policies such as Chinese Walls [Sandhu 1993]. There are also variations of LBAC where the one-directional information flow is partly relaxed to achieve selective downgrading of information or for integrity applications [Bell 1987; Lee 1988; Schockley 1988].
The mandatory access control policy is expressed in terms of security labels
attached to subjects and objects. A label on an object is called a security classification, while a label on a user is called a security clearance. It is important to
understand that a Secret user may run the same program, such as a text editor, as
a Secret subject or as an Unclassified subject. Even though both subjects run the same program on behalf of the same user, they obtain different privileges due to their security labels. It is usually assumed that the security labels on subjects and objects, once assigned, cannot be changed (except by the security officer). This last assumption, that security labels do not change, is known as tranquillity. (Non-tranquil LBAC can also be simulated in RBAC96 but is outside the scope of this paper.) The security labels form a lattice structure as defined below.
Definition 2. (Security Lattice) There is a finite lattice of security labels SC with a partially ordered dominance relation ≥ and a least upper bound operator. □
An example of a security lattice is shown in Figure 2. Information is only permitted to flow upward in the lattice. In this example, H and L respectively denote high and low, and M1 and M2 are two incomparable labels intermediate to H and L. This is a typical confidentiality lattice where information can flow from low to high but not vice versa.
The specific mandatory access rules usually specified for a lattice are as follows, where λ signifies the security label of the indicated subject or object.
Definition 3. (Simple Security Property) Subject s can read object o only if λ(s) ≥ λ(o). □
Definition 4. (Liberal ★-property) Subject s can write object o only if λ(o) ≥ λ(s). □
The ★-property is pronounced as the star-property. For integrity reasons sometimes a stricter form of the ★-property is stipulated. The liberal ★-property allows a low subject to write a high object. This means that high data may be maliciously or accidentally destroyed or damaged by low subjects. To avoid this possibility we can employ the strict ★-property given below.
Definition 5. (Strict ★-property) Subject s can write object o only if λ(o) = λ(s). □
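As a concrete illustration, the following sketch checks these access rules for the example lattice of Figure 2; the dominance table, the label strings and the function names are ours.

# Dominance relation (reflexive) for the lattice of Figure 2.
DOMINATES = {
    ("H", "H"), ("H", "M1"), ("H", "M2"), ("H", "L"),
    ("M1", "M1"), ("M1", "L"),
    ("M2", "M2"), ("M2", "L"),
    ("L", "L"),
}

def dominates(x, y):
    # x >= y in the security lattice.
    return (x, y) in DOMINATES

def can_read(subject_label, object_label):
    # Simple security property: read only if the subject's label dominates the object's.
    return dominates(subject_label, object_label)

def can_write_liberal(subject_label, object_label):
    # Liberal star-property (write-up): write only if the object's label dominates the subject's.
    return dominates(object_label, subject_label)

def can_write_strict(subject_label, object_label):
    # Strict star-property (write-equal): write only if the two labels are equal.
    return subject_label == object_label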
Fig. 2. A Partially Ordered Lattice
The liberal ★-property is also referred to as write-up and the strict ★-property as non-write-up or write-equal.
In variations of LBAC the simple-security property is usually left unchanged as we will do in all our examples. Variations of the ★-property in LBAC whereby the one-directional information flow is partly relaxed to achieve selective downgrading
of information or for integrity applications [Bell 1987; Lee 1988; Schockley 1988]
will be considered later.
4. CONFIGURING RBAC FOR LBAC
We now show how different variations of LBAC can be simulated in RBAC96. It turns out that we can achieve this by systematically changing the role hierarchy and defining appropriate constraints. This suggests that role hierarchies and constraints are central to defining policy in RBAC96.
4.1 A Basic Lattice
We begin by considering the example lattice of Figure 2 with the liberal ★-property.
Subjects with labels higher up in the lattice have more power with respect to read
operations but have less power with respect to write operations. Thus this lattice
has a dual character. In role hierarchies subjects (sessions) with roles higher in
the hierarchy always have more power than those with roles lower in the hierarchy.
To accommodate the dual character of a lattice for LBAC we will use two dual
hierarchies in RBAC96, one for read and one for write. These two role hierarchies
for the lattice of Figure 2 are shown in Figure 3(a). Each lattice label x is modeled
as two roles xR and xW for read and write at label x respectively. The relationship
among the four read roles and the four write roles is respectively shown on the left
and right hand sides of Figure 3(a). The duality between the left and right lattices
is obvious from the diagrams.
To complete the construction we need to enforce appropriate constraints to reflect the labels on subjects in LBAC. Each user in LBAC has a unique security clearance.
This is enforced by requiring that each user in RBAC96 is assigned to exactly two
roles xR and LW. An LBAC user can login at any label dominated by the user's
clearance. This requirement is captured in RBAC96 by requiring that each session
has exactly two matching roles yR and yW. The condition that x ≥ y, that is the
(a) Liberal ★-Property: the read roles HR, M1R, M2R, LR ordered as in the lattice of Figure 2, and the write roles LW, M1W, M2W, HW ordered by the inverse of that lattice.
(b) Strict ★-Property: the read roles ordered as in (a), with the write roles HW, M1W, M2W, LW mutually incomparable.
Fig. 3. Role Hierarchies for the Lattice of Figure 2
user's clearance dominates the label of any login session established by the user, is
not explicitly required because it is directly imposed by the RBAC96 construction.
Note that, by virtue of membership in LW, each user can activate any write role.
However, the write role activated in a particular session must match the session's
read role. Thus, both the role hierarchy and constraints of RBAC96 are exploited
in this construction.
LBAC is enforced in terms of read and write operations. In RBAC96 this means
our permissions are read and writes on individual objects written as (o,r) and (o,w)
respectively. An LBAC object has a single sensitivity label associated with it.
This is expressed in RBAC96 by requiring that each pair of permissions (o,r) and
(o,w) be assigned to exactly one matching pair of xR and xW roles respectively.
By assigning permissions (o,r) and (o,w) to roles xR and xW respectively, we are
implicitly setting the sensitivity label of object o to x.
4.2 The General Construction
Based on the above discussion we have the following construction for arbitrary lattices (actually the construction works for partial orders with a lower-most security class). Given SC with security labels {L1, . . . , Ln} and partial order ≥LBAC, an equivalent RBAC96 system is given by:
Construction 1. (Liberal ★-Property)
|RH, which consists of two disjoint role hierarchies. The first role hierarchy consists of the "read" roles {L1R, . . . , LnR} and has the same partial order as ≥LBAC; the second consists of the "write" roles {L1W, . . . , LnW} and has a partial order which is the inverse of ≥LBAC.
|P = {(o,r), (o,w) | o is an object in the system}
|Constraint on UA: Each user is assigned to exactly two roles xR and LW where x is the label assigned to the user and LW is the write role corresponding to the lowermost security level according to ≥LBAC
|Constraint on sessions: Each session has exactly two roles yR and yW
|Constraints on PA:
|(o,r) is assigned to xR iff (o,w) is assigned to xW
|(o,r) is assigned to exactly one role xR such that x is the label of o □
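The construction can be sketched as follows for the lattice of Figure 2; the function names, the set-based encoding of the hierarchy, and the session check are ours, and the PA constraints are not shown.

def construction1_role_hierarchy(dominates_pairs):
    # dominates_pairs: set of (x, y) with x >= y in the lattice (reflexive).
    # Returns RH as a set of (senior, junior) role pairs.
    read_rh = {(x + "R", y + "R") for (x, y) in dominates_pairs}   # same order as the lattice
    write_rh = {(y + "W", x + "W") for (x, y) in dominates_pairs}  # inverse order
    return read_rh | write_rh

def ua_for_user(clearance, lowest_label="L"):
    # Constraint on UA: each user is assigned exactly the roles xR and LW.
    return {clearance + "R", lowest_label + "W"}

def valid_session_roles(active_roles):
    # Constraint on sessions: exactly two matching roles yR and yW.
    return (len(active_roles) == 2 and
            any(r.endswith("R") for r in active_roles) and
            any(r.endswith("W") for r in active_roles) and
            len({r[:-1] for r in active_roles}) == 1)

DOM = {("H", "H"), ("H", "M1"), ("H", "M2"), ("H", "L"),
       ("M1", "M1"), ("M1", "L"), ("M2", "M2"), ("M2", "L"), ("L", "L")}
RH = construction1_role_hierarchy(DOM)
assert ("HR", "LR") in RH and ("LW", "HW") in RH
assert valid_session_roles({"M1R", "M1W"})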
Theorem 1. An RBAC96 system defined by Construction 1 satisfies the Simple Security Property and the Liberal ★-Property.
Proof: (a) Simple Security Property: Subjects in the LBAC terminology correspond to RBAC96 sessions. For subject s to read o, (o,r) must be in the permissions assigned to a role, either directly or indirectly, which is among the roles available to session s, which corresponds to exactly one user u. For u to be involved in this session, this role must be in the UA for u (either directly or indirectly). Let λ(u) = z and λ(s) = y. By the constraints on PA given in Construction 1, (o,r) is assigned directly to exactly one role xR, where x = λ(o), and by the construction of RH, is inherited by roles yR such that y ≥LBAC x. For s to be able to read o, it must have one of these yR in its session. By the definition of roles in a session from Definition 1, any role junior to zR can be in a session for u, i.e., z ≥LBAC y. In other words, a session for u can involve one reading role yR such that z ≥LBAC y. Therefore, the RBAC96 system defined above allows subject s to read object o only if y ≥LBAC x, that is λ(s) ≥ λ(o), which is precisely the Simple Security Property.
(b) Liberal ★-Property: Each user, u, is assigned by UA to xR, where x is the clearance of the user. According to LBAC, the user can read data classified at level x or at levels dominated by x. It also means that the user can start a session at a level dominated by x. So, if a user cleared to say level x, wishes to run a session at level y, such that x ≥LBAC y, the constraints in Construction 1 allow
the session to have the two active roles yR and yW. Because every user is assigned
to LW, it is possible for every user to have a session with yW as one of its roles.
The structure of the two role hierarchies means that if the yW role is available to
a user in a session, the user can write objects for which the permission (o,w) is in
yW. By construction of the role hierarchy, the session can write to level y or levels
dominated by y. In LBAC terms, the subject, s, corresponds to the session, and
within a session a write can be performed if (o,w) is in the permissions of a role,
which by the construction is only if λ(o) ≥LBAC λ(s). This is precisely the Liberal ★-Property. □
4.3 LBAC Variations
Variations in LBAC can be accommodated by modifying this basic construction in
different ways. In particular, the strict ★-property retains the hierarchy on read
roles but treats write roles as incomparable to each other as shown in Figure 3(b)
for the example of our basic lattice.
Construction 2. (Strict ★-Property) Identical to construction 1 except RH has a partial order among the read roles identical to the LBAC partial order, and no relationships among the write roles. □
Theorem 2. An RBAC96 system defined by Construction 2 satisfies the Simple Security Property and the Strict ★-Property.
The proof of this and subsequent similar results is omitted.
Next we consider a version of LBAC in which subjects are given more power than allowed by the simple security and ★-properties [Bell 1987]. The basic idea is to allow subjects to violate the ★-property in a controlled manner. This is achieved by associating a pair of security labels λr and λw with each subject (objects still have a single security label). The simple security property is applied with respect to λr and the liberal ★-property with respect to λw. In the LBAC model of [Bell 1987] it is required that λr should dominate λw. With this constraint the subject can read and write in the range of labels between λr and λw which is called the trusted range. If λr and λw are equal the model reduces to the usual LBAC model with the trusted range being a single label.
The preceding discussion is remarkably close to our RBAC constructions. The two labels λr and λw correspond directly to the two roles xR and yW we have introduced earlier. The dominance required between λr and λw is trivially recast as a dominance constraint between x and y. This leads to the following construction:
Construction 3. (Liberal ★-Property with Trusted Range) Identical to construction 1 except:
|Constraint on UA: Each user is assigned to exactly two roles xR and yW such that x ≥ y in the original lattice
|Constraint on sessions: Each session has exactly two roles xR and yW such that x ≥ y in the original lattice □
Lee [1988] and Schockley [1988] have argued that the Clark-Wilson integrity
model [Clark and Wilson 1987] can be supported using LBAC. Their models are
similar to the above except that no dominance relation is required between x and y.
Thus the write range may be completely disjoint with the read range of a subject.
This is easily expressed in RBAC96 as follows.
Construction 4. (Liberal ★-Property with Independent Write Range) Identical to construction 3 except x ≥ y is not required in the constraint on UA and the constraint on sessions. □
A variation of the above is to use the strict ★-property as follows.
Construction 5. (Strict ★-Property with Designated Write) Identical to construction 2 except:
|Constraint on UA: Each user is assigned to exactly two roles xR and yW
|Constraint on sessions: Each session has exactly two roles xR and yW □
Construction 5 can also be directly obtained from construction 4 by requiring the
strict ★-property instead of the liberal ★-property. Construction 5 can accommodate
Clark-Wilson transformation procedures as outlined by Lee [1988] and Schockley
[1988]. (Lee and Schockley actually use the liberal ★-property in their constructions, but their lattices are such that the constructions are more directly expressed in terms of the strict ★-property.)
5. EXTENDING THE POSSIBLE RBAC CONFIGURATIONS
In the previous section, we looked at specific mappings of different kinds of LBAC to
an RBAC system with the same properties. In this section we examine whether or
not more arbitrary RBAC systems which do not necessarily follow the constructions
in Section 4 still satisfy LBAC properties. In order to do this, we assume that all
users and objects have security labels, and that permissions involve only reads and
writes.
In the previous discussion, all constructions created role hierarchies with disjoint read and write roles. This is not strictly necessary; the role hierarchy in Figure 4 could be the construction for the strict ★-property with the following modifications:
|Constraint on UA: Each user is assigned to all roles xRW such that the clearance of the user dominates the security label x
|Constraint on sessions: Each session has exactly one role: yRW
|Constraints on PA:
|(o,r) is assigned to xR iff (o,w) is assigned to xRW
|(o,r) is assigned to exactly one role xR □
(Read roles HR, M1R, M2R, LR arranged as in the lattice of Figure 2, with a combined role xRW senior to each xR.)
Fig. 4. Alternate role hierarchy for Strict ?-property
Nevertheless, the structure of role hierarchies which do map to valid LBAC configurations is greatly restricted, as the examples in Osborn [1997] show. For example,
a role with permissions to both read and write a high data object and a low data
object cannot be assigned to a high user as this would allow write down, and cannot
be assigned to a low user, as this would allow read up. If a role had only read
permissions for some objects classified at M1, and other objects classified at M2 (cf
Figure
2), a subject cleared at H could be assigned to this role.
As far as the read operation is concerned, a subject can have a role r in its session if the label of the subject dominates the level of all o such that (o,r) is in the role. Since the least upper bound is defined for the security lattice, this can always be determined. Similarly, for write operations, if a greatest lower bound is defined for the security levels, then the Liberal ★-property is satisfied in a session if the security level of the subject is dominated by the greatest lower bound of all o such that (o,w) is in the role. If such a greatest lower bound does not exist, such a role should not be in any user's UA. (If it could be determined that λ(s) ≤ λ(o) for all o such that (o,w) is in the role, then this λ(s) would be a lower bound, and then a greatest lower bound would exist.)
We introduce the following two definitions to capture the maximum read level of
objects in a role, and the minimum write level if one exists.
Definition 6. The r-level of a role r (denoted r-level(r)) is the least upper
bound (lub) of the security levels of the objects for which (o,r) is in the permissions
of r.
Definition 7. The w-level of a role r (denoted w-level(r)) is the greatest lower
bound (glb) of the security levels of the objects o for which (o,w) is in the permissions
of r, if such a glb exists. If the glb does not exist, the w-level is undefined.
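To make these definitions concrete, here is an illustrative computation of r-level and w-level over the lattice of Figure 2; the lattice encoding and the helper names (dominates, lub, glb) are ours.

from functools import reduce

LABELS = {"L", "M1", "M2", "H"}
DOM = {("H", "H"), ("H", "M1"), ("H", "M2"), ("H", "L"),
       ("M1", "M1"), ("M1", "L"), ("M2", "M2"), ("M2", "L"), ("L", "L")}

def dominates(x, y):
    return (x, y) in DOM

def lub(a, b):
    # Least upper bound: the upper bound dominated by every other upper bound.
    uppers = [z for z in LABELS if dominates(z, a) and dominates(z, b)]
    return next(z for z in uppers if all(dominates(w, z) for w in uppers))

def glb(a, b):
    # Greatest lower bound: the lower bound that dominates every other lower bound.
    lowers = [z for z in LABELS if dominates(a, z) and dominates(b, z)]
    return next(z for z in lowers if all(dominates(z, w) for w in lowers))

def r_level(role_permissions, label_of):
    # lub of the labels of objects o with (o, "r") among the role's permissions.
    labels = [label_of[o] for (o, op) in role_permissions if op == "r"]
    return reduce(lub, labels) if labels else None

def w_level(role_permissions, label_of):
    # glb of the labels of objects o with (o, "w") among the role's permissions.
    labels = [label_of[o] for (o, op) in role_permissions if op == "w"]
    return reduce(glb, labels) if labels else None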
The following theorem follows from these definitions.
Theorem 3. An RBAC96 configuration satisfies the simple security property and the Liberal ★-Property if all of the following hold:
|Constraint on Users: (∀u ∈ U)[λ(u) is given]
|Constraints on Permissions: P = {(o,r), (o,w) | o is an object in the system}
|Constraint on UA: (∀(u, r) ∈ UA)[w-level(r) is defined and λ(u) ≥ r-level(r)]
|Constraint on Sessions: (∀s ∈ S)(∀r ∈ roles(s))[λ(s) ≥ r-level(r) and w-level(r) ≥ λ(s)]
An example showing a possible role hierarchy is given in Figure 5, where the
underlying security lattice contains labels {unclassified, secret, top secret} and roles are indicated by, for example, (ru,rs) meaning the permissions in the role include read of some unclassified and some secret object(s) (each role may have permissions inherited because of the role hierarchy). The roles labeled ru1 and ru3 at the bottom have read access to distinct objects labeled unclassified; ru2 inherits the permissions of ru1 and has additional read access to objects at the unclassified level. The role labeled (ru,ws) contains permission to read some unclassified objects and write some secret objects. This role could be assigned in UA to either unclassified users or to secret users. Notice the role at the top of the role hierarchy, labeled
(ru,rs,rts,ws,wts). This role cannot be assigned to any user without violating either
the Simple Security Property or the Liberal ★-Property. Note that if this role is deleted from the role hierarchy, we have an example of a role hierarchy which satisfies the Simple Security Property and the Liberal ★-Property, and which does
not conform to any of the constructions of Section 4.
(The figure annotates which roles are in the UA for unclassified users, which are in the UA for secret users, and which are in the UA for top-secret users; the topmost role, labeled (ru,rs,rts,ws,wts), is marked as not valid in any user assignment.)
Fig. 5. A Role Hierarchy and its User Assignments
An RBAC96 configuration satisfies the strict ★-property if all of the above conditions hold, changing the Constraint on Sessions to:
|Constraint on Sessions: (∀s ∈ S)(∀r ∈ roles(s))[λ(s) ≥ r-level(r) and λ(o) = λ(s) for every object o such that (o,w) is a permission of r]
6. DAC MODELS
In this section we discuss the DAC policies that will be considered in this paper.
The central idea of DAC is that the owner of an object, who is usually its creator,
has discretionary authority over who else can access that object. In other words the
core DAC policy is owner-based administration of access rights. There are many
variations of DAC policy, particularly concerning how the owner's discretionary
power can be delegated to other users and how access is revoked. This has been
recognized since the earliest formulations of DAC [Lampson 1971; Graham and
Denning 1972].
Our approach here is to identify major variations of DAC and demonstrate their
construction in RBAC96. The constructions are such that it will be obvious how
they can be extended to handle other related DAC variations. This is an intuitive,
but well-founded, justification for the claim that DAC can be simulated in RBAC. 2
The DAC policies we consider all share the following characteristics.
|The creator of an object becomes its owner.
|There is only one owner of an object. In some cases ownership remains fixed
with the original creator, whereas in other cases it can be transferred to another
user. (This assumption is not critical to our constructions. It will be obvious
how multiple owners could be handled.)
|Destruction of an object can only be done by its owner.
With this in mind we now dene the following variations of DAC with respect to
granting of access.
(1) Strict DAC requires that the owner is the only one who has discretionary
authority to grant access to an object and that ownership cannot be transferred.
For example, suppose Alice has created an object (Alice is owner of the object)
and grants read access to Bob. Strict DAC requires that Bob cannot propagate
access to the object to another user. (Of course, Bob can copy the contents
of Alice's object into an object that he owns, and then propagate access to
the copy. This is why DAC is unable to enforce information flow controls,
particularly with respect to Trojan Horses.)
(2) Liberal DAC allows the owner to delegate discretionary authority for granting
access to an object to other users. We dene the following variations of liberal
DAC.
(a) One Level Grant: The owner can delegate grant authority to other users
but they cannot further delegate this power. So Alice being the owner of
object O can grant access to Bob who can grant access to Charles. But
Bob cannot grant Charles the power to further grant access to Dorothy.
(b) Two Level Grant: In addition to a one-level grant the owner can allow
some users to further delegate grant authority to other users. Thus, Alice
can now authorize Bob for two-level grants, so Bob can grant access to
Charles, with the power to further grant access to Dorothy. However, Bob
cannot grant the two-level grant authority to Charles. (We could consider
n-level grant but it will be obvious how to do this from the two level
construction.)
(c) Multilevel Grant: In this case the power to delegate the power to grant
implies that this authority can itself be delegated. Thus Alice can authorize
Bob, who can further authorize Charles, who can further authorize
Dorothy, and so on indenitely.
(3) DAC with Change of Ownership: This variation allows a user to transfer
ownership of an object to another user. It can be combined with strict or liberal
DAC in all the above variations.
2 A formal proof would require a formal definition of DAC encompassing all its variations, and a
construction to handle all of these in RBAC96. This approach is pursued in [Munawer 2000].
For revocation we consider two cases as follows.
(1) Grant-Independent Revocation: Revocation is independent of the granter.
Thus Bob may be granted access by Alice but have it revoked by Charles.
(2) Grant-Dependent Revocation: Revocation is strongly tied to the granter.
Thus if Bob receives access from Alice, access can only be revoked by Alice.
In our constructions we will initially assume grant-independent revocation and then
consider how to simulate grant-dependent revocation. In general, we will also assume
that anyone with authority to grant also has authority to revoke. This coupling
often occurs in practice. Where appropriate, we can decouple these in our
simulations because, as we will see, they are represented by different permissions.
These DAC policies certainly do not exhaust all possibilities. Rather these are
representative policies whose simulation will indicate how other variations can also
be handled.
7. CONFIGURING RBAC FOR DAC
To specify the above variations in RBAC96 it suffices to consider DAC with one
operation, which we choose to be the read operation. Similar constructions for
other operations such as write, execute and append, are easily possible. 3 Before considering specific DAC variations, we first describe common aspects of our constructions.
7.1 Common Aspects
The basic idea in our constructions is to simulate the owner-centric policies of DAC
using roles that are associated with each object.
7.1.1 Create an Object. For every object O that is created in the system we
require the simultaneous creation of three administrative roles and one regular role
as follows.
|Three administrative roles in AR: OWN O, PARENT O and PARENTwith-
GRANT O
|One regular role in R: READ O
Role OWN O has privileges to add and remove users from the role PARENTwithGRANT O, which in turn has privileges to add and remove users from the role PARENT O. The relationship between these roles is shown in Figure 6. In
Figure 6, administrative roles are shown with darker circles than regular roles. In
Figure
6(a), the dashed right arrows indicate that the role on the left contains the
administrative permissions governing the role on the right. Figure 6(b) shows the
administrative role hierarchy, with the senior role above its immediate junior, connected
by an edge. For instance role OWN O has administrative authority over role PARENTwithGRANT O as indicated in Figure 6(a). In addition, due to the
3 More complex operations such as copy can be viewed as a read of the original object and a write
(and possibly creation) of the copy. It can be useful to associate some default permissions with
the copy. For example, the copy may start with access related to that of the original object or
it may start with some other default. Specic policies here could be simulated by extending our
constructions.
inheritance via the role hierarchy of Figure 6(a), OWN O also has administrative
authority over PARENT O and READ O.
(a) OWN O administers PARENTwithGRANT O, PARENTwithGRANT O administers PARENT O, and PARENT O administers READ O. (b) Administrative role hierarchy: OWN O is senior to PARENTwithGRANT O, which is senior to PARENT O.
Fig. 6. (a)Administration of roles associated with an object (b) Administrative role hierarchy
In addition we require simultaneous creation of the following eight permissions
along with creation of each object O.
|canRead O: authorizes the read operation on object O. It is assigned to the role
READ O.
|destroyObject O: authorizes deletion of the object. It is assigned to the role
OWN O.
|addReadUser O, deleteReadUser O: respectively authorize the operations to add
users to the role READ O and remove them from this role. They are assigned to
the role PARENT O.
|addParent O, deleteParent O: respectively authorize the operations to add users
to the role PARENT O and remove them from this role. They are assigned to
the role PARENTwithGRANT O.
|addParentWithGrant O, deleteParentWithGrant O: respectively authorize the operations to add users to the role PARENTwithGRANT O and remove them from this role.
They are assigned to the role OWN O.
These permissions are assigned to the indicated roles when the object is created
and thereafter they cannot be removed from these roles or assigned to other roles.
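As an illustration of this object-creation step, the following sketch creates the four roles, the eight permissions and the administrative hierarchy when an object O is created; the dictionary-based state and all names are ours.

def create_object(state, O, owner):
    # state holds role memberships, role permissions and the administrative hierarchy.
    for r in (f"OWN_{O}", f"PARENTwithGRANT_{O}", f"PARENT_{O}", f"READ_{O}"):
        state["members"][r] = set()
        state["perms"][r] = set()

    state["perms"][f"READ_{O}"].add(f"canRead_{O}")                 # regular permission
    state["perms"][f"OWN_{O}"] |= {f"destroyObject_{O}",
                                   f"addParentWithGrant_{O}",
                                   f"deleteParentWithGrant_{O}"}    # owner-only permissions
    state["perms"][f"PARENT_{O}"] |= {f"addReadUser_{O}", f"deleteReadUser_{O}"}
    state["perms"][f"PARENTwithGRANT_{O}"] |= {f"addParent_{O}", f"deleteParent_{O}"}

    # Administrative role hierarchy: OWN >= PARENTwithGRANT >= PARENT.
    state["arh"] |= {(f"OWN_{O}", f"PARENTwithGRANT_{O}"),
                     (f"PARENTwithGRANT_{O}", f"PARENT_{O}")}

    state["members"][f"OWN_{O}"].add(owner)   # the creator becomes the single owner
    return state

state = create_object({"members": {}, "perms": {}, "arh": set()}, "O1", "Alice")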
7.1.2 Destroy an Object. Destroying an object O requires deletion of the four
roles namely OWN O, PARENT O, PARENTwithGRANT O and READ O and
the eight permissions (in addition to destroying the object itself). This can be done
only by the owner, by virtue of exercising the destroyObject O permission.
7.2 Strict DAC
In strict DAC only the owner can grant/revoke read access to/from other users.
The creator is the owner of the object. By virtue of membership (via seniority) in
PARENT O and PARENTwithGRANT O, the owner can change assignments of
the role READ O. Membership of the three administrative roles cannot change, so
only the owner will have this power. This policy can be enforced by imposing a
cardinality constraint of 1 on OWN O and of 0 on PARENT O and PARENTwith-
GRANT O.
This policy could be simulated using just two roles OWN O and READ O, and
giving the addReadUser O and deleteReadUser O permissions directly to OWN O
at creation of O. For consistency with subsequent variations we have introduced all
required roles from the start.
7.3 Liberal DAC
The three variations of liberal DAC described in Section 6 are now considered in
turn.
7.3.1 One-Level Grant. The one-level grant DAC policy can be simulated by
removing the cardinality constraint of strict DAC on membership in PARENT O.
The owner can assign users to the PARENT O role who in turn can assign users to
the READ O role. But the cardinality constraint of 0 on PARENTwithGRANT O
remains.
7.3.2 Two-Level Grant. In the two level grant DAC policy the cardinality constraint
on PARENTwithGRANT O is also removed. Now the owner can assign
users to PARENTwithGRANT O who can further assign users to PARENT O.
Note that members of PARENTwithGRANT O can also assign users directly to
READ O, so they have discretion in this regard. Similarly the owner can assign
users to PARENTwithGRANT O, PARENT O or READ O as deemed appropri-
ate. (N-level grants can be similarly simulated by having N roles, PARENTwithGRANT O N-1, PARENTwithGRANT O N-2, . . . , PARENTwithGRANT O, PARENT O.)
7.3.3 Multilevel Grant. To grant access beyond two levels we authorize the role
PARENTwithGRANT O to assign users to PARENTwithGRANT O. We achieve
this by assigning the addParentWithGrant O permission to the role PARENTwith-
GRANT O when object O is created. As per our general policy of coupling grant
and revoke authority, we also assign the deleteParentWithGrant O permission to
the role PARENTwithGRANT O when O is created. This coupling policy is arguably
unreasonable in the context of grant-independent revoke, so the deletePar-
entWithGrant O permission could be retained only with the OWN O role if so
desired. For grant-dependent revoke the coupling is more reasonable.
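The following sketch illustrates how the strict, one-level and two-level variations differ only in the cardinality constraints placed on PARENT_O and PARENTwithGRANT_O; the policy table and the can_assign check are ours, and the multilevel variation (which additionally gives addParentWithGrant_O to PARENTwithGRANT_O) is not shown.

CARDINALITY = {
    "strict":    {"PARENT_O": 0, "PARENTwithGRANT_O": 0},
    "one_level": {"PARENT_O": None, "PARENTwithGRANT_O": 0},   # None means unbounded
    "two_level": {"PARENT_O": None, "PARENTwithGRANT_O": None},
}

def can_assign(policy, members, granter, target_role):
    # May `granter` add a user to `target_role` for object O under `policy`?
    limit = CARDINALITY[policy].get(target_role)
    if limit is not None and len(members[target_role]) >= limit:
        return False                        # role frozen by its cardinality constraint
    if target_role == "READ_O":             # addReadUser_O is held by PARENT_O and its seniors
        return granter in members["PARENT_O"] | members["PARENTwithGRANT_O"] | members["OWN_O"]
    if target_role == "PARENT_O":           # addParent_O is held by PARENTwithGRANT_O and OWN_O
        return granter in members["PARENTwithGRANT_O"] | members["OWN_O"]
    if target_role == "PARENTwithGRANT_O":  # addParentWithGrant_O is held by OWN_O only
        return granter in members["OWN_O"]
    return False

members = {"OWN_O": {"Alice"}, "PARENTwithGRANT_O": set(), "PARENT_O": {"Bob"}, "READ_O": set()}
assert can_assign("one_level", members, "Bob", "READ_O")
assert not can_assign("one_level", members, "Alice", "PARENTwithGRANT_O")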
7.4 DAC with Change of Ownership
Change of ownership can be easily accomplished by suitable redefinition of the administrative
authority of a member of OWN O. Recall that change of ownership
in this context means transfer of ownership from one user to another. Thus the
OWN O role needs a permission that enables this transfer to occur and this per-
mission can only be assigned to this role. A member of OWN O can assign another
user to OWN O but at the cost of losing their own membership.
7.5 Multiple Ownership
Multiple ownership can also be accommodated by removing the cardinality constraint
on membership in the OWN O role. Since all members of OWN O have
identical power, including the ability to revoke other owners, it would be appropriate
with grant-independent revoke to distinguish the original owner. Alternately,
we can have grant-dependent revoke of ownership.
7.6 Grant-Dependent Revoke
So far we have considered grant-independent revocation where revocation is independent
of the granter. Now finally we consider how to simulate grant-dependent
revoke in RBAC96. In this case only the user who has granted access to another
user can revoke the access (with possible exception of the owner who is allowed to
revoke everything).
(Each administrative role Ui PARENT O manages the corresponding regular role Ui READ O, for users U1, U2, ..., Un.)
Fig. 7. Read O Roles associated with members of PARENT O
Specifically, let us consider the one-level grant DAC policy simulated earlier by allowing members of the PARENT O role to assign users to the READ O role. To simulate grant-dependent revocation with this one-level grant policy we need a different administrative role U PARENT O and a different regular role U READ O
for each user U authorized to do a one-level grant by the owner. These roles are
automatically created when the owner authorizes user U. We also need two new
administrative permissions created at the same time as follows.
|addU ReadUser O, deleteU ReadUser O: respectively authorize the operations
to add users to the role U READ O and remove them from this role. They are
assigned to the role U PARENT O.
Ui PARENT O manages the membership assignments of Ui READ O role as indicated
in Figure 7 for user Ui. U PARENT O has a membership cardinality constraint
of one. Moreover, its membership cannot be changed. Thus user U will be
the only one granting and revoking users from U READ O. The U READ O role
itself is assigned the permission canRead O at the moment of creation. As before, all of this is enforced by RBAC96 constraints. We can allow the owner to revoke
users from the U READ O role by making U PARENT O junior to OWN O in
the administrative role hierarchy. Grant-dependent revocation can be similarly simulated with respect to the PARENT O and PARENTwithGRANT O
roles. Extension to multiple ownership is also possible.
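An illustrative sketch of the per-granter roles used for grant-dependent revocation follows; the state encoding and all names are ours, and the owner's ability to revoke is only indicated by the administrative hierarchy edge.

def authorize_granter(state, O, user):
    # The owner authorizes `user` for one-level grants: create the per-user role pair.
    parent, read = f"{user}_PARENT_{O}", f"{user}_READ_{O}"
    state["members"][parent] = {user}      # cardinality one; membership never changes
    state["members"][read] = set()
    state["perms"][parent] = {f"add{user}ReadUser_{O}", f"delete{user}ReadUser_{O}"}
    state["perms"][read] = {f"canRead_{O}"}
    state["arh"].add((f"OWN_{O}", parent)) # lets the owner also revoke from the read role
    return state

def grant_read(state, O, granter, grantee):
    state["members"][f"{granter}_READ_{O}"].add(grantee)

def revoke_read(state, O, granter, grantee):
    # Only the role pair of the original granter (or the owner, via the hierarchy)
    # controls membership of that granter's read role.
    state["members"][f"{granter}_READ_{O}"].discard(grantee)

state = {"members": {}, "perms": {}, "arh": set()}
authorize_granter(state, "O1", "Bob")
grant_read(state, "O1", "Bob", "Charles")
revoke_read(state, "O1", "Bob", "Charles")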
8. CONCLUSIONS
We have shown that the common forms of LBAC and DAC models can be simulated
and enforced in RBAC96 with systematic constructions. All of the components of
the RBAC96 model shown in Figure 1 were required to carry out these simulations.
Users and permissions are essential to express any access control model. The Role
Hierarchy is important in the LBAC simulation. The Administrative Role Hierarchy
is essential in the enforcement of DAC policies, as is the administrative user to role
assignment relation. We observe however that the permissions that have been
granted to users in a DAC system can give an arbitrarily rich role hierarchy, as was
noted in a conversion of relational database permissions to role graphs by Osborn,
Reid, and Wesson [1996]. Constraints play a role in all of the constructions. It is
important to note that the LBAC simulation assumes a single administrative role,
whereas the DAC simulation requires a large number of administrative roles, which
are dynamically created and destroyed.
(Within the space of RBAC Models, the region using one administrative role contains (a) the LBAC constructions of Section 4 nested inside (b) the other LBAC configurations of Section 5; the region using complex administrative roles contains (c) the DAC configurations of Section 7.)
Fig. 8. Containment of Models
We can represent some of our findings using the Venn diagram in Figure 8. The
area on the left of the figure indicates that in this subset of RBAC96 configurations, there is no need for administrative roles except for an assumed single administrative role. On the right, the administrative part of the RBAC96 model is fully utilized. Part (a) represents the subset of possible RBAC96 configurations which are built by constructions 1 and 2. Area (b) shows that there are other configurations not built by these two constructions which still satisfy LBAC properties. Part (c) of the diagram represents in general the RBAC96 configurations built by the various
constructions in Section 7. Note that these latter constructions all fall in the region
where the administrative roles of the RBAC96 model are being fully utilized.
Future work should now focus on what happens in the rest of the RBAC96
Models not included in the areas constructed in this paper. Models for decentralized
role administration which fall in between these extremes have been proposed by
Sandhu, Bhamidipati, and Munawer [1999]. These models allow for large numbers
of administrative roles but this number is expected to be much smaller than the
number of objects in the system.
In conclusion, then, we have shown with various systematic constructions how to
simulate and enforce traditional LBAC and DAC access control models in RBAC96.
--R
Secure computer systems: A network interpretation.
A comparison of commercial and military computer security policies.
A lattice model of secure information flow.
Using mandatory integrity to enforce
Administrative Models for Role-Based Access Control
Access rights administration in role-based security systems
Modeling mandatory access control in role-based security systems
The role graph model and conflict of interest.
Mandatory access control and role-based access control revisited
On the interaction between role based access control and relational databases.
Role hierarchies and constraints for lattice-based access controls
The ARBAC97 model for role-based administration of roles
How to do discretionary access control using roles.
Access control: Principles and practice.
Implementing the Clark/Wilson integrity policy using current tech- nology
--TR
Role-Based Access Control Models
Modeling mandatory access control in role-based security systems
Mandatory access control and role-based access control revisited
On the interaction between role-based access control and relational databases
How to do discretionary access control using roles
The role graph model and conflict of interest
The ARBAC97 model for role-based administration of roles
A lattice model of secure information flow
Lattice-Based Access Control Models
Access Rights Administration in Role-Based Security Systems
Role Hierarchies and Constraints for Lattice-Based Access Controls
Administrative models for role-based access control
--CTR
Sylvia L. Osborn, Information flow analysis of an RBAC system, Proceedings of the seventh ACM symposium on Access control models and technologies, June 03-04, 2002, Monterey, California, USA
Cungang Yang , Chang N. Zhang, An approach to secure information flow on Object Oriented Role-based Access Control model, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
James B. D. Joshi , Rafae Bhatti , Elisa Bertino , Arif Ghafoor, Access-Control Language for Multidomain Environments, IEEE Internet Computing, v.8 n.6, p.40-50, November 2004
Rafae Bhatti , Elisa Bertino , Arif Ghafoor , James B. D. Joshi, XML-Based Specification for Web Services Document Security, Computer, v.37 n.4, p.41-49, April 2004
Gail-Joon Ahn , Ravi Sandhu, Role-based authorization constraints specification, ACM Transactions on Information and System Security (TISSEC), v.3 n.4, p.207-226, Nov. 2000
Rafae Bhatti , James Joshi , Elisa Bertino , Arif Ghafoor, X-GTRBAC admin: a decentralized administration model for enterprise wide access control, Proceedings of the ninth ACM symposium on Access control models and technologies, June 02-04, 2004, Yorktown Heights, New York, USA
James B D Joshi , Elisa Bertino , Arif Ghafoor, Temporal hierarchies and inheritance semantics for GTRBAC, Proceedings of the seventh ACM symposium on Access control models and technologies, June 03-04, 2002, Monterey, California, USA
Marian Ventuneac , Tom Coffey , Ioan Salomie, A policy-based security framework for Web-enabled applications, Proceedings of the 1st international symposium on Information and communication technologies, September 24-26, 2003, Dublin, Ireland
Sejong Oh , Ravi Sandhu, A model for role administration using organization structure, Proceedings of the seventh ACM symposium on Access control models and technologies, June 03-04, 2002, Monterey, California, USA
Sylvia Osborn, Integrating role graphs: a tool for security integration, Data & Knowledge Engineering, v.43 n.3, p.317-333, December 2002
Ravi Sandhu , David Ferraiolo , Richard Kuhn, The NIST model for role-based access control: towards a unified standard, Proceedings of the fifth ACM workshop on Role-based access control, p.47-63, July 26-28, 2000, Berlin, Germany
Sejong Oh , Ravi Sandhu , Xinwen Zhang, An effective role administration model using organization structure, ACM Transactions on Information and System Security (TISSEC), v.9 n.2, p.113-137, May 2006
Roberto Di Pietro , Luigi V. Mancini, Security and privacy issues of handheld and wearable wireless devices, Communications of the ACM, v.46 n.9, p.74-79, September
Engineering authority and trust in cyberspace: the OM-AM and RBAC way, Proceedings of the fifth ACM workshop on Role-based access control, p.111-119, July 26-28, 2000, Berlin, Germany
Manuel Koch , Luigi V. Mancini , Francesco Parisi-Presicce, A graph-based formalism for RBAC, ACM Transactions on Information and System Security (TISSEC), v.5 n.3, p.332-365, August 2002
Wolfgang Essmayr , Stefan Probst , Edgar Weippl, Role-Based Access Controls: Status, Dissemination, and Prospects for Generic Security Mechanisms, Electronic Commerce Research, v.4 n.1-2, p.127-156, January-April 2004
David F. Ferraiolo , Serban Gavrila , Vincent Hu , D. Richard Kuhn, Composing and combining policies under the policy machine, Proceedings of the tenth ACM symposium on Access control models and technologies, June 01-03, 2005, Stockholm, Sweden
Jason Crampton, On permissions, inheritance and role hierarchies, Proceedings of the 10th ACM conference on Computer and communications security, October 27-30, 2003, Washington D.C., USA
Yang , Raimund K. Ege , Huiqun Yu, Mediation security specification and enforcement for heterogeneous databases, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
James B. D. Joshi , Elisa Bertino , Usman Latif , Arif Ghafoor, A Generalized Temporal Role-Based Access Control Model, IEEE Transactions on Knowledge and Data Engineering, v.17 n.1, p.4-23, January 2005
Patrick C. K. Hung , Dickson K. W. Chiu , W. W. Fung , William K. Cheung , Raymond Wong , Samuel P. M. Choi , Eleanna Kafeza , James Kwok , Jousha C. C. Pun , Vivying S. Y. Cheng, Towards end-to-end privacy control in the outsourcing of marketing activities: a web service integration solution, Proceedings of the 7th international conference on Electronic commerce, August 15-17, 2005, Xi'an, China
Siqing Du , James B. D. Joshi, Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy, Proceedings of the eleventh ACM symposium on Access control models and technologies, June 07-09, 2006, Lake Tahoe, California, USA
W. T. Tsai , X. Liu , Y. Chen , R. Paul, Simulation Verification and Validation by Dynamic Policy Enforcement, Proceedings of the 38th annual Symposium on Simulation, p.91-98, April 04-06, 2005
James B. D. Joshi , Basit Shafiq , Arif Ghafoor , Elisa Bertino, Dependencies and separation of duty constraints in GTRBAC, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Mahesh V. Tripunitara , Ninghui Li, Comparing the expressive power of access control models, Proceedings of the 11th ACM conference on Computer and communications security, October 25-29, 2004, Washington DC, USA
Jingzhu Wang , Sylvia L. Osborn, A role-based approach to access control for XML databases, Proceedings of the ninth ACM symposium on Access control models and technologies, June 02-04, 2004, Yorktown Heights, New York, USA
James B. D. Joshi , Elisa Bertino, Fine-grained role-based delegation in presence of the hybrid role hierarchy, Proceedings of the eleventh ACM symposium on Access control models and technologies, June 07-09, 2006, Lake Tahoe, California, USA
Jason Crampton , George Loizou, Administrative scope: A foundation for role-based administrative models, ACM Transactions on Information and System Security (TISSEC), v.6 n.2, p.201-231, May
David F. Ferraiolo, An argument for the role-based access control model, Proceedings of the sixth ACM symposium on Access control models and technologies, p.142-143, May 2001, Chantilly, Virginia, United States
Timothy Fraser , David Ferraiolo , Mikel L. Matthews , Casey Schaufler , Stephen Smalley , Robert Watson, Panel: which access control technique will provide the greatest overall benefit, Proceedings of the sixth ACM symposium on Access control models and technologies, p.141-149, May 2001, Chantilly, Virginia, United States
Basit Shafiq , James B. D. Joshi , Elisa Bertino , Arif Ghafoor, Secure Interoperation in a Multidomain Environment Employing RBAC Policies, IEEE Transactions on Knowledge and Data Engineering, v.17 n.11, p.1557-1577, November 2005
Thuong Doan , Steven Demurjian , T. C. Ting , Andreas Ketterl, MAC and UML for secure software design, Proceedings of the 2004 ACM workshop on Formal methods in security engineering, October 29-29, 2004, Washington DC, USA
Shih-Chien Chou , Yuan-Chien Chen, Managing role relationships in an information flow control model, Journal of Systems and Software, v.79 n.4, p.507-522, April 2006
Ravi Sandhu , Kumar Ranganathan , Xinwen Zhang, Secure information sharing enabled by Trusted Computing and PEI models, Proceedings of the 2006 ACM Symposium on Information, computer and communications security, March 21-24, 2006, Taipei, Taiwan
Elisa Bertino , Piero Andrea Bonatti , Elena Ferrari, TRBAC: A temporal role-based access control model, ACM Transactions on Information and System Security (TISSEC), v.4 n.3, p.191-233, August 2001
Gustaf Neumann , Mark Strembeck, Design and implementation of a flexible RBAC-service in an object-oriented scripting language, Proceedings of the 8th ACM conference on Computer and Communications Security, November 05-08, 2001, Philadelphia, PA, USA
Vugranam C. Sreedhar, Data-centric security: role analysis and role typestates, Proceedings of the eleventh ACM symposium on Access control models and technologies, June 07-09, 2006, Lake Tahoe, California, USA
James B. D. Joshi , Elisa Bertino , Arif Ghafoor, An Analysis of Expressiveness and Design Issues for the Generalized Temporal Role-Based Access Control Model, IEEE Transactions on Dependable and Secure Computing, v.2 n.2, p.157-175, April 2005
Yanjun Zuo , Brajendra Panda, Component based trust management in the context of a virtual organization, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Ting Yu , Divesh Srivastava , Laks V. S. Lakshmanan , H. V. Jagadish, A compressed accessibility map for XML, ACM Transactions on Database Systems (TODS), v.29 n.2, p.363-402, June 2004
Shih-Chien Chou, Providing flexible access control to an information flow control model, Journal of Systems and Software, v.73 n.3, p.425-439, November-December 2004
Lawrence A. Gordon , Martin P. Loeb, The economics of information security investment, ACM Transactions on Information and System Security (TISSEC), v.5 n.4, p.438-457, November 2002
Shih-Chien Chou , Chin-Yi Chang, An information flow control model for C applications based on access control lists, Journal of Systems and Software, v.78 n.1, p.84-100, October 2005
Rafae Bhatti , Arif Ghafoor , Elisa Bertino , James B. D. Joshi, X-GTRBAC: an XML-based policy specification framework and architecture for enterprise-wide access control, ACM Transactions on Information and System Security (TISSEC), v.8 n.2, p.187-227, May 2005
Charles E. Phillips, Jr. , T.C. Ting , Steven A. Demurjian, Information sharing and security in dynamic coalitions, Proceedings of the seventh ACM symposium on Access control models and technologies, June 03-04, 2002, Monterey, California, USA
Shih-Chien Chou, Embedding role-based access control model in object-oriented systems to protect privacy, Journal of Systems and Software, v.71 n.1-2, p.143-161, April 2004
David F. Ferraiolo , Ravi Sandhu , Serban Gavrila , D. Richard Kuhn , Ramaswamy Chandramouli, Proposed NIST standard for role-based access control, ACM Transactions on Information and System Security (TISSEC), v.4 n.3, p.224-274, August 2001
Joon S. Park , Ravi Sandhu , Gail-Joon Ahn, Role-based access control on the web, ACM Transactions on Information and System Security (TISSEC), v.4 n.1, p.37-71, Feb. 2001
W. T. Tsai , Yinong Chen , Ray Paul , Xinyu Zhou , Chun Fan, Simulation Verification and Validation by Dynamic Policy Specification and Enforcement, Simulation, v.82 n.5, p.295-310, May 2006
Katherine Campbell , Lawrence A. Gordon , Martin P. Loeb , Lei Zhou, The economic cost of publicly announced information security breaches: empirical evidence from the stock market, Journal of Computer Security, v.11 n.3, p.431-448, 1 March | mandatory access control;lattice-based access control;role-based access control;discretionary access control |
355051 | Automated systematic testing for constraint-based interactive services. | Constraint-based languages can express in a concise way the complex logic of a new generation of interactive services for applications such as banking or stock trading, that must support multiple types of interfaces for accessing the same data. These include automatic speech-recognition interfaces where inputs may be provided in any order by users of the service. We study in this paper how to systematically test event-driven applications developed using such languages. We show how such applications can be tested automatically, without the need for any manually-written test cases, and efficiently, by taking advantage of their capability of taking unordered sets of events as inputs. | INTRODUCTION
Today, it is becoming more and more commonplace for modern
interactive services, such as those for personal banking or stock
trading, to have more than one interface for accessing the same
data. For example, many banks allow customers to access personal
banking services from an automated teller machine, bank-by-
phone interface, or web-based interface. Furthermore, telephone-based
services are starting to support automatic speech recognition
and natural language understanding, adding further flexibility in
interaction with the user. When multiple interfaces are provided
to the same service, duplication can be a serious problem in that
there may be a different service logic (i.e., the code that defines
the essence of the service) for every different user interface to the
service. Moreover, to support natural language interfaces, services
must allow users considerable flexibility in the way they input their
service requests. Thus, it is desirable for the programming idioms
and methods for such services to provide the following capabilities:
allow requests to be phrased in various ways (e.g., needed
information can be provided in any order),
prompting for missing information,
correction of erroneous information,
lookahead (to allow the user to speak several commands at
once), and
backtracking to earlier points in the service logic.
Constraint-based languages (for a foundational overview, see [16,
19]) provide a suitable paradigm to support the required flexibility
in user inputs. Examples of such languages in dialogue management
include methods based on frames [1], forms [9, 15], approaches
based on AND/OR trees [22] and Sisl (Several Interfaces,
Single Logic) [4].
The powerful expressiveness of constraint-based languages allows
the construction of succinct programs with complex reactive
behavior. This motivates the need for reliable yet cost-effective
testing techniques and tools suitable to check the correctness of
business-critical applications developed using such languages. We
study in this paper how to automatically and efficiently check temporal
properties of programs written in constraint-based languages
for interactive services.
We first present a nondeterministic algorithm for systematically
testing the logic of a program in a constraint-based language. The
possible use of arbitrary host language code and potential elaborate
access to external data (such as database lookups) in a full program
makes the use of classical constraint satisfaction and model checking
techniques problematic. Consequently, this algorithm assumes,
as is usually done in testing, that the user specifies a fixed set of possible
data values for each user input. The algorithm then dynamically
detects at run-time the set of input events that an application
is currently ready to accept, and uses that information to drive the
execution of the application by sending it inputs selected nondeter-
ministically. Used in conjunction with the systematic state-space
exploration tool VeriSoft [11], which supports a special system call
(called VS toss) simulating nondeterminism, this nondeterministic
test driver can systematically generate all possible behaviors of any
such program. These behaviors can then be monitored and checked
against user-specified safety properties. This algorithm thus eliminates
the need for manually-written test cases, and is automatically
applicable to any program. This testing technique is made possible
by the structured interface that an event-driven constraint-based
program provides to its environment.
Unfortunately, the expressiveness of constraint-based languages
makes this algorithm quite inefficient. For instance, a service that
awaits 10 inputs from the user (in no particular order) has 10! different
paths to collect the inputs. However, the flip side of this
ability to accept unordered sets of inputs that can be provided in
any order, is the inability of constraint languages to (observation-
ally) distinguish permutations of the unordered sets of inputs. Thus,
generating all these sequences is not always necessary to check the
temporal properties that we are considering. Our second algorithm
makes systematic testing significantly more efficient by exploiting
this observation and taking advantage of a form of symmetry induced
by constraints and their ability to specify sets of input events
rather than single events.
While the basic problems and ideas that we discuss in this work
are common to any deterministic constraint-based language in our
domain of interest, our concrete presentation is in the context of
the Sisl language mentioned earlier. Sisl includes a deterministic
constraint-based domain-specific language for developing event-driven
reactive services. It is implemented as a library in Java.
Sisl supports the development of services that are shared by multiple
user interfaces including automatic speech recognition based or
text-based natural language understanding, telephone voice access,
web, and graphics based interfaces. Sisl is currently being used to
prototype a new generation of call processing services for a Lucent
Technologies switching product.
This paper is organized as follows. In the next section, we describe
the Sisl language and formalize its semantics. Then, we
present algorithms for automatically and efficiently checking safety
properties of Sisl programs. These algorithms have been implemented
and implementation issues are discussed in Section 4. A
simple example of Sisl application is presented in Section 5. Results
of experiments with this example are then discussed. Finally,
we present concluding remarks and compare our results with other
related work.
2. SISL: SEVERAL INTERFACES, SINGLE LOGIC
2.1 Overview of Sisl
Sisl applications consist of a single service logic, together with
multiple user interfaces. The service logic and the interfaces communicate
via events: the user interfaces collect information from
the user and send it to the service logic in the form of events. The
service logic reacts to the events, and reports to the user interfaces
its set of enabled events; i.e. the set of events it is ready to accept
in its current state. The user interfaces utilize this information
to appropriately prompt the user. Any user interface that performs
these functions can be used in conjunction with a Sisl service logic.
For example, in the context of an interactive natural language service
that browses and displays organizational data from a corporate
database, Sisl may inform the natural-language interface that
it can accept an organization e.g., if the user has said "Show me
the size of the organization." The interface may then prompt the
user, e.g. by saying "What organization do you want to see?" If
the user responds with something that the natural-language interface
recognizes as an organization, the interface will then generate
a Sisl event indicating the organization was detected. This event
will then be processed by the Sisl service logic.
In Sisl, the service logic of an application is specified as a reactive
constraint graph, which is a directed graph with an enriched
structure on the nodes. The traversal of reactive constraint graphs
is driven by the reception of events from the environment: these
events have an associated label (the event name) and may carry
associated data. In response, the graph traverses its nodes and executes
actions; the reaction of the graph ends when it needs to wait
for the next event to be sent by the environment. Predicates on
events play an important role in reactive constraint graphs. In par-
ticular, nodes may contain an ordered set of predicates, which indicate
a conjunction of information to be received from the user.
Upon receipt of the appropriate events, predicates in the current
node are evaluated in order. Satisfaction of all predicates at a node
triggers a node change in the graph, as may violation of any predicate.
Reactive constraint graphs are implemented as a Java library. An
important aspect of reactive constraint graphs is that nodes may
have associated actions, consisting of arbitrary Java code, which
is executed upon entry into/exit from the node. These actions can
hence have side effects on local and global variables of the Java
program, or on external databases. We note that individual predicates
do not have actions associated with them.
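As a rough illustration of this structure (the paper does not show the actual Sisl library API, so the names below are hypothetical), a node with side-effecting actions might look as follows in Java:

// Hypothetical sketch of a reactive-constraint-graph node with entry/exit
// actions; not the real Sisl interface.
abstract class GraphNode {
    // Arbitrary Java code run when control enters or leaves the node; it may
    // update local or global variables, or access an external database.
    void onEntry() {}
    void onExit()  {}
}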
2.2 The Sisl process algebra
In this section, we describe a process algebra to succinctly describe
Sisl programs.
Events. We begin with a description of the events used in the process algebra.
A set of (input) labels I.
A set of values V, that are carried on labels from I.
A label with an associated value is called an event, and we write e to range over [I x V]. A signature is a subset of I. We use Σ for a signature, and |Σ| for its size. A predicate π over a signature Σ is a boolean-valued function from V^|Σ| to Bool; that is, it maps every set {(a, v) | a ∈ Σ} to {true, false}. The signature of a predicate π is sometimes written Σ(π).
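For concreteness, the following Java sketch shows one way these notions could be represented in code; the names (Event, Predicate) are hypothetical illustrations, not the actual Sisl library API.

import java.util.Map;
import java.util.Set;

// An event a(v): a label from I paired with a value from V.
final class Event {
    final String label;
    final Object value;
    Event(String label, Object value) { this.label = label; this.value = value; }
}

// A predicate over a signature: its signature is the subset of I it reads,
// and it is evaluated once a value has been received for every label in it.
interface Predicate {
    Set<String> signature();                         // Sigma(pi)
    boolean evaluate(Map<String, Object> received);  // defined when signature() is contained in received.keySet()
}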
Syntax. Terms of the process algebra have the following abstract grammar (where P, P1, P2, P_i, P_i^viol, and P_next denote processes):
P ::= null
| P1 ; P2 (Sequential composition)
| a_1(v_1) => P_1 + ... + a_n(v_n) => P_n (Choice)
| {π_1 -> P_1^viol, ..., π_n -> P_n^viol} P_next (Constraint)
We refer to P_i^viol as the target node of predicate π_i.
Informal dynamic semantics. The process combinators have the following intuitive meaning. Sequential composition is standard: P1 ; P2 means that P1 is executed first and P2 is executed after P1 terminates.
Choice, a_1(v_1) => P_1 + ... + a_n(v_n) => P_n, functions like prefixing in process algebras: the process waits for an event with label among the a_i's and with data v_i. Such a term corresponds to a "choice" node in the reactive constraint graph, and represents a disjunction of events to be received from the user. For every event in this set, the Sisl service logic automatically sends out a corresponding prompt to the user. The choice node then waits for the user to send an event in the specified set. When such an event arrives, the corresponding transition is taken, and control transfers to the child node. To ensure determinism of the Sisl program, all events (a_i, v_i) must be distinct.
{π_1 -> P_1^viol, ..., π_n -> P_n^viol} P_next, the constraint combinator, has the most interesting dynamic behavior. Such a term corresponds to a "constraint" node in the reactive constraint graph, and has an associated ordered set of predicates π_i on events. Intuitively, a constraint node is awaiting all the events in ∪_i Σ(π_i).
Thus, this node represents a conjunction of information to be received
from the user. These events can be sent to the constraint
node in any order. When the control reaches the constraint node,
the Sisl service logic automatically sends out a prompt event for every
event that is still needed in order to evaluate some predicate. In
addition, it automatically sends out an optional prompt for all other
events mentioned in the predicates - these correspond to information
that can be corrected by the user. In every round of interaction,
the constraint node waits for the user to send any event that is mentioned
in its associated predicates. Each predicate associated with
a constraint node is evaluated in order as soon as all of its events
have arrived. If an event is resent by the user interfaces (i.e., information
is corrected), predicates with that event in their signature
are re-evaluated in order.
There are two ways to exit a constraint node.
All of its predicates have been evaluated and are satisfied. In this case, control transfers to P_next.
Some predicate π_i with non-null P_i^viol evaluates to false, and all predicates π_j with j < i evaluate to true. That is, predicate π_i is the first predicate in order that is false, and its P_i^viol is non-null. In this case, control transfers to P_i^viol.
As mentioned earlier, node changes may cause side-effecting actions
to be executed; however, evaluation, satisfaction, or violation
of individual predicates do not cause any side effects to occur, other
than the node changes described above.
2.3 Sisl: The state machine semantics
We describe a space of state machines associated with Sisl, and give the semantics of Sisl processes as state machines.
A state machine in the Sisl semantics is a tuple (I, S, s0, F, T), where I is the set of input labels, S is a set of states, s0 is the unique start state, F ⊆ S is the set of final states, and the transition relation T ⊆ S x [I x V] x S is deterministic, i.e., given a state s and any event e, there is at most one s' such that (s, e, s') is in the transition relation. In this case, we write s -e-> s'. Final states do not have any outgoing transitions.
Constructions
We will describe the denotations of Sisl combinators as constructions
on the state machines. In this section we will use P (perhaps
with super/subscripts) to range over the members of the given class
of state machines.
The state machine corresponding to null has a single state that
is both the start state and the final state. It has no transitions.
Sequential composition: This is described in a standard fashion. All arcs into the final states of P1 are redirected to the unique start state of P2 in P1 ; P2. Formally, the sequential composition of state machines P1 and P2 is the state machine over I whose states are those of P1 and P2, whose start state is that of P1, whose final states are those of P2, and whose transitions are those of P2 together with those of P1 after redirecting every arc into a final state of P1 to the start state of P2.
Choice: This is described by taking the disjoint union of the state machines for the P_i and adding a new start state with a transition labeled (a_i, v_i) to the start state of the state machine for P_i. Formally, given state machines P_1, ..., P_n, the resulting state machine P is (I, {s_0} ∪ S_1 ∪ ... ∪ S_n, s_0, F_1 ∪ ... ∪ F_n, T_1 ∪ ... ∪ T_n ∪ {(s_0, (a_i, v_i), s^i_0) | 1 ≤ i ≤ n}), where s_0 is a new state and s^i_0 is the start state of P_i.
Constraint: {π_1 -> P_1^viol, ..., π_n -> P_n^viol} P_next. Let K = ∪_i Σ(π_i). The set S of intermediate states the Sisl program goes through while collecting events in K is determined by partial maps f from K to V. Intuitively, the domain of a partial function f, written dom(f), indicates the labels on which data has been received. With this intuition, it is clear that the information still required to satisfy the requirements of this constraint node is given by the transitions that are enabled at this state, which have labels in {a ∈ K | f(a) is undefined}. Furthermore, the start state is given by the partial map with empty domain, since no information has been received.
We write f[a := v] for the partial map that maps a to v and is identical to f on all other labels.
f is consistent if, for every predicate π_i such that dom(f) includes all of Σ(π_i), π_i evaluates to true on f; f is inconsistent otherwise, i.e., there is some predicate π_i such that dom(f) includes Σ(π_i) and π_i evaluates to false on f.
f is complete, if dom(f) = K, incomplete otherwise.
All four combinations of the above two parameters are possible.
Formally, the set S of states is all partial maps f from K to V such that f is either inconsistent or incomplete. The transition relation T on states in S is defined as follows, where in each case g = f[a := v] denotes the map obtained by receiving event (a, v) in the state with label f:
Let g = f[a := v] be a consistent and complete map. There is a transition labeled (a, v) from the state with label f to P_next. That is, P_next is the target of transitions that make the information contained in a state complete in a consistent fashion.
Let g = f[a := v] be an inconsistent map, with π_i being the first predicate that is falsified in g. If P_i^viol is not null, then there is an arc labeled (a, v) from the state with label f to the start state of P_i^viol. That is, causing a predicate violation with a non-null target node causes control to move to that node.
There is a transition labeled (a, v) from a state with label f to a state with label g = f[a := v] if:
- g is consistent and incomplete, or
- g is inconsistent, and P_i^viol is null, where π_i is the first predicate in order that is falsified in g.
That is, if no predicates are violated, or if a predicate with a null target node is violated, then control remains in the constraint node.
A transition labeled (a, v) adds or changes information on label a at a state. If it changes the information, it is called an overwrite event: (a, v) is an overwrite event if it causes a transition from some state f in which a ∈ dom(f).
The start state is given by the partial map with empty domain. The final states are given by the union of the final states of P_next and of the non-null P_i^viol's.
Formally, given the state machines for P_next and for the non-null P_i^viol's, the resulting state machine consists of the states in S together with the states of those machines, the transition relation T extended with their transitions, the start state given by the partial map with empty domain, and the final states given by the union of their final states.
In the following we write target(π_i) to refer to the target node of predicate π_i.
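The consistency and completeness tests on a partial map f can be spelled out in Java as follows, using the hypothetical Predicate interface sketched earlier; this only restates the definitions and is not the Sisl implementation.

import java.util.List;
import java.util.Map;
import java.util.Set;

final class PartialMaps {
    // f is complete iff dom(f) = K; here dom(f) is the key set of the map.
    static boolean isComplete(Map<String, Object> f, Set<String> K) {
        return f.keySet().equals(K);
    }

    // f is consistent iff every predicate whose signature is contained in
    // dom(f) evaluates to true on f.
    static boolean isConsistent(Map<String, Object> f, List<Predicate> predicates) {
        for (Predicate pi : predicates) {
            if (f.keySet().containsAll(pi.signature()) && !pi.evaluate(f)) {
                return false;
            }
        }
        return true;
    }
}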
3. AUTOMATIC TESTING OF SISL PROGRAMS
In this section, we show how Sisl programs can be automatically
and systematically tested for violations of safety properties using
nondeterministic testing algorithms. Used in conjunction with the
systematic state-space exploration tool VeriSoft [11], which supports
a special system call "VS toss" simulating nondeterminism,
these nondeterministic testing algorithms can systematically drive
the execution of the Sisl application being tested in order to exhibit
all its possible behaviors.
Safety properties can be represented by prefix-closed finite automata
on finite words [3]. We assume such a representation AP ,
and define a safety property L(P ) as the set of finite words accepted
by the finite automaton AP .
Let O be the alphabet of AP . By construction, O is contained
in the set of input events I of the state machine M representing the
Sisl program to be tested. We call input events in O observable
events. Let w|_O denote the projection of a word w ∈ I* over a set O ⊆ I.
DEFINITION 3.1. Let M = (I, S, s0, T) be the state machine defining the semantics of a Sisl program as defined in the previous section, but without the set F of final states (which is therefore omitted). Let s =w=> s' denote an execution of M that goes from state s to state s' after receiving a finite sequence w of events which does not include any overwrite events. Let O ⊆ I denote a set of observable events. We define the set of observable behaviors of M as the language L_O(M) of finite words on O such that L_O(M) = {w|_O | there exists s' ∈ S with s0 =w=> s'}.
In other words, the set of observable behaviors of a Sisl program
with respect to a set of observable events O is defined as the set of
finite sequences of observable events that the Sisl program can take
as inputs excluding overwrite events. For technical convenience
and efficiency reasons, we deliberately ignore overwrite events in
this definition since their occurrence is an artifact of the user-interface
of the system that does not affect transitions from nodes to nodes,
and hence the logic of the Sisl program.
The problem we address in this work is thus how to check automatically
and efficiently that L_O(M) ⊆ L(P).
A naive solution to this problem consists of driving the execution
of the Sisl program by a test driver that nondeterministically
sends any enabled input event and associated valid data value to
M whenever M is ready to take a new input, the execution of
the nondeterministic test driver being itself under the control of
VeriSoft. Checking L_O(M) ⊆ L(P) can then be done by monitoring
all possible executions of M in conjunction with this test
driver, and checking that all its observable behaviors are accepted
by AP . However, this naive approach would generate a state space
typically so large that it would render any analysis intractable: for
instance, any input event that takes a 32-bit integer as argument
would immediately generate a branching point with 2^32 branches.
Clearly, data values are the cause of this unacceptable state explosion.
In the case of constraint nodes for instance, one could think that
an analysis of the predicates associated to such nodes using constraint
satisfaction techniques might be used to generate automatically
data values. However, this approach is problematic in our
context since predicates in Sisl programs are implemented by arbitrary
Java code, can be quite elaborate and may involve accesses
to external data (for instance, the evaluation of a predicate may involve
database lookups to fetch and test provisioning data for the
subscriber of the service). Also, the evaluation of a predicate on
the same set of input data values may change over time (when the
evaluation of a predicate may depend on data previously modified
during the current execution). How to close automatically any open
reactive (Java) program with its most general environment is an
interesting but hard problem [5] that is beyond the scope of the
present work.
Therefore, we will simply assume here, as is usually done in
testing, that the user specifies a fixed set values(a) of possible data
values for each input event a in I . Whenever event a is provided as
input to the Sisl program during testing, a data value v in values(a)
is chosen nondeterministically by the test driver and then passed as
argument of event a to the Sisl service.
For a given set V of sets values(a) of data values, we define the
restriction of M by V as follows.
DEFINITION 3.2. Let M = (I, S, s0, T) be the state machine defining the semantics of a Sisl program as defined in the previous section. Let V be a (complete) valuation function that associates with each input event a a finite nonempty set values(a) of data values: for all a ∈ I, values(a) is nonempty. We call the restriction of M by V the state machine M_V = (I, S_V, s0, T_V), where T_V = {(s, a(v), s') ∈ T | v ∈ values(a)} and S_V is the set of states reachable from s0 using transitions in T_V.
The restriction of M by V is thus the set of states the Sisl program
represented by M can reach when data values associated to input
events are taken from V exclusively. Note that, since Sisl programs
are deterministic, the successor state s' reached after receiving an
input event a(v) in a state s is always unique.
In the remainder of this section, we discuss a restricted version
of our original problem, namely how to check automatically and
efficiently that L_O(M_V) ⊆ L(P), instead of L_O(M) ⊆ L(P).
A simple algorithm for checking whether L_O(M_V) ⊆ L(P) is
presented in Figure 1. At any reached state, this algorithm non-deterministically
selects an enabled input event a (Step 2.b) and a
data value v in values(a) (Step 2.c), and then sends a(v) to the
Sisl program being tested (Step 2.d). Nondeterminism is simulated
by the special operation "VS toss" supported by VeriSoft. This
operation takes as argument a positive integer n, and returns an integer
in [0, n]. The operation is nondeterministic: the execution of
a transition starting with VS toss(n) may yield up to n + 1 different
successor states, corresponding to different values returned by
VS toss.
The execution of this nondeterministic and nonterminating test-
driver algorithm is controlled by VeriSoft. VeriSoft provides the
value to be returned for each call to VS toss in order to systematically
explore all the possibilities. It also forces the termination of
every execution when a certain depth is reached. This maximum
depth is specified by the user via one of the several parameters
that the user can set to control the state-space search performed by
VeriSoft, and is measured here by the number of calls to VS toss
executed so far in the current run of the algorithm.
Checking L_O(M_V) ⊆ L(P) can be done as follows. Whenever
an event a(v) is sent to the Sisl service SSV to be tested, the automaton
AP representing the property P is evaluated on a(v). If
a ∈ O, the automaton may move and reach a new state. By con-
struction, if AP reaches a non-accepting state, this means that the
property P is violated. An error is then reported, and the search
stops. The last scenario executed is saved in memory, and can be
replayed later by the user with the VeriSoft simulator.
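The property automaton can be run alongside the test driver as a simple monitor; the sketch below assumes an explicit transition table and is only illustrative (the experiments described in Section 4 use the Triveni specification-based testing package instead).

import java.util.Map;
import java.util.Set;

// Monitor for a prefix-closed safety property represented by a deterministic
// finite automaton AP over the observable events O; a missing transition is
// treated as rejection, i.e., a violation of the property.
final class SafetyMonitor {
    private final Set<String> observable;                    // the set O
    private final Map<Integer, Map<String, Integer>> delta;  // state -> (event label -> state)
    private int state;                                       // current state of AP
    private boolean violated = false;

    SafetyMonitor(Set<String> observable, Map<Integer, Map<String, Integer>> delta, int initial) {
        this.observable = observable;
        this.delta = delta;
        this.state = initial;
    }

    // Called for every event a(v) sent to the Sisl service being tested.
    void onEvent(String label) {
        if (violated || !observable.contains(label)) return; // unobservable events are ignored
        Integer next = delta.getOrDefault(state, Map.of()).get(label);
        if (next == null) violated = true; else state = next;
    }

    boolean propertyViolated() { return violated; }
}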
The algorithm of Figure 1 generates all possible sequences of input events (and associated data values) that can be taken as input by the Sisl program. In the rest of this section, we show that generating all these sequences is actually not necessary to check the type of safety properties we consider here.
1. Initialize the Sisl service SSV.
2. Loop forever:
(a) Let E be the set of non-overwrite events in I that are enabled in the current state s. If E is empty, stop.
(b) Let i = VS toss(|E| - 1); let a be the i-th element of E.
(c) Let j = VS toss(|values(a)| - 1); let v be the j-th element of values(a).
(d) Send a(v) to SSV.
Figure 1: Simple Algorithm
Precisely, we show that the algorithm of Figure 1 can be optimized so that it does not generate all possible sequences of input events enabled in constraint nodes, provided that these events are not observable.
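In Java, the driver of Figure 1 can be sketched as follows; the SislService interface and the Java binding of VS toss are assumptions made for illustration (Section 4 describes how VS toss is actually reached through the Java Native Interface). VeriSoft controls the values returned by vsToss, so repeatedly re-running this driver from the initial state systematically explores the branching points it introduces.

import java.util.List;

final class SimpleDriver {
    // Assumed native binding to VeriSoft's VS toss(n); returns a value in [0, n].
    static native int vsToss(int n);

    // Hypothetical view of the Sisl service under test.
    interface SislService {
        List<String> enabledNonOverwriteEvents();   // the set E in the current state
        List<Object> values(String label);          // the user-specified set values(a)
        void send(String label, Object value);      // deliver a(v) to the service logic
    }

    static void run(SislService service) {
        while (true) {                                            // step 2: loop forever
            List<String> e = service.enabledNonOverwriteEvents(); // step 2.a
            if (e.isEmpty()) return;
            String a = e.get(vsToss(e.size() - 1));               // step 2.b
            List<Object> vals = service.values(a);
            Object v = vals.get(vsToss(vals.size() - 1));         // step 2.c
            service.send(a, v);                                   // step 2.d
        }
    }
}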
EXAMPLE 3.3. Let P be a Sisl program containing a constraint node, and let π_1, ..., π_n be the predicates of that node, with signatures Σ(π_1), ..., Σ(π_n). Let O, the set of observable events, be empty. All interleavings of unobservable input events that drive the system to a same successor node of a constraint node have the same void effect on the property being checked; so, there is no need to generate all of these. Consequently, the testing of this constraint node requires the generation of one sequence of input events rather than the |∪_i Σ(π_i)|! sequences generated by the naive algorithm.
On the other hand, if O is ∪_i Σ(π_i), the testing of this constraint node requires the generation of all the |∪_i Σ(π_i)|! sequences generated by the naive algorithm.
This observation is exploited in the algorithm of Figure 2. This
algorithm behaves as the previous one except in the case of constraint
nodes. In that case (Step 2.c), the algorithm starts (Step
2.c.ii) by nondeterministically choosing a data value va to be associated
with each input event a enabled in the constraint node. Then
(Step 2.c.iii), it evaluates successively all the predicates π_i of the constraint node. Predicates π_i that are violated by the selected set of data values v_a for each a ∈ E are added to a set Marked of violated predicates, unless there exists another predicate π_j previously added in Marked whose signature Σ(π_j) is contained in the signature Σ(π_i) or whose signature contains the same set of observable events as Σ(π_i) (Step 2.c.iii.A). If the selected set of data values does not satisfy all the predicates of the node (Step 2.c.iv), one violated predicate π_i in the set Marked such that target(π_i) is not null is nondeterministically chosen, and the input events in its signature are selected in the set S of events to be provided to the Sisl program (Step 2.c.iv.E); otherwise, all predicates are satisfied, and set S is the set of all input events that are enabled in the node (Step 2.c.v). Then,
all unobservable input events in set S (if any) are sent to the Sisl
program (Step 2.c.vi). Finally, all remaining (hence observable) input
events in S are sent to the Sisl program one by one in some
random order picked nondeterministically among the set of all possible
interleavings of these events (Step 2.c.vii).
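The marking step (Step 2.c.iii) is the heart of the optimization; a Java sketch of it, written against the hypothetical Predicate interface of Section 2 and using the exclusion condition as reconstructed above, is shown below. It is an illustration of the idea, not the authors' implementation.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

final class MarkingStep {
    // Returns the Marked set: violated predicates, skipping pi_i when some
    // previously marked pi_j has a contained signature or the same observable
    // part of its signature.
    static List<Predicate> mark(List<Predicate> predicates,          // pi_1..pi_m, in order
                                Map<String, Object> chosenValues,    // v_a for each enabled a
                                Set<String> observable) {            // the set O
        List<Predicate> marked = new ArrayList<>();
        for (Predicate pi : predicates) {
            if (pi.evaluate(chosenValues)) continue;                 // pi_i is not violated
            boolean covered = false;
            for (Predicate pj : marked) {
                if (pi.signature().containsAll(pj.signature())
                        || observablePart(pj, observable).equals(observablePart(pi, observable))) {
                    covered = true;
                    break;
                }
            }
            if (!covered) marked.add(pi);
        }
        return marked;
    }

    private static Set<String> observablePart(Predicate p, Set<String> observable) {
        Set<String> s = new HashSet<>(p.signature());
        s.retainAll(observable);
        return s;
    }
}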
The correctness of the algorithm of Figure 2 can be proved by
showing that there is a weak bisimulation between the nodes reached
during the execution of the algorithm and the nodes of M_V, the restriction
of M by V . This in turn guarantees that all observable
behaviors of M_V can be observed during the nondeterministic executions
of the algorithm of Figure 2. Let node(s) be the current node of the Sisl program when it is in state s. We write n =w=> n' to denote that there exists a sequence of non-overwrite input events w' such that s =w'=> s', w'|_O = w, node(s) = n, node(s') = n', and no node other than n and n' is traversed during the transition from s to s'.
THEOREM 3.4. Let M_V be the restriction of the state machine M defining the semantics of a Sisl program by a valuation function V. Let M' be the state machine defined with the set T' representing the set of state transitions performed by the Sisl program when it is being tested by the algorithm of Figure 2. Then, for any reachable state s in M_V, we have for n = node(s): n =w=> n' in M' if n =w=> n' in M_V, and n =w=> n' in M_V if n =w=> n' in M'.
PROOF. (Sketch) The proof is immediate if n is not a constraint node. Consider the case where n is a constraint node. Let s be a reachable state in M_V such that node(s) = n. To simplify the presentation, assume s is the first state reached when entering node n during that visit of n. (Other cases can be treated in a similar way.) Let us show that every node transition n =w=> n' in M_V can be matched by a node transition n =w=> n' in M' (the converse is immediate).
If n =w=> n' in M_V, there exists a sequence of non-overwrite input events w' such that s =w'=> s', w'|_O = w, and no node other than n and n' is traversed during the transition from s to s'. For simplicity, assume that s' denotes the first state reached when entering node n' when executing w' from s. (Again, other cases can be treated in a similar way.)
If n' is the node P_next reached when all predicates in n are satisfied, then the set {v_a | a ∈ w'} of data values associated to input events provided during the execution of w' from state s to s' satisfies all the predicates in n. Thus, there exists an execution of the algorithm of Figure 2 that can select this set of data values in Step 2.c.ii. In that case, none of the predicates of node n will be marked, the algorithm will select all the input events in w to be sent to the Sisl program (Step 2.c.v), and there exists one execution of the algorithm that will send all the observable events in w' in the same order as in w (Step 2.c.vii).
Otherwise, n' is the (non-null) target node target(π_i) reached after a predicate π_i of n is violated. This means that the set {v_a | a ∈ w'} of data values associated to input events provided during the execution of w' from state s to s' violates predicate π_i. Since n' = target(π_i) is reached, this also means that no other predicate π_j with j < i and a non-null target(π_j) is violated (otherwise, target(π_j) would be reached instead of target(π_i) when executing w' from s).
1. Initialize the Sisl service SSV.
2. Loop forever:
(a) Let E be the set of non-overwrite events in I that are enabled in the current state s. If E is empty, stop.
(b) If the current node of SSV is NOT a constraint node:
i. Let i = VS toss(|E| - 1); let a be the i-th element of E.
ii. Let j = VS toss(|values(a)| - 1); let v be the j-th element of values(a).
iii. Send a(v) to SSV.
(c) If the current node of SSV is a constraint node:
i. Let π_1, ..., π_m be the sequence of predicates associated with the constraint node.
ii. Loop through all events a in E (in any order):
A. Let j = VS toss(|values(a)| - 1); let v_a be the j-th element of values(a).
iii. Loop through all i from 1 to m (in order):
A. If π_i is violated by the data values {v_a | a ∈ E} AND no π_j in Marked satisfies (Σ(π_j) ⊆ Σ(π_i) or Σ(π_j) ∩ O = Σ(π_i) ∩ O), then add π_i to set Marked.
iv. If |Marked| > 0:
A. Remove from Marked all π_i such that target(π_i) = null.
B. If
C. Let i = VS toss(|Marked| - 1).
D. Let π be the i-th element of Marked.
E. Let S = Σ(π).
v. Else:
A. Let S = E.
vi. For all a ∈ (S \ O), send a(v_a) to SSV.
vii. Loop until (S ∩ O) is empty:
A. Let i = VS toss(|S ∩ O| - 1); let a be the i-th element of S ∩ O.
B. Send a(v_a) to SSV.
C. Remove event a from set S.
Figure 2: Optimized Algorithm
There exists an execution of the algorithm of Figure 2 that can select in Step 2.c.ii the set of data values {v_a | a ∈ w'}. This set of data values violates predicate π_i, which will then be added to the set Marked in Step 2.c.iii of the algorithm, unless there exists another violated predicate π_j with j < i, previously added in Marked, and satisfying the condition of Step 2.c.iii.A with respect to π_i; in that case, violating this other predicate also leads to node n' via a sequence w'' of input events such that w''|_O = w. In any case, a predicate whose violation leads to n' via a sequence w''' of events such that w'''|_O = w is added to Marked. If this predicate is selected in Step 2.c.iv of the algorithm, all the events in its signature (which includes all events in w) are then sent to the Sisl program, and there exists one execution of the algorithm that will send all the observable events in w''' in the same order as in w (Step 2.c.vii).
An immediate corollary of the above theorem is that all observable behaviors of M_V are generated by the algorithm of Figure 2, which can thus be used to check whether L_O(M_V) ⊆ L(P).
We can also prove that the optimization for constraint nodes performed
by Step 2.c of the algorithm of Figure 2 is optimal in the
following sense.
THEOREM 3.5. Let M_V and M' be defined as in Theorem 3.4. Let s be any reachable state in M_V such that n = node(s) is a constraint node. For any given set {v_a | a ∈ E} of data values associated to input events enabled when the node is entered, if n =w=> n' in M_V, then there exists exactly one transition n =w=> n' in M'.
PROOF. (Sketch) By contradiction. Assume that there exist two transitions n =w=> n' in M'. This implies that there exist two predicates π_i and π_j of n that are both violated by the set {v_a | a ∈ E} of data values, whose signatures are not included in each other, and such that target(π_i) and target(π_j) are both non-null. Since both transitions are labeled by w, we have Σ(π_i) ∩ O = Σ(π_j) ∩ O = {a ∈ O | a appears in w}. Therefore, by the condition of Step 2.c.iii.A of the algorithm of Figure 2, π_i and π_j cannot both be added to the set Marked, and hence the algorithm cannot visit node n' twice by executing from s two different sequences of events w' and w'' such that w'|_O = w''|_O = w.
4. IMPLEMENTATION ISSUES
To automatically and systematically test Sisl programs for violation
of safety properties using the algorithms presented in the
previous section, we have integrated VeriSoft and Sisl.
VeriSoft is a tool for systematically exploring the state spaces of
systems composed of several concurrent processes executing arbitrary
code written in any language. The state space of a system is
a directed graph that represents the combined behavior of all the
components of the system. Paths in this graph correspond to sequences
of operations (scenarios) that can be observed during executions
of the system. VeriSoft systematically explores the state
space of a system by controlling and observing the execution of
all the components, and by reinitializing their executions. VeriSoft
drives the execution of the whole system by intercepting, suspending
and resuming the execution of specific operations (system calls)
executed by the implementation being tested. Examples of operations
intercepted by VeriSoft are operations on communication objects
(e.g., sending or receiving a message), and the VS toss(n)
operation mentioned earlier, which simulates nondeterminism and
introduces a branching point with n+ 1 branches in the state space
whenever it is executed. VeriSoft can always guarantee a complete
coverage of the state space up to some depth; in other words, all
possible executions of the system up to that depth are guaranteed
to be covered. Since VeriSoft can typically generate, execute and
evaluate thousands of tests per minute, it can quickly reveal behaviors
that are virtually impossible to detect using conventional
testing techniques. More details about the state-space exploration
techniques used by VeriSoft are given in [11]. VeriSoft has been
applied successfully for analyzing several software products developed
in Lucent Technologies, such as telephone-switching applications
and implementations of network protocols (e.g., see [12]).
In order to use VeriSoft for controlling the execution of the non-deterministic
algorithms from Section 3, we have built a "VeriSoft
interface" to Sisl. This interface provides the necessary information
requested by the algorithms of the previous section (such as the current
set of enabled input events, etc.). These algorithms were implemented
in a straightforward manner in Java. Calls to the external
operation VS toss were performed using the Java Native Interface.
VeriSoft can then control the execution of the resulting single process
formed by the combination of the Sisl application being tested
and its nondeterministic test driver, by intercepting calls to VS toss
and providing the value returned by these calls, and by creating and
destroying the Java Virtual Machine to reinitialize the program.
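For illustration, such a binding could look like the following; the class, method, and library names are hypothetical, since the paper does not give them.

// Hypothetical JNI binding for VeriSoft's VS toss operation.
final class VeriSoft {
    static { System.loadLibrary("verisoft"); }   // assumed name of the native library
    // Returns an integer in [0, n]; VeriSoft intercepts the call and chooses the value.
    static native int vsToss(int n);
}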
For testing of safety properties, we used the specification-based
testing package of Triveni [6], a framework for event-driven concurrent
programming in Java. This implementation uses a standard
algorithm [20] to translate a given safety formula in propositional
linear-time temporal logic into a finite-state automaton
whose language is the set of finite event sequences that violate the
formula.
5. EXAMPLE AND EXPERIMENTS
Consider Table 1 which describes an interactive banking service
called the Teller.
To motivate the Sisl implementation of this service, we describe
the transfer of funds in more detail. The transfer capability establishes
a number of constraints among the three input events (source
account, target account, and transfer amount) required to make a
the specified source and target accounts both must be valid
accounts for the previously given (login,PIN)
the dollar amount must be greater than zero and less than or
equal to the balance of the source account;
it must be possible to transfer money between the source and
target accounts.
These constraints capture the minimum requirements on the input
for a transfer transaction to proceed. Perhaps more important
is what these constraints do not specify: for example, they do not
specify an ordering on the three inputs, or what to do if a user has
repeatedly entered incorrect information.
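For example, the second constraint could be expressed as a predicate over the src and amt events; the code below is written against the hypothetical Predicate interface sketched in Section 2 and only illustrates the shape such a constraint takes, not the actual Sisl mark-up used by the authors.

import java.util.Map;
import java.util.Set;

// The "amount" constraint of the transfer capability: amt > 0 and
// amt <= Balance(src).  Account lookup is abstracted behind a small interface.
final class AmountWithinBalance implements Predicate {
    interface Accounts { double balance(String account); }

    private final Accounts accounts;
    AmountWithinBalance(Accounts accounts) { this.accounts = accounts; }

    public Set<String> signature() { return Set.of("src", "amt"); }

    public boolean evaluate(Map<String, Object> received) {
        double amt = ((Number) received.get("amt")).doubleValue();
        String src = (String) received.get("src");
        return amt > 0 && amt <= accounts.balance(src);
    }
}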
Figure 3 depicts the Sisl service logic for the Teller, specified
as a reactive constraint graph. It uses constraint nodes to describe
the requirements on the transfer capability, deposit capability, and
withdrawal capability. In particular, there is a constraint node for
each transaction type. In order to enter the service, the user must
first provide a startService event (e.g. dialing into the service or
going to the web page); this is not depicted in Figure 3. The user
then must successfully log in with a valid login and pin combi-
nation. Since the login and pin may be provided in either order, a
constraint node expresses this requirement. For expository simplic-
ity, we assume that the login and pin must be identical for the login
to be successful. After the user has successfully logged in, the service
provides a choice among the different transaction types. If a
startTransfer event is provided, for example, control flows to the
transfer constraint node. The user is then prompted for a source ac-
count, target account and amount, in any order. If the user provides
a source account and an amount which is greater than the balance
of the source account, for example, the constraint amt <= Bal-
ance(src) will be violated. Since no explicit failure target nodes
have been specified, control flow will remain in the current node.
If the user provides consistent information about both accounts and
the amount, the transfer will be performed and control reverts back
to the choice node on transaction types. The self-loop annotated
with "!has Quit" on the choice node indicates that the the subgraph
from this node will be repeatedly executed until the precondition
becomes false (i.e. the user has quit the service).
Some temporal properties of interest for the Teller include:
The service can accept a source account only if the current
transaction type is either a withdrawal or transfer.
The service can accept a target account only if the current
transaction type is either a deposit or transfer.
The service can begin a deposit transaction only if the user
has given a login and pin in the past and has not quit the
service.
Each property described above in English has an equivalent formula
in propositional linear-time temporal logic. In our terminol-
ogy, the set of observable events when analyzing each property is
the set of events that occur in the corresponding formula. For ex-
ample, the set of observable events when analyzing the first property
is {src, startWithdrawal, startDeposit, startTransfer}; in particular,
the reference to the "last" transaction type necessitates the startDe-
posit event to occur in the corresponding formula.
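For instance, assuming a past-time "since" operator is available, the first property could be phrased roughly as follows; this is only one possible formulation, not necessarily the formula used in the experiments:

\Box\,\bigl(\mathit{src} \rightarrow \neg\mathit{startDeposit}\;\mathcal{S}\;(\mathit{startWithdrawal} \vee \mathit{startTransfer})\bigr)

read as: whenever src is accepted, no startDeposit has occurred since the last startWithdrawal or startTransfer.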
In our implementation, the two valid accounts are checking and
savings, and transfers are permitted only between accounts of different
types, i.e. between checking and savings, and vice-versa.
Money market accounts are not considered to be valid accounts in
this example. Initially, the balance on all accounts is zero, and the
hasQuit variable is set to false.
Our implementation of this portion of the Teller consists of approximately
200 lines of text in a mark-up language, which is automatically
translated by the Sisl toolset into approximately 500 lines
of Java code. It currently has applet, HTML, automatic speech
recognition, and VoiceXML [21] interfaces, each about 300 lines,
all sharing the same service logic.
5.1 Results of Experiments
To evaluate our approach and compare the efficiency of the algorithms
presented in Section 3, we performed systematic state-space
explorations on the Teller service.
The Teller is an interactive banking service. The service is login protected; the customer must authenticate themselves by entering an
identifier (login) and PIN (password) to access the functions. As customers may have many money accounts, most functions require the
customer to select the account(s) involved. Once authenticated, the customer may:
Make a deposit.
Make a withdrawal. The service makes sure the customer has enough money in the account, then withdraws the specified amount.
Transfer funds between accounts. The service prompts the customer to select a source and target account, and a transfer amount, and
performs the transfer if (1) the customer has enough money in the source account, and (2) transfers are permitted between the two accounts.
Quit the service.
Table 1: A high-level description of the Teller.
Figure 3: The Reactive Constraint Graph for the Teller
We first selected the following data to be associated to each
event: the name and pin events each were assigned two names from the same set {John, Mary}, the src and tgt events each were assigned three account types from the set {checking, savings, moneymarket}, and the amt event was assigned values from the set {0, 100}.
For the analysis, we first tested the following temporal prop-
erty: the service can accept a target account only if the current
transaction type is a deposit. The set of observable events of this
property is {tgt, startDeposit, startWithdrawal, startTransfer}. As
expected, both algorithms reported a violation trace in which the
current transaction type is a transfer and a target account was accepted.
We then ran a set of experiments in which the set of observable
events was empty, in order to measure the efficiency of the algo-
rithms. This actually tests that there were no uncaught run-time
exceptions in the Sisl (Java) program along all paths up to a certain
depth, as measured in the number of events sent to the service
logic. We ran these tests in succession, each time increasing the
maximum depth of the paths to be explored. Our results are summarized
in the Figure 4. The plot on the left depicts the number
of paths explored by the algorithms against the maximum depth of
paths to be explored, while the plot on the right depicts the running
time of the algorithms in seconds against the maximum depth of
paths to be explored.
Some interesting observations can be made about the experimental
data. First, as expected, the running times of both algorithms are
proportional to the number of paths explored. Second, consider the
rate at which the number of paths and running times increase with
respect to the maximum path depth; this rate is significantly less
for the optimized algorithm.
An important observation about the experimental data is that for
maximum depths of 5 and 6, the optimized algorithm explores the
same number of paths and hence has the same running time! This
phenomenon occurs because the optimized algorithm performs the
bulk of its work upon entry into a constraint node. This is especially
true in the case of an empty set of observable events: in this
case, all the work is done by the optimized algorithm upon entry
into the constraint node. For example, in the Teller, paths of depth
4 consist of a startService event followed by a consistent name and
pin event in either order, followed by a transaction type. At depth
5, control enters one of the transaction constraint nodes. The optimized
algorithm performs all its work in choosing the event data,
computing the marked predicates, and choosing a marked predicate
upon entry into the node. It then merely sends the corresponding
events in a fixed order. Hence the set of paths explored
is identical at depths 5 and 6, and the running time is the same
(except for the extremely minor activity of sending an additional
event). A similar phenomenon occurs at depths 8, 9, and 10.
The results show that even for small examples such as the Teller,
the optimized algorithm can provide a significant improvement in
efficiency: for example, at depth 11, the simple algorithm takes
over one and a half hours to complete, while the optimized algorithm
takes only eleven minutes. Hence, the latter can be used
to efficiently and systematically test much more complicated services.
Figure 4: Experimental Results for Number of Paths Explored (left) and Running Time in Seconds (right) vs. Path Depth
6. CONCLUDING REMARKS
6.1 Sisl Applications
We are currently using Sisl in several projects involving multi-modal
interactive services. For example, Sisl is being used to prototype
a new generation of call processing services for a Lucent Technologies
switching product. As part of this research/development
collaboration, we are developing some call processing features that
may form the basis of new product offerings. The service logics
for these features are expected to be quite complex, and need to be
tested thoroughly. We are planning to use the techniques and tools
presented in this paper to test these applications.
We are also planning to test two other Sisl applications being developed
at Lucent Technologies: an interactive service based on a
system for visual exploration and analysis of data, and some collaborative
applications in which users may interact with the system
through a rich collection of devices.
6.2 Related Work
We conclude with a comparison of our approach with other related
work.
Combining an open reactive system with its most general environment
is related to the idea of "hiding" a set of visible actions of
a process in a process calculus [13, 17].
Closing automatically open reactive (event-driven) programs for
systematic testing (model-checking) purposes has been studied in [5,
8]. For sequential (data-driven) programs, numerous algorithms
have also been proposed to automatically generate a set of input
data that is sufficient to exercise and test all the possible paths in
the control-flow graph of a program, for instance. This previous
work makes extensive use of static analysis techniques (e.g., [7, 18,
2]), which automatically extract information about the dynamic behavior
of a sequential program by examining its text. In contrast,
the algorithms presented here dynamically detects at run-time the
set of input events that the application under test is currently ready
to accept, and uses that information to drive its execution, without
using any static analysis techniques. This makes our algorithms directly
applicable to any host language (Java, Perl scripts, etc.) and
environment (including external databases, etc.).
The observation exploited by our second algorithm, namely that
interleavings of input events at a constraint node sometimes have
the same effect on the overall behavior of the system, is somewhat
similar to the intuition behind partial-order reduction algorithms
used in model-checking to prune the state spaces of concurrent
systems (e.g., see [10]). A major difference is that these
algorithms exploit a notion of "independence" (commutativity) on
actions executed by interacting concurrent processes. In contrast,
constraint-based programs are purely sequential. The reduction we
obtain here is derived directly from the structure of the program
and takes advantage of a form of symmetry induced by constraint
nodes and their ability to specify sets of input events rather than single
events. Another example of programming language construct
inducing symmetry that can be exploited during verification (sys-
tematic testing) is the "scalarset" [14].
7.
--R
Development principles for dialog-based interfaces
Compilers: Principles
Recognizing safety and liveness.
Sisl: Several interfaces
Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints.
Model Checking for Programming Languages using VeriSoft.
Model Checking Without a Model: An Analysis of the Heart-Beat Monitor of a Telephone Switch using VeriSoft
Communicating Sequential Processes.
Better verification through symmetry.
A speech interface for forms on WWW.
Constraint logic programming.
Communication and Concurrency.
Program Flow Analysis: Theory and Applications.
Concurrent Constraint Programming.
An automata-theoretic approach to automatic program verification
An event driven model for dialogue systems.
--TR
Communicating sequential processes
Compilers: principles, techniques, and tools
Constraint logic programming
Communication and concurrency
Concurrent constraint programming
Model checking for programming languages using VeriSoft
Model checking without a model
Automatically closing open reactive programs
Filter-based model checking of partial systems
Abstract interpretation
Partial-Order Methods for the Verification of Concurrent Systems
Program Flow Analysis
Better Verification Through Symmetry
Design and Implementation of Triveni
--CTR
Jean Berstel , Stefano Crespi Reghizzi , Gilles Roussel , Pierluigi San Pietro, A scalable formal method for design and automatic checking of user interfaces, Proceedings of the 23rd International Conference on Software Engineering, p.453-462, May 12-19, 2001, Toronto, Ontario, Canada
Jean Berstel , Stefano Crespi Reghizzi , Gilles Roussel , Pierluigi San Pietro, A scalable formal method for design and automatic checking of user interfaces, ACM Transactions on Software Engineering and Methodology (TOSEM), v.14 n.2, p.124-167, April 2005
Patrice Godefroid, Software Model Checking: The VeriSoft Approach, Formal Methods in System Design, v.26 n.2, p.77-101, March 2005 | model checking;interactive services;testing;state-space reduction;constraint-based languages;state explosion;verification |
355188 | Interpreting Stale Load Information. | In this paper, we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well-known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can: 1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, 2) outperform the best of the other algorithms we examine by as much as 60 percent when information is moderately old, 3) significantly outperform random load distribution when information is older still, and 4) avoid pathological behavior even when information is extremely old. | Introduction
When balancing load in a distributed system, it is well known that the strategy of sending
each request to the least-loaded machine can behave badly if load information is old [11,
18, 21]. In such systems a "herd effect" often develops, and machines that appear to be
underutilized quickly become overloaded because everyone sends their requests to those
machines until new load information is propagated. To combat this problem, some systems
This work was supported in part by an NSF CISE grant (CDA-9624082) and grants from Novell and Sun.
Dahlin was also supported by an NSF CAREER grant (9733842).
adopt randomized strategies that ignore load information or that only use a small subset of
load information, but these systems may give up the opportunity to avoid heavily loaded
machines.
Load balancing with stale information is becoming an increasingly important problem for
distributed operating systems. Many recent experimental operating systems have included
process migration facilities [2, 6, 9, 16, 17, 23, 24, 25, 26, 30] and it is now common for workstation
clusters to include production load sharing programs such as LSF [31] or DQS [10].
In addition, many network DNS servers, routers, and switches include the ability to multiplex
incoming requests among equivalent servers [1, 5, 8], and several run-time systems
for distributed parallel computing on clusters or metacomputers include modules to balance
requests among nodes [12, 14]. Server load may also be combined with locality information
for wide area network (WAN) information systems such as selecting an HTTP server or
cache [13, 22, 28]. As such systems include larger numbers of nodes or the distance between
nodes increases, it becomes more expensive to distribute up-to-date load information. Thus,
it is important for such systems to make the best use of old information.
This paper attempts to systematically develop algorithms for using old information. The
core idea is to use not only each server's last reported load information (L i ), but also to use
the age of that information (T ) and an estimate of the rate at which new jobs arrive to change
that information (lambda). For example, under a periodic update model of load information [21]
that updates server load information every T seconds, clients using our algorithm calculate
the fraction of requests they should send to each server in order to equalize the load across
servers by the end of the epoch. Then, for each new request during an epoch, clients randomly
choose a server according to these probabilities.
In this paper, we devise load interpretation (LI) algorithms by analyzing the relevant queuing
systems. We then evaluate these algorithms via simulation under a range of load information
models and workloads. For our LI algorithms, if load information is fresh (e.g., T or lambda or both
are small), then the algorithms tend to send requests to machines that recently reported low
load, and the algorithms match the performance of aggressive algorithms while exceeding
the performance of algorithms that use random subsets of load information or pure random
algorithms that use no load information at all. Conversely, if load information is stale,
the LI algorithms tend to distribute jobs uniformly across servers and thus perform as well
as randomized algorithms and dramatically better than algorithms that naively use load
information. Finally, for load information of modest age, the LI algorithms outperform
current alternatives by as much as 60%.
Other algorithms that attempt to cope with stale load information, such as those proposed
by Mitzenmacher [21], have the added benefit that by restricting the amount of load information
that clients may consider when dispatching jobs, they may reduce the amount of
load information that must be sent across the network. We examine variations of the LI
algorithms that base their decisions on similarly reduced information. We conclude that
even with severely restricted information, the algorithms that use LI can outperform those
that do not. Furthermore, modest amounts of load information allow the LI algorithms to
achieve nearly their full performance. Thus, LI decouples the question of how much load
information should be used from the question of how to interpret that information.
The primary disadvantage of our approach is that it requires clients to estimate or be told
the job arrival rate (-) and the age of load information (T ). If this information is not avail-
able, or if clients misestimate these values, our algorithms can have poor performance. We
note, however, that although other algorithms that make use of stale load information do
not explicitly track these factors, those algorithms do implicitly assume that these parameters
fall within the range of values for which load information can be considered "fresh;" if
the parameters fall outside of this range, those algorithms can perform quite badly. Con-
versely, because our algorithms explicitly include these parameters, they gracefully degrade
as information becomes relatively more stale.
The rest of this paper proceeds as follows. Section 2 describes related work with a particular
emphasis on Mitzenmacher's recent study [21], on which we base much of our methodology
and several of our system models. Section 3 introduces our models of old information
and Section 4 describes the load interpretation algorithms we use. Section 5 contains our
experimental evaluation of the algorithms, and Section 6 summarizes our conclusions.
Related work
Awerbuch et al. [3] examined load balancing with very limited information. However, their
model differs considerably from ours. In particular, they focus on the task of selecting a
good server for a job when other jobs are placed by an adversary. In our model, jobs are
placed by entities that act in their own best interest but that do not seek to interfere with
one another. This difference allows us to more aggressively use past information to predict
the future.
A number of theoretical studies [4, 7, 15, 20, 27] have suggested that load balancing algorithms
can often be quite effective even if the amount of information examined is severely
restricted. We explore how to combine this idea with LI in Section 5.6.
Several studies have examined the behavior of load balancing with old or limited information
in queuing studies. Eager et al. [11] found that simple strategies using limited information
worked well. Mirchandaney et al. [18, 19] found that as delay increases, random assignment
performs as well as strategies that use load information.
Several systems have used the heuristic of weighing recent information more heavily than
old information. For example, the Smart Clients prototype [29] distributed network requests
across a group of servers using such a heuristic. Additionally, a common technique in process
migration facilities is to use an exponentially decaying average to estimate load on a
machine (e.g., Load_new = k * Load_old + (1 - k) * Load_current, for some constant 0 < k < 1). Unfortunately,
the algorithms used by these systems are somewhat ad hoc and it is not clear under what
circumstances to use these algorithms or how to set their constants. A goal of our study is
to construct a systematic framework for using old load information.
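As a concrete illustration of the decaying-average heuristic mentioned above, the sketch below shows one plausible form of such an estimator. This is not code from any of the cited systems; the constant K and the function name are our own choices for the example.

```python
# Minimal sketch of an exponentially decaying load average, assuming the
# standard form Load_new = k * Load_old + (1 - k) * Load_current.
K = 0.75  # hypothetical smoothing constant; weight given to the old estimate

def update_load_estimate(load_old, load_current, k=K):
    """Blend the previous estimate with the newest load sample."""
    return k * load_old + (1.0 - k) * load_current
```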
Our study most closely resembles Mitzenmacher's work [21]. Mitzenmacher examined a
system in which arriving jobs are sent to one of several servers based on stale information
about the servers' loads. The goal in such a system is to minimize average response time.
He examined a family of algorithms that make each server choice from small random subsets
of the servers to avoid the "herd effect" that can cause systems to exhibit poor behavior
when clients chase the apparently least loaded server. Under Mitzenmacher's algorithm, if
there are n servers, instead of sending a request to the least loaded of the n servers, a client
randomly selects a subset of size k of the servers, and sends its request to the least loaded
server from that subset. Note that when k = 1 this algorithm is equivalent to uniform
random selection without load information and that when k = n it is equivalent to sending
each request to the apparently least loaded server. In addition to formulating these
k-subset algorithms as a solution to this problem, Mitzenmacher uses a fluid limit approach
to develop analytic models for these systems for the case when n approaches infinity; however, the
primary results in the study come from simulating the queuing systems, and we follow a
similar simulation methodology here.
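For reference, a minimal sketch of the k-subset dispatch rule described above is given below; the function name is ours, and the server loads are assumed to be the (possibly stale) reported queue lengths.

```python
import random

def k_subset_choice(loads, k):
    """Mitzenmacher-style k-subset dispatch: sample k servers uniformly at
    random and send the request to the one reporting the lowest load."""
    subset = random.sample(range(len(loads)), k)
    return min(subset, key=lambda i: loads[i])
```

With k = 1 this degenerates to uniform random selection, and with k = len(loads) it sends every request to the apparently least loaded server, matching the two extremes noted above.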
Mitzenmacher concludes that the k = 2 version of the algorithm is a good choice in most
situations. He finds that it seldom performs significantly worse and generally performs
significantly better than the more aggressive algorithms (e.g., variations with larger k), and that it also outperforms the uniform random
algorithm for a wide range of update frequencies.
We believe, however, that this approach still has drawbacks. In particular, we note that as
T, the update frequency of load information, changes, the optimal value of k also changes.
For example, under Mitzenmacher's periodic update model and one sample workload he
examines, smaller values of k quickly become much better than larger values as T grows.
Similarly, although one k-subset variation outperforms another for small T in such a workload,
the reverse is true for larger values of T; in one case, the better algorithm is a factor of 2 better than the other
variation.
We also note that under Mitzenmacher's algorithms, the resulting arrival rate at a server
depends only on the server's rank in the sorted list of server loads, not on the magnitude of
difference in the queue lengths between servers. Furthermore, the least loaded servers
receive a disproportionately large share of the requests, while the most heavily loaded servers
receive no requests at all during a phase. More generally, if servers are ordered by load, with
s_0 having the lowest load and s_{n-1} the highest, a given request will be sent to server s_i if and
only if (1) servers s_0 through s_{i-1} are not chosen as part of the random subset of k servers
and (2) server s_i is chosen as part of that subset. Because the probability that any server s_j
is chosen as part of the k-server subset is k/n, the probability that conditions (1) and (2) hold for server s_i is

P(s_i) = (k/n) * C(n-i-1, k-1) / C(n-1, k-1),

where C(a, b) denotes the number of ways to choose b elements from a elements.
(The numerator is the number of ways to choose the remaining k - 1 subset members and place them in the
slots from slot i + 1 to slot n - 1; the denominator is the number of ways to choose
k elements from n elements assuming that element s_i is always chosen.)
Figure 1: Distribution of requests to servers under the k-subset algorithm. (Axes: fraction of requests v. server rank.)
Figure 1 illustrates the resulting distributions for a range of k's. These distributions have
something of the right flavor: more heavily loaded nodes get fewer requests than more lightly
loaded nodes. However, it is not obvious that the slope of any one of the lines is, in general,
right. The figure also illustrates why large values of k are inappropriate when T is large: a
large fraction of requests are concentrated on a small number of servers for a long period of
time.
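The distribution plotted in Figure 1 can be evaluated directly from the expression derived above; the helper below is a small illustration (the function name is ours).

```python
from math import comb

def k_subset_request_fraction(n, k, i):
    """Fraction of requests sent to the server of load rank i (0 = least
    loaded) under the k-subset rule, per the probability derived above."""
    if i > n - k:
        return 0.0  # the k - 1 most heavily loaded servers receive nothing
    return comb(n - i - 1, k - 1) / comb(n, k)
```

Summing this quantity over i = 0, ..., n-1 yields 1, and plotting it for several values of k reproduces the qualitative shape described for Figure 1.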
3 Models of old information
There are several reasonable ways to model a delay from when load information is sampled to
when a decision is made to when the job under consideration arrives at its server. Different
models will be appropriate for different practical systems, and Mitzenmacher found significant
differences in system behavior among models [21]. We therefore examine performance
under three models so that we can understand our results under a wide range of situations
and so that we can compare our results directly to those in the literature. We take the
first two models, periodic update and continuous update, from Mitzenmacher's study. 1 Our
third model, update-on-access, abstracts some additional systems of practical interest. We
describe these models in more detail below.
3.1 Periodic and continuous update
Mitzenmacher's periodic update and continuous update models can be visualized as variations
of a bulletin board model. Under the periodic update model, we imagine that every T
seconds a bulletin board that is visible to all arriving jobs is updated to reflect the current
load of all servers. The period between bulletin board updates is termed a phase. Load
information will thus be accurate at the beginning of a phase and may grow increasingly
inaccurate as the phase progresses.
Under the continuous update model, the bulletin board is constantly updated with load
information, but on average the board state is T seconds behind the true system state. Each
request thus bases its decisions on the state of the system on average T seconds earlier.
Mitzenmacher finds that the probability distribution of T had a significant impact on the
effectiveness of different algorithms. For a given average delay T , distributions with high
variance in which some requests see newer information and others see older information
outperform distributions with less variance where all jobs see data that are about T seconds
old.
Note that the real systems abstracted by these models would typically not include a centralized
bulletin board. The periodic model could represent, for instance, a system that
periodically gathers load information from all servers and then multicasts it to clients. The
continuous update model could represent a system where an arriving job probes the servers
for load information and then chooses a server but where there is a delay T due to network
latency and transfer time from when the servers send their load information to when the
client's job arrives at its destination server.
3.2 Update-on-access
The final model we examine was not examined by Mitzenmacher. In our update-on-access
model, we explicitly model separate clients sending requests to the servers, and different
clients may have different views of the system load. In particular, when a client sends a
request to a server, we assume that the server replies with a message that contains the
system's current load and that snapshot of system load may be used by the client's next
request. In such a system, the average load update delay, T , is equal to a client's inter-request
time. Thus, the update-on-access model assumes that jobs sent by active clients will
have a fresher picture of load than jobs sent by inactive clients.
Mitzenmacher found a third model, individual updates, to have similar behavior to the periodic update
model, so we omit analysis of this model for compactness.
We consider this model because it may be applicable for problems such as the server selection
problem on the Internet [13, 22] where it may be prohibitively expensive to maintain load
information at clients that are not actively using a service, but where it may be possible
for clients to maintain good pictures of server load while they are actively using a service.
Furthermore, we hypothesize that if a system exhibits bursty access patterns, it may be able
to perform good load balancing even though each node's load information is, on average,
quite stale.
A disadvantage to using the update-on-access model is that it is more complex than the
previous models. For example, under this model, results depend not only on the aggregate
request rate but also on the number of clients generating a given number of requests. If
there are many clients generating a certain number of requests, their load information will
be on average older than if there is one client generating the same number of requests.
4 Algorithms for interpreting old information
In this section we describe our algorithms for load balancing, which work by interpreting load
information in the context of its age. We first describe the basic algorithm under the periodic
update model and then describe a more aggressive algorithm under the same model. Finally,
we describe minor variations of the algorithms to adapt them for the continuous update and
update-on-access models.
In general, our algorithms for interpreting load information follow two principles that distinguish
them from previous algorithms. First, we consider the magnitude of imbalance between
nodes, not just the nodes' ranks. Second, we modify our interpretation of information based
on its age and the arrival rate of requests in the system to account for expected changes to
system state over time.
In the descriptions below, we use the following notation:
T Average age of the load information
(The specific meaning of T depends on the update model.)
n Number of servers
lambda Average per-server arrival rate
L_i Reported load (queue length) on server i
arrive_T The number of requests expected to arrive during a phase
P_i The probability that an arriving request will be sent to server i
4.1 Algorithms for periodic update model
During a phase of length T, arrive_T = lambda * n * T requests arrive in the system. The
goal of the Basic Load Interpretation (Basic LI) algorithm is to determine what fraction of
those requests should be sent to each server in order to balance load (represented as server
queue length) so that the sum of the jobs at the servers at the start of the phase plus the
jobs that arrive during the phase are equal across all servers. 2 So, if we begin with L_tot jobs
at the servers and L_i jobs at server i, the probability P_i that we should send an arriving job
to server i is

P_i = ( (L_tot + arrive_T)/n - L_i ) / arrive_T   if for all i, (L_tot + arrive_T)/n >= L_i; see below otherwise.   (1)
The first term in the numerator is the number of jobs that should end up at each server to
evenly divide the incoming jobs plus the current jobs. The second term in the numerator is
the jobs already at server i. So the numerator is the number of jobs that should be sent to
server i during this phase. The denominator is the total number of jobs that are expected
to arrive during the phase. Thus the Basic LI algorithm is to send each arriving request to
server i with probability P i as calculated above for the current phase.
Note that if (L_tot + arrive_T)/n < L_i for some i, then the phase is too short to completely equalize
the load. In that case, we want to place the arrive_T requests in the least loaded buckets
to even things out as well as we can. We use the following simple procedure in that case:
at the start of the phase, pretend to place the arrive_T requests greedily and sequentially in the
least loaded buckets and keep track of the number of requests placed in each bucket (tmp_i).
During the phase, send each arriving request to server i with probability P_i = tmp_i / arrive_T.
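The following sketch implements Equation 1 and the greedy fallback as we read them; it assumes arrive_T = lambda * n * T and a strictly positive arrival rate, and the function name is ours rather than anything from the paper's simulator.

```python
def basic_li_probabilities(loads, lam, T):
    """Per-server dispatch probabilities P_i for one phase of Basic LI.
    loads: last reported queue lengths L_i (possibly stale)
    lam:   estimated per-server arrival rate (lambda)
    T:     age of the load information / phase length"""
    n = len(loads)
    arrive_T = lam * n * T                  # requests expected this phase
    target = (sum(loads) + arrive_T) / n    # desired queue length per server
    if all(target >= L_i for L_i in loads):
        return [(target - L_i) / arrive_T for L_i in loads]
    # Phase too short to equalize: greedily place the expected requests in
    # the least loaded buckets (the tmp_i of the text), then dispatch in
    # proportion to the resulting counts.
    tmp, sim = [0] * n, list(loads)
    for _ in range(int(round(arrive_T))):
        j = min(range(n), key=lambda s: sim[s])
        sim[j] += 1
        tmp[j] += 1
    return [t / arrive_T for t in tmp]
```

A dispatcher would then select a server for each arriving request with, for example, random.choices(range(len(loads)), weights=basic_li_probabilities(loads, lam, T))[0].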
4.1.1 More aggressive algorithm
The above algorithm seems sub-optimal in the following sense: it tries to equalize the load
across servers by the end of a phase. Thus, if the phase is long, the system may spend a long
time with significantly unbalanced server load. A more aggressive algorithm might attempt
to subdivide the phase and use the first part of the phase to bring all machines to an even
state and then distribute requests uniformly across servers for the rest of the phase.
Our aggressive algorithm works as follows: without loss of generality assume that the servers
have been sorted by L_i (with L_0 <= L_1 <= ... <= L_{n-1}) so that machine i is the ith least loaded server,
and set L_n to an infinite sentinel value. Then, subdivide the phase into n subintervals. During
subinterval j (for j = 0, 1, ..., n-1), evenly distribute arrivals across machines 0 through j to bring their
2 Notice that we make the simplifying assumption that the departure rate is the same at all servers so
that we can ignore the effect of departing jobs on the relative server queue lengths. This assumption will
be correct if all servers are always busy, but it will be incorrect if some servers are idle at any time during
the phase. This assumption can be justified because we are primarily concerned that our algorithms work
well when the system is heavily loaded, and in that case, queues will seldom be empty. The impact of
this simplification is that for lightly-loaded systems, we will overestimate the queue length at lightly-loaded
machines and send too few requests to them. In such a case, our probability distribution will be somewhat
more uniform across servers than would be ideal, and our algorithms will not be as aggressive as they could
be. Our experiments suggest that this simplification has little performance impact.
loads up to L_{j+1}. Thus, subinterval j is of length (j+1)(L_{j+1} - L_j) / (lambda * n), and during subinterval j,
the probability that an arriving request should be sent to machine i is:

P_i = 1/(j+1) if i <= j, and P_i = 0 otherwise.   (2)
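Because Equation 2 did not survive extraction cleanly, the sketch below encodes our reading of the aggressive algorithm: within subinterval j, arrivals are spread evenly over the j + 1 least loaded machines until their queues reach L_{j+1}. The function name and the schedule representation are ours.

```python
def aggressive_li_schedule(loads, lam, T):
    """Return a list of (duration, probability_vector) pairs covering one
    phase of length T, as we have reconstructed the aggressive algorithm."""
    n = len(loads)
    order = sorted(range(n), key=lambda i: loads[i])   # rank -> server index
    L = sorted(loads) + [float("inf")]                 # sentinel L_n
    rate = lam * n                                     # system-wide arrival rate
    schedule, remaining = [], T
    for j in range(n):
        length = min((j + 1) * (L[j + 1] - L[j]) / rate, remaining)
        if length <= 0:
            continue
        probs = [0.0] * n
        for rank in range(j + 1):
            probs[order[rank]] = 1.0 / (j + 1)         # spread evenly over ranks 0..j
        schedule.append((length, probs))
        remaining -= length
        if remaining <= 0:
            break
    if remaining > 0:                                  # loads equalized early
        schedule.append((remaining, [1.0 / n] * n))
    return schedule
```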
4.2 Algorithms for other update models
Adapting the Basic LI algorithm to the continuous update or update-on-access model is
simple. We use Equation 1 to calculate the probabilities P i for sending incoming requests
to each server. The only difference is that for the periodic update model this calculation is
based on the L i estimates that hold during the entire phase, but under the new models the
may change with each request. P i can now be thought of as a current estimate
of the instantaneous rate at which requests should be sent to each server.
Adapting the Aggressive LI algorithm is more problematic. We use Equation 2 to calculate
the P_i values based on the current L_i array. However, under the continuous update model,
we are effectively always "at the end of a phase" in that the information is T seconds old.
And, although the aggressive algorithm is more aggressive than the basic algorithm during
the early subintervals of a phase (e.g., j near 0), it is less aggressive during later subintervals
(e.g., j near n). Thus, the "aggressive" algorithm may actually be less aggressive than the
basic algorithm under these update models when T is large.
5 Evaluation
In this section we evaluate the algorithms under a range of update models and workloads.
Our primary methodology is to simulate the queuing systems. We model task arrivals as
a Poisson stream of rate lambda * n for a collection of n servers. When a task arrives, we send it
to one of the server queues based on the algorithm under study. Server queues follow a
first-in-first-out discipline. We select default system parameters to match those used
in Mitzenmacher's study [21] to facilitate direct comparison of the algorithms. In particular,
unless otherwise noted, n = 100 and lambda = 0.9, and we assume that each server has a service
rate of 1 and the service time for each task is exponentially distributed with a mean time of
1.
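A toy version of this simulation setup is sketched below for the periodic update model. The structure and names are ours, not the paper's simulator, and the bulletin board is refreshed lazily at the first arrival after each scheduled update, which is a small simplification.

```python
import random

def simulate(dispatch, n=100, lam=0.9, T=10.0, sim_time=20000.0, seed=1):
    """dispatch(stale_loads) -> index of the server chosen for a new job.
    Returns the average response time over the simulated jobs."""
    rng = random.Random(seed)
    pending = [[] for _ in range(n)]   # outstanding completion times per server
    free_at = [0.0] * n                # when each FIFO server drains
    stale_loads = [0] * n              # the periodically updated board
    next_update, now, total, jobs = 0.0, 0.0, 0.0, 0
    while now < sim_time:
        now += rng.expovariate(lam * n)          # Poisson arrivals, rate lambda*n
        if now >= next_update:                   # refresh the bulletin board
            for i in range(n):
                pending[i] = [c for c in pending[i] if c > now]
                stale_loads[i] = len(pending[i])
            next_update = now + T
        i = dispatch(stale_loads)
        done = max(now, free_at[i]) + rng.expovariate(1.0)  # mean service time 1
        free_at[i] = done
        pending[i].append(done)
        total += done - now
        jobs += 1
    return total / jobs
```

For example, simulate(lambda loads: min(range(len(loads)), key=lambda i: loads[i])) models the naive send-to-apparently-least-loaded policy, and plugging in the Basic LI or k-subset rules reproduces the comparisons discussed below in spirit.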
We initially examine the Basic LI and Aggressive LI algorithms under the periodic update,
continuous update, and update-on-access models and compare their performance to the k-subset
algorithms examined by Mitzenmacher. We then explore three key questions for the
LI algorithms: (1) What is the impact of bursty arrival patterns? (2) What is the impact
of misestimating the system arrival rate? (3) What is the impact of limiting the amount of
load information available to the algorithms?
Figure 2: Service time v. update delay for periodic update model.
5.1 Periodic update model
Figure 2 shows system performance under the periodic update model for the default parameters. The performance of the LI algorithms is good over a wide range of update intervals.
When T is large, the LI algorithms do not suffer the poor performance that the k-subset algorithms
encounter for any k > 1. In fact, the LI algorithms maintain a measurable advantage
over the oblivious random algorithm even for large values of T. For example, at the largest
update delays we examine, Basic LI outperforms the oblivious algorithm by 9% and Aggressive LI outperforms
the oblivious algorithm by 22%. For more modest values of T, the advantages are larger.
For example, at moderate values of T, Aggressive LI is 60% faster than any of the k-subset algorithms and
Basic LI is 41% faster than any of the k-subset algorithms.
Figure 3 details the performance of the algorithms for small values of T. Aggressive LI
outperforms all other algorithms by at least a few percent down to the smallest value of T
we examined. Basic LI is generally better than, and always at least as good as, any
k-subset algorithm over this range of T.
Figure 4 shows the performance of the system under a workload with a lighter load
than our default. When load is lighter, the need for load balancing is less pronounced and
the gains by any algorithm over random are more modest. When information is fresh, the
algorithms can perform up to a factor of two better than the oblivious algorithm. When
information is stale, the performance of the k-subset algorithms is not nearly as bad as it
was for the heavier load, although they do exhibit poor behavior compared to the oblivious
algorithm for large T. Over the entire range of staleness examined (0.1 <= T <= 200), the
Basic LI and Aggressive LI algorithms perform as well as or better than the best k-subset
or oblivious algorithm.
Figure 3: Detail: service time v. update delay for periodic update model.
Figure 4: Service time v. update delay for periodic update model. (Axes: average response time v. update interval T.)
Figure 5: Service time v. update delay for periodic update model.
Figure 5 shows the performance of the system with a smaller number of servers rather than the standard
100. The results are qualitatively similar to the results for the standard configuration.
5.2 Continuous update model
Figure 6 shows the performance of the algorithms under the continuous update model. Because
system behavior depends on the distribution of the delay parameter, we show results
for four distributions of delay, all with average value T. In order of increasing variation, they
are: constant(T), uniform(T/2 to 3T/2), uniform(0 to 2T), and exponential(T). As the earlier
discussion suggests, the Aggressive LI algorithm is actually less aggressive than the Basic
LI algorithm, and Basic LI generally outperforms Aggressive LI for this model. We will therefore
focus on the Basic LI algorithm.
Mitzenmacher notes that for a given T , the k-subset algorithms' performance improves for
distributions that contain a mix of more recent and older information. This relationship
seems present but less pronounced for the LI algorithms. As a result, as the distribution's
variability increases, the advantage of LI over the k-subset algorithms shrinks. Thus, Basic
LI seems a clear choice for the constant and uniform T distributions: for any value of T ,
its performance is as good as any of the k-subset algorithms and for any given k-subset
algorithm there is some range of T where Basic LI's performance is significantly better.
For the exponential distribution of T , however, the k-subset algorithms enjoy an advantage
of up to 16% over Basic LI. Figure 7 tests the hypothesis that the relatively poor performance
of Basic LI in this situation is because the algorithm calculates P i using the expected value of
T whereas each individual request may see significantly different values of T . In this figure,
T still varies according to the specified distribution, but rather than knowing the average
Figure 6: Service time v. update delay for continuous update model when clients only know
T, the average delay. (a) - (d) show results for different distributions of delay around the mean:
(a) constant T, (b) uniform T/2 to 3T/2, (c) uniform 0 to 2T, and (d) exponential with mean T.
(Each panel plots average response time v. update interval T.)
Figure 7: Service time v. update delay for continuous update model when clients know the
age of information actually encountered for each request. (a) - (c) show results for different
distributions of delay: (a) uniform T/2 to 3T/2, (b) uniform 0 to 2T, and (c) exponential with mean T.
(Each panel plots average response time v. update interval T.)
Figure 8: Service time v. update delay for update-on-access model.
value of T , each request knows the value of T that holds for that request, and the algorithm
calculates its P i vector using this more certain information. Compared to the performance
in Figure 6, this extra information improves performance for each distribution of T, and the
improvement becomes more pronounced for distributions with more variation. From this we
conclude that good estimates of the delay between when load information is gathered and
when a request will arrive at a server are important for getting the best performance from
the LI algorithms.
5.3 Update-on-access model
Figure 8 shows performance for the update-on-access model. In this model, we simulate
some number of clients, and each client uses the load information gathered after sending one
request to decide where to send the next request. Thus, T equals the per-client inter-request
time. To vary T for a fixed total arrival rate, we simply vary the number of clients from
which the requests are issued.
For this model, all of the algorithms perform reasonably well. It appears that the per-client
updates desynchronize the clients enough to reduce the herd effect. The Basic LI algorithm
outperforms all of the others and provides a modest speedup over a wide range of update
intervals.
Figure 9: Service time v. update delay for update-on-access model under bursty workload.
5.4 Bursty arrivals
Figure 9 shows performance under a bursty-arrival version of the update-on-access model.
As with the standard update-on-access model, each client uses the server loads discovered
during one request to route the next one. To generate our bursty-arrivals workload, rather
than assume that each client produces exponentially-distributed arrivals, we assume that
a client whose average inter-request time is T produces a burst of b closely spaced requests,
with the bursts separated by exponential(T * b) seconds. For Figure 9, we use a fixed burst size b.
The bursty workload significantly increases the performance of all of the algorithms that use
server load compared to the oblivious algorithm. Although over time, a client's picture of
server load is on average T seconds old, an average request sees a much fresher picture of
the L i vector. This suggests that it may often be possible to significantly outperform the
oblivious strategy even for challenging workloads such as internet server selection [13, 22]
where information will likely be old on average, but where a client's requests to a service are
bursty. Once again, the Basic LI algorithm is the best or tied for the best over the entire
range of T examined (0.1 <= T <= 200).
5.5 Impact of imprecise information
The primary drawback to the LI algorithms is that they require good estimates of T and lambda.
Subsection 5.2 examined the impact of uncertainty about T. In this subsection, we examine
what happens when the estimate of lambda is incorrect. We believe that servers supporting the
Figure 10: Service time v. update delay for periodic update model when clients mis-estimate
the arrival rate. (Curves show Basic LI with the load estimate scaled by factors of 0.125, 0.25, 0.5, 1, 2, 4, and 8; axes: average response time v. update interval T.)
LI algorithms would be equipped to inform clients both of their current load and of the
arrival rate of requests they anticipate. For example, a server might report the arrival rate
it had seen over some recent period of time, or it might report the maximum request rate it
anticipates encountering. However, it may be difficult for some systems to accurately predict
future request patterns based on history.
Figure 10 shows performance under the periodic update model when the LI algorithm uses
an incorrect estimate of lambda. Each line shows performance when the lambda used for calculating P_i
is multiplied by an error factor e between 1/8 and 8. If we overestimate the load, the algorithm
is more conservative than it should be and performance suffers a bit. If we underestimate
the load, the algorithm sends too many requests to the apparently-least-loaded servers, and
performance is very poor.
From this, we conclude that systems should err on the side of caution when estimating lambda.
From Figure 10, note that if lambda_estimated = 2 * lambda_actual, performance is only marginally worse than
when lambda_estimated = lambda_actual. Also note that for these experiments, lambda_actual = 0.9 and the system
would be unstable if lambda_actual >= 1.0. In other words, to overestimate lambda by a factor of two, one
would have to predict an arrival rate 1.8 times larger than could ever be sustained by the
system.
We suggest the following strategy for estimating lambda: if the system's maximum achievable
throughput is known, use that throughput as an estimate of lambda for purposes of the LI algo-
rithms. When the system is heavily loaded, that estimate will be only a little bit higher than
the actual arrival rate; when it is lightly loaded, the estimate will be far too high. But, as
we have seen, the algorithm is relatively insensitive to overestimates of arrival rate. Further-
more, overestimating the arrival rate does little harm when the system is lightly loaded. In
Figure 11: Service time v. arrival rate (lambda) for periodic update model with T = 20. The
graph compares the standard algorithms as well as a variation of the Basic LI algorithm that
overestimates lambda as the maximum achievable system throughput (lambda = 1.0).
(Curves: Basic LI with the actual lambda and Basic LI assuming lambda = 1.0; x-axis: arrival rate lambda; y-axis: average response time.)
that case, the conservative estimate of lambda tends to make the LI algorithm distribute requests
uniformly across the servers, which is an acceptable strategy when load is low. Figure 11
illustrates the effect of assuming lambda_estimated = 1.0 as we vary lambda_actual for a system with T = 20.
The two Basic LI lines, one with exact and the other with conservative estimates of lambda, are
almost indistinguishable. For all points, the difference between the two results is less than
4.5%; when lambda <= 0.7, the difference is always less than 1.5%.
5.6 Impact of reduced information
The k-subset algorithms have an additional purpose beyond attempting to cope with stale
load information: by restricting the amount of load information that clients may consider
when dispatching jobs, they may reduce the amount of load information sent across the
network. A number of theoretical [4, 7, 15, 20, 27] and empirical [11, 18] studies have
suggested that load balancing algorithms can often be quite effective even if the amount of
information they have is severely restricted.
The Basic LI algorithm can also be adapted to use a subset of server load information rather
than requiring a vector of all servers' loads. In the k-subset version of the Basic LI algorithm
(Basic LI-k), we select a random subset of k servers and use the algorithm to determine how
to bias requests among those k nodes. In particular, we modify Equation 1 to use P'_i
arrays of size k rather than n, to compute L'_tot from the smaller L'_i array, to replace n with
k, and to calculate arrive'_T = lambda * k * T. Note that, as for the standard k-subset algorithms,
we select a different subset for each incoming request.
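Reusing the basic_li_probabilities sketch from Section 4.1, the LI-k variation just described can be expressed as follows (again, the names are ours):

```python
import random

def basic_li_k_choice(loads, lam, T, k):
    """Basic LI-k: apply Equation 1 to a fresh random k-subset of servers,
    which implicitly uses arrive'_T = lambda * k * T and L'_tot."""
    subset = random.sample(range(len(loads)), k)
    sub_loads = [loads[i] for i in subset]
    probs = basic_li_probabilities(sub_loads, lam, T)
    return random.choices(subset, weights=probs)[0]
```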
Figure 12: Service time v. update delay for k-subset version of Basic LI algorithm for (a)
update-on-access model and (b) continuous update with fixed delay model.
(Curves show Basic LI with k = 2, 3, 10, and 100; axes: average response time v. update interval T.)
Figure 12 examines the impact of restricting the information available to the Basic LI algorithm.
This experiment suggests that the Basic LI-k algorithm can achieve good performance.
Under the update-on-access model, the original k-subset algorithms perform well,
and the LI-2 algorithm's performance is similar to that of the standard k-subset
algorithms. Unlike the standard k-subset algorithms, as the LI-k algorithm is given more
information, its performance becomes better. The LI-3 algorithm outperforms all of
the standard k-subset algorithms by a noticeable amount, and the full Basic LI algorithm
widens the margin.
Under the continuous update with fixed delay model (Figure 12-b), the performance of the
LI-k algorithms is also good. In this case, the original k-subset algorithms behave badly, but
the LI-k versions behave nearly identically with the Basic LI system. In this experiment, the
reduced-information versions of the LI algorithm are slightly better than the full-information
version, with smaller k consistently giving slightly better performance. We do not have an
explanation for the improving behavior with reduced information in this experiment.
From these experiments, we conclude that LI can be an effective technique in environments
where we wish to restrict how much load information is distributed through the system.
Modest amounts of load information allow the LI algorithms to achieve nearly their full
performance. Thus, LI decouples the question of how much load information should be used
from the question of how to interpret that information.
6 Conclusions
The primary contribution of this paper is to present a simple strategy for interpreting stale
load information. This approach resolves the paradox that under some algorithms, using
additional information often results in worse performance than using less information or
none at all. The Load Interpretation (LI) strategy we propose interprets load information
based on its age so that a system is essentially always better off when it has and uses more
information. When information is fresh, the algorithm aggressively addresses imbalances;
when the information is stale, the algorithm is more conservative.
We believe that this approach may open the door to safely using load information to attempt
to outperform random request distribution in environments where it is difficult to
maintain fresh information or where the system designer does not know the age of the information
a priori. Our experiments suggest that by interpreting load information, systems
can (1) match the performance of the most aggressive algorithms when load information is
fresh, (2) outperform current algorithms by as much as 60% when information is moderately
old, (3) significantly outperform random load distribution when information is older still,
and (4) avoid pathological behavior even when information is extremely old.
--R
The Next Step in Server Load Balancing.
Designing a Process Migration Facility: The Charlotte Experience.
Making Commitments in the Face of Uncertainty: How to Pick a Winner Almost Every Time.
Balanced Allocations.
DNS Support for Load Balancing.
Network Tasking in the Locus Distributed Unix System.
Towards Developing Universal Dynamic Mapping Algorithms.
Cisco Distributed Director.
Transparent Process Migration: Design Alternatives and the Sprite Implementation.
Research Toward a Heterogeneous Networked Computing Cluster: The Distributed Queuing System Version 3.0.
Adaptive Load Sharing in Homogeneous Distributed Systems.
Locating Nearby Copies of Replicated Internet Servers.
Load distribution: Implementation for the Mach Microkernel.
Analysis of the Effects of Delays on Load Sharing.
Adaptive Load Sharing in Heterogeneous Distributed Systems.
The Power of Two Choices in Randomized Load Balancing.
How Useful is Old Information.
Performance Characteristics of Mirror Servers on the Internet.
Using Idle Workstations in a Shared Computing Environment.
Process Migration in DEMOS/MP.
Experiences with the Amoeba Distributed Operating System.
Queuing Systems with Selection of the Shortest of Two Queues: an Asymptotic Approach.
Squid Internet Object Cache.
Using Smart Clients to Build Scalable Services.
Attacking the Process Migration Bottleneck.
A Load Sharing Facility for Large
--TR
--CTR
Michael Rabinovich , Zhen Xiao , Amit Aggarwal, Computing on the edge: a platform for replicating internet applications, Web content caching and distribution: proceedings of the 8th international workshop, Kluwer Academic Publishers, Norwell, MA, 2004
Mauro Andreolini , Michele Colajanni , Riccardo Lancellotti , Francesca Mazzoni, Fine grain performance evaluation of e-commerce sites, ACM SIGMETRICS Performance Evaluation Review, v.32 n.3, December 2004
Simon Fischer , Berthold Vcking, Adaptive routing with stale information, Proceedings of the twenty-fourth annual ACM symposium on Principles of distributed computing, July 17-20, 2005, Las Vegas, NV, USA
Request Redirection Algorithms for Distributed Web Systems, IEEE Transactions on Parallel and Distributed Systems, v.14 n.4, p.355-368, April
Giovanni Aloisio , Massimo Cafaro , Euro Blasi , Italo Epicoco, The Grid Resource Broker, a ubiquitous grid computing framework, Scientific Programming, v.10 n.2, p.113-119, April 2002
Mauro Andreolini , Michele Colajanni , Ruggero Morselli, Performance study of dispatching algorithms in multi-tier web architectures, ACM SIGMETRICS Performance Evaluation Review, v.30 n.2, September 2002
Suman Nath , Phillip B. Gibbons , Srinivasan Seshan, Adaptive data placement for wide-area sensing services, Proceedings of the 4th conference on USENIX Conference on File and Storage Technologies, p.4-4, December 13-16, 2005, San Francisco, CA
Bogumil Zieba , Marten van Sinderen , Maarten Wegdam, Quality-constrained routing in publish/subscribe systems, Proceedings of the 3rd international workshop on Middleware for pervasive and ad-hoc computing, p.1-8, November 28-December 02, 2005, Grenoble, France
Mauro Andreolini , Sara Casolari, Load prediction models in web-based systems, Proceedings of the 1st international conference on Performance evaluation methodolgies and tools, October 11-13, 2006, Pisa, Italy
Yu, The state of the art in locally distributed Web-server systems, ACM Computing Surveys (CSUR), v.34 n.2, p.263-311, June 2002 | server selection;load balancing;distributed systems;queuing theory;stale information |
355190 | A Protocol-Centric Approach to on-the-Fly Race Detection. | AbstractWe present the design and evaluation of a new data-race-detection technique. Our technique executes at runtime rather than post-mortem, and handles unmodified shared-memory applications that run on top of CVM, a software distributed shared memory system. We do not assume explicit associations between synchronization and shared data, and require neither compiler support nor program source. Instead, we use a binary code re-writer to instrument instructions that may access shared memory. The most novel aspect of our system is that we are able to use information from the underlying memory system implementation in order to reduce the number of comparisons made at runtime. We present an experimental evaluation of our techniques by using our system to look for data races in five common shared-memory programs. We quantify the effect of several optimizations to the basic technique: data flow analysis, instrumentation batching, runtime code modification, and instrumentation inlining. Our system correctly found races in three of the five programs, including two from a standard benchmark suite. The slowdown of this debugging technique averages less than 2.5 for our applications. | Introduction
Despite the potential savings in time and effort, data-race detection techniques are not yet an accepted tool of builders
of parallel and distributed systems. Part of the problem is surely the restricted domain in which most such mechanisms
operate, i.e., parallelizing compilers. Compiler support is usually deemed necessary because race-detection is generally
NP-complete [19].
This paper presents the design and evaluation of an on-the-fly race-detection technique for explicitly parallel shared-memory
applications. This technique is applicable to shared memory programs written for the lazy-release-consistent (LRC) [11] (see
Section 3.1) memory model. Our work differs from previous work [3, 4, 7, 9, 18, 17] in that data-race detection is performed
both on-the-fly and without compiler support. In common with other dynamic systems, we address only the problem of
detecting data races that occur in a given execution, not the more general problem of detecting all races allowed by program
semantics [19, 25]. Earlier work [21] introduced this approach by demonstrating its use on a less complex single-writer
protocol. This paper extends this earlier work through the use of a more advanced multi-writer protocol, and through a series
of optimizations to the basic technique.
We find data races by running applications on a modified version of the Coherent Virtual Memory (CVM) [13, 14] software
distributed shared memory (DSM) system. DSMs support the abstraction of shared memory for parallel applications running
on CPUs connected by general-purpose interconnects, such as networks of workstations or distributed memory machines like
the IBM SP-2. The key intuition of this work is the following:
LRC implementations already maintain enough ordering information to make a constant-time
determination of whether any two accesses are concurrent.
In addition to the LRC information, we track individual shared accesses through binary instrumentation, and run a simple
race-detection algorithm at existing global synchronization points. This last task is made much easier precisely because of
the synchronization ordering information maintained by LRC. The system can automatically generate global synchronization
points in long-running applications if there are none originally.
We used this technique to check for data races in implementations of five common parallel applications. Our system
correctly found races in three. Water-Nsquared and Spatial, from the Splash2 [27] benchmark suite, had data races that
constituted real bugs. These bugs have been reported to the Splash authors and fixed in their current version. While the races
could affect correctness, they were unlikely to occur on the platforms for which SPLASH was originally intended. Barnes,
on the other hand, had been modified locally in order to eliminate unnecessary synchronizations. The races introduced by
these modifications did not affect the correctness of the application.
Since overhead is still potentially exponential, we describe a variety of techniques that greatly reduce the number of
comparisons that need to be made. Those portions of the race-detection procedure that have the largest theoretical complexity
turn out to be only the third or fourth-most expensive component of the overall overhead.
Specifically, we show that i) we can statically eliminate over 99% of all load and store instructions as potential race
participants, ii) we eliminate over 80% of potential comparisons at runtime through use of LRC ordering information, and iii)
the average slowdown from use of our techniques is currently less than 2.8 on our applications, and could be reduced even
further with support for inlining of instrumentation code. While this overhead is still too high for the system to be used all
of the time, it is low enough for use when traditional debugging techniques are insufficient, or even to be a part of a standard
debugging toolbox for parallel programs.
2 Problem Definition
We paraphrase Adve's [1] terminology to define the data races detected by our system.
Definition 1: We define the happened-before-1 partial order, denoted hb1->, over shared accesses and synchronization acquires and releases as follows:
1. If a and b are ordinary shared memory accesses, releases, or acquires on the same processor, and a occurs before b in
program order, then a hb1-> b.
2. If a is a release on processor p_1, and b is the corresponding acquire on processor p_2, then a hb1-> b. For a lock, an
acquire corresponds to a release if there are no intervening acquires or releases of that lock. For a barrier, an acquire
corresponds to a release if the acquire is a departure from and the release is an arrival to the same instance of the
barrier.
3. If a hb1-> b and b hb1-> c, then a hb1-> c.
Given Definition 1, we define data races as follows:
Definition 2: Shared accesses a and b constitute a data race if and only if:
1. a and b both access the same word of shared data, and at least one is a write, and
2. Neither a hb1-> b nor b hb1-> a is true.
Definition 2 approximates the notion of actual data races defined by Netzer [20].
In common with most other implemented systems, both with and without compiler support, we make no claim to detect all
data races allowed by the semantics of the program (the feasible races discussed by Netzer [20]). As such, a program running
to completion on our system without data races is not a guarantee that subsequent executions will be free of data races as
well. However, we do detect all races that occur in any given execution.
Figure 1 shows two possible executions of code in which processes access shared variable x and synchronize through synchronization variable L. The access pair w_1(x), r_1(x) in the execution on the left does not constitute a data race because the accesses are separated by a release-acquire sequence on L. Because lock semantics do not enforce an ordering on lock acquisitions, however, the execution might instead have happened as shown in 1(b). In this case, r_1(x) is not ordered with respect to w_1(x), and the two therefore constitute a race.
Note that not all data races cause incorrect results to be generated. "Correct" results will be generated even for the
execution on the right if r 1 completes before w 1 is issued.
In order for the system to distinguish between the accesses in Figure 1, the system must be able to detect and understand
the semantics of all synchronization used by the programs. In practice, this requirement means that programs must use only
system-provided synchronization. Any synchronization implemented on top of the shared memory abstraction is invisible to
the system, and could result in spurious race warnings.
However, the above requirement is no stricter than that of the underlying DSM system. Programs must use system-visible
synchronization in order to run on any release-consistent system. Our data-race detection system imposes no additional
consistency or synchronization constraints.
3 Lazy Release Consistency and Data Races
Figure 1. If the ordering of accesses to lock L is non-deterministic, either (a) or (b) is a possible ordering of
events. The ordering given in (a) is not a data race because there is a release-acquire sequence between each
pair of conflicting accesses. (b) has a race between r 1 (x) and w 1 (x).
3.1 Lazy Release Consistency
Lazy release consistency [11] is a variant of eager release consistency (ERC) [8], a relaxed memory consistency that allows
the effects of shared memory accesses to be delayed until selected synchronization accesses occur. Simplifying matters
somewhat, shared memory accesses are labeled either as ordinary or as synchronization accesses, with the latter category
further divided into acquire and release accesses. Acquires and releases may be thought of as conventional synchronization
operations on a lock, but other synchronization mechanisms can be mapped on to this model as well. Essentially, ERC requires
ordinary shared memory accesses to be performed before the next release by the same processor. ERC implementations can
delay the effects of shared memory accesses as long as they meet this constraint.
Under LRC protocols, processors further delay performing modifications remotely until subsequent acquires by other
processors, and the modifications are only performed at the other processor that performed the acquire. The central intuition
of LRC is that competing accesses to shared locations in correct programs will be separated by synchronization. By
deferring coherence operations until synchronization is acquired, consistency information can be piggy-backed on existing
synchronization messages.
To do so, LRC divides the execution of each process into intervals, each identified by an interval index. Figure 2 shows
an execution of two processors, each of which has two intervals. The second interval of P_1, for example, is denoted sigma^2_1.
Each time a process executes a release or an acquire, a new interval begins and the current interval index is incremented.
We can relate intervals of different processes through a happens-before-1 partial ordering similar to that defined above for
shared accesses:
1. intervals on a single processor are totally ordered by program order,
2. interval sigma^i_p hb1-> sigma^j_q if sigma^j_q begins with the acquire corresponding to the release that concluded interval sigma^i_p, and
3. the transitive closure of the above.
Figure 2. Shared data x_1 and x_2 are assumed to be on the same page, and y_1 and y_2 are co-located on another
page. A page-based comparison of concurrent intervals would flag concurrent interval pairs (sigma^1_1, sigma^1_2) and (sigma^2_1, sigma^2_2)
as containing possibly conflicting references to common pages. Comparison of the associated bitmaps would
reveal that the former has only false sharing, while the latter has a true race in r(x_1) from sigma^2_1 and w(x_1) from sigma^2_2.
LRC protocols append consistency information to all synchronization messages. This information consists of structures
describing intervals seen by the releaser, together with enough information to reconstruct the hb1-> ordering on all visible
intervals. For example, the message granting the lock to P_2 in Figure 2 contains information about all intervals seen by P_1 at
the time of the release that had not yet been seen by P_2, i.e., sigma^1_1. The system also records the fact that sigma^1_1 hb1-> sigma^2_2.
While we discuss only locks and barriers in this paper, the notion of synchronization acquires and releases can be easily
mapped to other synchronization models as well.
3.2 Data-Race Detection in an LRC System
Intuitively, a data race is a pair of accesses that do not have intervening synchronization, such that at least one of the accesses
is a write. In Figure 2, the read of x_1 by P_1 and the write of x_1 by P_2 constitute a data race, because intervals sigma^2_1 and sigma^2_2 are concurrent (not ordered).
Detecting data races generally requires comparing each shared access against every other shared access. With an LRC
system, as with any other system based on the hb1-> partial ordering, we can limit comparisons only to accesses in pairs of concurrent
intervals. For example, the interval pair sigma^1_1 and sigma^2_2 in Figure 2 is not concurrent, and so we do not have to check further in order to
determine if there is a data race formed by accesses in those intervals. We only perform word-level comparisons if we have
first verified that the pages accessed by the two intervals overlap.
For example, assume that y_1 and y_2 of Figure 2 reside on the same page. A comparison of pages accessed by concurrent
intervals sigma^1_1 and sigma^1_2 would reveal that they access overlapping pages, i.e., the page containing y_1 and y_2. We would therefore
need to perform a bitmap comparison in order to determine if the accesses constitute false sharing or true sharing (i.e., a
data race). In this case, the answer would be false sharing because the accesses are to distinct locations. However, if P 2 's
first write were to z, a variable on a completely different page, our comparison of pages accessed by the two intervals would
reveal no overlap. No bitmap comparison would be performed, even though the intervals are concurrent.
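The comparison described in this section can be modeled compactly as follows. This is only an illustrative sketch: version vectors are represented as plain lists, and the page lists and per-page bitmaps are represented as dictionaries of word-index sets; CVM's actual data structures are C++ and considerably more compact.

```python
def intervals_concurrent(vv_a, vv_b):
    """Two intervals are concurrent iff neither version vector dominates
    the other, i.e., neither interval is hb1-ordered before its partner."""
    a_le_b = all(x <= y for x, y in zip(vv_a, vv_b))
    b_le_a = all(y <= x for x, y in zip(vv_a, vv_b))
    return not a_le_b and not b_le_a

def find_races(int_a, int_b):
    """Each interval carries a version vector 'vv' plus 'reads' and 'writes'
    maps from page number to the set of word indices accessed. Returns the
    (page, word) pairs on which a true data race is detected."""
    if not intervals_concurrent(int_a["vv"], int_b["vv"]):
        return []                                 # ordered intervals: nothing to check
    races = []
    b_pages = int_b["writes"].keys() | int_b["reads"].keys()
    # Page-level filter: only write/any-access page overlaps can hold a race.
    candidates = set(int_a["writes"].keys() & b_pages)
    candidates |= int_b["writes"].keys() & int_a["reads"].keys()
    for page in candidates:
        a_w = int_a["writes"].get(page, set())
        b_w = int_b["writes"].get(page, set())
        a_all = a_w | int_a["reads"].get(page, set())
        b_all = b_w | int_b["reads"].get(page, set())
        for w in a_all & b_all:                   # word-level (bitmap) comparison
            if w in a_w or w in b_w:              # at least one access is a write
                races.append((page, w))
    return races
```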
4 Implementation
4.1 System and its Changes
We implemented our race-detection system on top of CVM [13, 14], a software DSM that supports multiple protocols and
consistency models. Like commercially available systems such as TreadMarks [12], CVM is written entirely as a user-level
library and runs on most UNIX-like systems. Unlike TreadMarks, CVM was created specifically as a platform for protocol
experimentation.
The system is written in C++, and opaque interfaces are strictly enforced between different functional units of the system
whenever possible. The base system provides a set of classes that implement a generic protocol, lightweight threads, and
network communication. The latter functionality consists of efficient, end-to-end protocols built on top of UDP.
New shared memory protocols are created by deriving classes from the base Page and Protocol classes. Only those
methods that differ from the base class's methods need to be defined in the derived class. The underlying system calls
protocol hooks before and after page faults, synchronization, and I/O events take place. Since many of the methods are
inlined, the resulting system is able to perform within a few percent of a severely optimized system, TreadMarks, running
a similar protocol. CVM was also designed to take advantage of generalized synchronization interfaces, as well as to use
multi-threading for latency toleration. Our detection mechanism is based on CVM's multi-writer LRC protocol. This protocol
propagates modifications in the form of diffs, which are run-length encodings of modifications to a single page [12]. Diffs
are created through word-by-word comparisons of the current contents of a page with a copy of the page saved before any
modifications were made.
We made only three modifications to the basic CVM implementation: (i) we added instrumentation to collect read and write
access information, (ii) we added lists of pages read (read notices) to message types that already carry analogous
information about pages written, and (iii) we potentially add an extra message round at barriers in order to retrieve word-level
access information, if necessary.
4.2 Instrumentation
We use the ATOM [26] code-rewriter to instrument shared accesses with calls to analysis routines. ATOM allows executable
binaries to be analyzed and modified. We use ATOM to identify and instrument all loads and stores that may access shared
memory. Although ATOM is currently available only for DEC Alpha systems, a port is currently underway to Intel's x86
architecture. Moreover, tools that provide similar support for other architectures are becoming more common. Examples are
EEL [16] (SPARC and MIPS), Shade [5] (SPARC), and Etch [22] (x86).
The base instrumentation consists of a procedure call to an analysis routine that checks if the instruction accesses shared
memory. If so, the routine sets bits corresponding to the access's page and position in the page in order to indicate that the
page and word have been accessed. The analysis routine includes 10 instructions for saving
registers to and restoring registers from the stack. The actual call to the analysis routine, together with instructions that save
some registers outside the call, consume 7 or 8 more instructions. A small number of additional instructions are needed for
the batching and runtime code modification optimizations.
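The following C++ sketch shows the essence of such an analysis routine: a range check against the shared segment followed by setting the page-level flag and the word-level bit. The layout, sizes, and names (sharedBase, recordAccess, the bitmap dimensions) are assumptions made for illustration, not CVM's actual data structures.

#include <cstddef>
#include <cstdint>

constexpr std::size_t kPageSize     = 8192;                 // assumed page size
constexpr std::size_t kNumPages     = 1024;                 // assumed shared-segment size
constexpr std::size_t kWordsPerPage = kPageSize / sizeof(uint32_t);

char*   sharedBase = nullptr;                               // set when the shared segment is mapped
uint8_t pageRead   [kNumPages];                             // page-level access flags
uint8_t pageWritten[kNumPages];
uint8_t readBitmap [kNumPages][kWordsPerPage / 8];          // one bit per 32-bit word
uint8_t writeBitmap[kNumPages][kWordsPerPage / 8];

// Called by the inserted instrumentation for every load or store that might
// reference shared memory; private addresses fall through immediately.
void recordAccess(const void* addr, bool isWrite) {
    if (sharedBase == nullptr) return;
    std::ptrdiff_t off = static_cast<const char*>(addr) - sharedBase;
    if (off < 0 || static_cast<std::size_t>(off) >= kPageSize * kNumPages)
        return;                                             // not shared: nothing to record
    std::size_t page = static_cast<std::size_t>(off) / kPageSize;
    std::size_t word = (static_cast<std::size_t>(off) % kPageSize) / sizeof(uint32_t);
    if (isWrite) {
        pageWritten[page] = 1;
        writeBitmap[page][word / 8] |= uint8_t(1u << (word % 8));
    } else {
        pageRead[page] = 1;
        readBitmap[page][word / 8] |= uint8_t(1u << (word % 8));
    }
}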
Information about which pages were accessed, together with the bitmaps themselves, is placed in known locations for
CVM for use during the execution of the application.

Figure 3. The barrier algorithm has the following steps: 1) read and write notices are sent to the barrier master,
2) the barrier master identifies concurrent intervals with overlapping page access lists, 3) bitmaps are requested
for the overlapping pages from step 2, and comparisons are used to identify data races. Dotted lines
indicate events that occur only if sharing is detected in step 2.

All data structures, including bitmaps, are statically allocated in order
to reduce runtime cost. Shared memory and bitmaps are allocated at fixed memory locations in order to decrease the cost of
the instrumentation code.
4.3 Algorithm
The overall procedure for detecting data races, illustrated in Figure 3, is the following:
1. We use ATOM to instrument all shared loads and stores in the application binary.
The problem here lies in determining which references are shared and which are not. In the base case, we simply
instrument any reference that does not use the stack pointer, the global variables pointer 1 , or any register that has been
loaded with one of these values in the current basic block. We also skip library references, as we know from inspection
that our applications make no library calls that modify shared memory. In the absence of guarantees to the contrary, we
can easily instrument non-CVM libraries as well. Such instrumentation would not affect our slowdown because
our applications spend time in libraries only during initialization. Section 4.5 describes several extensions to this basic
technique that either eliminate more memory accesses as candidates for instrumentation, or decrease the cost of the
resulting instrumentation.
2. Most synchronization messages in the base CVM protocol carry consistency information in the form of interval
structures. Each interval structure contains one or more write notices that enumerate pages written during that interval.
In CVM, we augmented these interval structures to also carry read notices, or lists of pages read during that interval.
(Footnote 1: CVM assumes that all shared data is allocated dynamically.)
Interval structures also contain version vectors that identify the logical time associated with the interval, and permit
checks for concurrency.
3. Worker processes in any LRC system append interval structures (together with other consistency information) to barrier
arrival messages. At each barrier, therefore, the barrier master has complete and current information on all intervals
in the entire system. This information is sufficient for the barrier master to locally determine the set of all pairs of
concurrent intervals. Although the algorithm must potentially compare the version vector of each interval of a given
processor with the version vector of each interval of every other processor, exploiting synchronization and program
order allows many of the comparisons to be omitted.
4. For each pair of concurrent intervals, the read and write notices are checked for overlap. A data race might exist on
any page that is either written in two concurrent intervals, or read in one interval and written in the other. Such interval
pairs, together with a list of overlapping pages, are placed on the check list.
Steps 5 and 6 are performed only if this check list is non-empty, i.e., there is a data-race or there is false sharing (see
Figure
3).
5. Barrier release messages are augmented to carry requests for bitmaps corresponding to accesses covered by the check
list. Each read or write notice has a corresponding bitmap that describes precisely which words of the page were
accessed during that interval. Hence, each pair of concurrent intervals has up to four bitmaps (read and write for each
interval) that might be needed in order to detect races. These bitmaps are returned to the barrier master for each interval
pair on the check list.
6. The barrier master compares bitmaps from overlapping pages in concurrent intervals. A single bitmap comparison is
a constant time process, dependent only on page size. In the case of a read-write or write-write overlap, the algorithm
has determined that a data race exists, and prints the address of the offending race.
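As an illustration of steps 2 through 6, the following C++ sketch shows the consistency information an interval might carry to the barrier master and how concurrent interval pairs with overlapping page accesses could be gathered into the check list. The field names and the vector-clock concurrency test are assumptions of this sketch, not CVM's exact structures.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Consistency information appended to barrier arrival messages (simplified).
struct Interval {
    int proc;                       // creating process
    std::vector<uint32_t> vc;       // version vector, one entry per process
    std::vector<int> writeNotices;  // pages written during the interval
    std::vector<int> readNotices;   // pages read during the interval
};

// Two intervals are concurrent iff neither version vector dominates the other.
bool concurrent(const Interval& a, const Interval& b) {
    bool aBefore = false, bBefore = false;
    for (std::size_t i = 0; i < a.vc.size(); ++i) {
        if (a.vc[i] < b.vc[i]) aBefore = true;
        if (b.vc[i] < a.vc[i]) bBefore = true;
    }
    return aBefore && bBefore;      // incomparable => concurrent
}

// Pages written in one interval and read or written in the other.
std::vector<int> overlappingPages(const Interval& a, const Interval& b) {
    auto touches = [](const Interval& x, int page) {
        return std::find(x.readNotices.begin(),  x.readNotices.end(),  page) != x.readNotices.end()
            || std::find(x.writeNotices.begin(), x.writeNotices.end(), page) != x.writeNotices.end();
    };
    std::vector<int> result;
    for (int page : a.writeNotices)
        if (touches(b, page)) result.push_back(page);
    for (int page : b.writeNotices)
        if (touches(a, page) &&
            std::find(result.begin(), result.end(), page) == result.end())
            result.push_back(page);
    return result;
}

// An entry on the barrier master's check list: the bitmaps for these pages
// must be fetched from the constituent processes and compared word by word.
struct CheckEntry { const Interval* a; const Interval* b; std::vector<int> pages; };

std::vector<CheckEntry> buildCheckList(const std::vector<Interval>& epoch) {
    std::vector<CheckEntry> checkList;
    for (std::size_t i = 0; i < epoch.size(); ++i)
        for (std::size_t j = i + 1; j < epoch.size(); ++j) {
            if (epoch[i].proc == epoch[j].proc) continue;   // ordered by program order
            if (!concurrent(epoch[i], epoch[j])) continue;  // ordered by synchronization
            auto pages = overlappingPages(epoch[i], epoch[j]);
            if (!pages.empty())
                checkList.push_back({&epoch[i], &epoch[j], std::move(pages)});
        }
    return checkList;
}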
We currently use a very simple interval comparison algorithm to find pairs of concurrent intervals, primarily because the
major system overhead is elsewhere. The upper bound on the number of intervals per processor pair that the comparison
algorithm must compare is O(i 2 ), where i is the maximum number of intervals of a single processor since the last barrier.
However, the algorithm needs only to examine intervals created during the last barrier epoch, where a barrier epoch is the
interval of time between two successive barriers. By definition, these intervals are separated from intervals in previous epochs
by synchronization, and are therefore ordered with respect to them. Since each interval potentially needs to be compared
against every other interval of another process in the current epoch, the total comparison time per barrier is bounded by
O(n^2 i^2), where n is the number of processes and i is the maximum number of intervals of any process in the current epoch.
In practice, however, the number of such comparisons is usually quite small. Applications that use only barriers have
one interval per process per barrier epoch. More than one interval per barrier is only created through additional peer-to-peer
synchronization, such as exclusive locks. However, peer-to-peer synchronization also imposes ordering on intervals of the
synchronizing processes. For example, a lock release and subsequent acquire order intervals prior to the release with respect
to those subsequent to the acquire. Since an ordered pair of intervals is by definition not concurrent, the same act that creates
intervals also removes many interval pairs from consideration for data races. Hence, programs with many intervals between
barriers usually also have ordering constraints that reduce the number of concurrent intervals.
Note that this technique does not require frequent barriers. We can accommodate long-running, barrier-free applications
by forcing global synchronization to occur when system buffers are filled.
4.4 Example
We illustrate the use of this technique through an example based on Figure 2. Figure 2 shows a portion of the execution of two
processes, together with their synchronization and memory accesses. Only memory accesses that were not statically identified
as non-shared are shown. Further, data items x 1 and x 2 are located on the same page, and y 1 and y 2 are on another. If we
assume that barriers occur immediately before and after the accesses in this figure, then the events of the figure correspond to
a single barrier epoch. Barrier arrival messages from P 1 and P 2 will therefore contain information about four intervals,
which we denote σ_1^1, σ_1^2, σ_2^1, and σ_2^2, where σ_i^j is the jth interval of process P i. Interval structures
σ_1^1, σ_2^1, and σ_2^2 each contain a single write notice, while σ_1^2 contains two read notices. The
reads of x 1 and x 2 are represented by a single read notice because they are located on the same page.
Upon the arrival of all processes at the second barrier, there are six possible interval pairs. We can eliminate (σ_1^1, σ_1^2)
and (σ_2^1, σ_2^2) because of program order, and (σ_1^1, σ_2^2) because of synchronization order. Finally, (σ_1^1, σ_2^1) can
be eliminated because these intervals access no pages in common.
This leaves (σ_1^2, σ_2^1) and (σ_1^2, σ_2^2) as possible causes of races or false sharing. In the following, we use the
notation σ_i^j r(x) to refer to the read bitmap of page x during interval σ_i^j, and similar notation for writes. Barrier
release messages will include requests for bitmaps σ_1^2 r(y) and σ_2^1 w(y) in order to judge the first pair, and
σ_1^2 r(x) and σ_2^2 w(x) for the second pair.
The comparison of σ_1^2 r(y) and σ_2^1 w(y) will only reveal false sharing: the intervals access different data items that just
happen to be located on the same page. By contrast, the comparison of σ_1^2 r(x) and σ_2^2 w(x) will show that a data race exists
because x 1 is accessed in both intervals, and one of the accesses is a write.
4.5 Optimizations
This section describes three enhancements to our basic technique.
4.5.1 Dataflow analysis
We use a limited form of iterative interprocedural register dataflow analysis in order to identify additional non-shared memory
accesses. Our technique consists of creating a data-flow graph and associating incoming and outgoing sets of registers with
each basic block. The registers in each set define registers known not to be pointers into shared memory. During each iteration
of the analysis, the outgoing set is defined as the incoming set minus registers that are loaded in that block. Incoming sets are
then redefined as the intersection of the incoming set for the previous iteration and the outgoing sets of all preceding blocks.
The procedure continues until all incoming register sets stabilize.
Any values left in registers of the incoming register sets are known not to be pointers into shared space. Memory accesses
using such registers do not need to be instrumented.
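The following C++ sketch outlines the fixpoint computation just described. Representing register sets as 64-bit masks and seeding entry blocks with a caller-supplied safe set (e.g., the stack and global pointers) are assumptions of the sketch.

#include <cstddef>
#include <cstdint>
#include <vector>

// A basic block as seen by the analysis: which registers it (re)loads from
// memory, and the indices of its control-flow predecessors.
struct Block {
    uint64_t loadedRegs;            // bit r set => register r is loaded in this block
    std::vector<int> preds;
};

// For each block, compute the set of registers known NOT to point into shared
// memory on entry. Accesses through such a register (if it is not reloaded
// earlier in the same block) need no instrumentation.
std::vector<uint64_t> safeRegsOnEntry(const std::vector<Block>& blocks,
                                      uint64_t initiallySafe) {
    const uint64_t ALL = ~0ULL;
    std::vector<uint64_t> in(blocks.size(), ALL);   // optimistic start
    bool changed = true;
    while (changed) {                               // iterate until the sets stabilize
        changed = false;
        for (std::size_t b = 0; b < blocks.size(); ++b) {
            // outgoing set of a predecessor p = in[p] minus registers loaded in p
            uint64_t meet = blocks[b].preds.empty() ? initiallySafe : ALL;
            for (int p : blocks[b].preds)
                meet &= in[p] & ~blocks[p].loadedRegs;
            if (meet != in[b]) { in[b] = meet; changed = true; }
        }
    }
    return in;
}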
We made two main assumptions here. First, we simplify interprocedural analysis by exploiting the fact that function
arguments are usually passed through registers. Tracking parameters that are not passed in this manner entails tracking the
order of stack accesses in the caller and callee blocks. We conservatively assume that parameters passed by any other method
might name pointers to shared data. Second, we assume that there are no function calls through register pointers. Such calls
complicate data flow analysis because the destination of the calls can not be identified statically. The system could easily be
modified to disable data-flow optimization when such calls are detected.
4.5.2 Batching
Calls to instrumentation routines can be batched by combining the access checks for multiple instructions into a single
procedure call. We implemented three different types of batching for accesses within a single basic block:
- batching of accesses to the same memory location with the same reference type (either both loads or both stores)
- batching of accesses to the same memory location with different reference types
- batching of accesses to consecutive memory locations with the same reference type
The largest performance improvement is provided by the first method, i.e., batching of accesses to the same memory
location with the same reference type. Instrumentation for all but the first such access can be eliminated because we care
only that the data is accessed, we do not care how many times it is accessed. Duplicated loads or stores to the same memory
location might occur within a basic block because of register pressure or aliasing. An example of the latter case is a pair of
loads through one register, sandwiched around a store through another register. The compiler's static analysis generally has
no way of determining whether the loads and the store access the same, or distinct locations in memory. Hence, the second
load is left in the basic block.
The other batching methods are less useful because no instrumentation is eliminated. However, the instrumentation that
remains is less costly than without batching. Both the second and third methods avoid procedure calls by consolidating the
instrumentation for multiple accesses into a single routine. The resulting instrumentation also needs to check whether the
accesses are to shared memory only once. This is valid even for accesses to consecutive memory locations because we assume that
shared and non-shared regions are not located contiguously in the address space.
Instrumentation generated by the second method has the additional advantage of being able to use the same bitmap offset
calculation for all accesses.
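The following C++ sketch illustrates the first and third batching rules at instrumentation time: repeated accesses to the same (base register, offset) with the same kind are dropped, and same-kind accesses to consecutive words off the same register are folded into one call. Treating an identical register/offset pair as the same address is only safe if the register is not redefined in between; that check, the mixed-kind rule, and non-4-byte accesses are omitted here for brevity.

#include <algorithm>
#include <vector>

// One candidate memory access inside a basic block, identified at
// instrumentation time by its base register, constant offset, and kind.
struct Access { int baseReg; int offset; bool isStore; };

// One batched instrumentation call covering `count` consecutive words.
struct BatchedCall { int baseReg; int offset; int count; bool isStore; };

std::vector<BatchedCall> batch(const std::vector<Access>& accs) {
    std::vector<BatchedCall> calls;
    std::vector<Access> seen;
    for (const Access& a : accs) {
        bool duplicate = std::any_of(seen.begin(), seen.end(), [&](const Access& s) {
            return s.baseReg == a.baseReg && s.offset == a.offset && s.isStore == a.isStore;
        });
        seen.push_back(a);
        if (duplicate) continue;                 // same location, same kind: no new call
        if (!calls.empty()) {
            BatchedCall& last = calls.back();
            if (last.baseReg == a.baseReg && last.isStore == a.isStore &&
                a.offset == last.offset + last.count * 4) {   // next 32-bit word
                ++last.count;                    // extend the consecutive run
                continue;
            }
        }
        calls.push_back({a.baseReg, a.offset, 1, a.isStore});
    }
    return calls;
}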
4.5.3 Runtime code modification
We use self-modifying code to remove instrumentation from instructions that turn out to reference private data. Memory
reference instrumentation consists of a check to distinguish private and shared references, and code to record the access if it
references shared memory. With runtime code modification, we overwrite the instrumentation with no-op instructions if the
instruction references private data. The advantage is that subsequent executions of the instrumented instruction are delayed
only by the cost of executing no-op instructions, rather than the cost of executing instrumentation code that includes additional
memory references.
Modifying code at runtime requires that the text segment be writable. We unprotect the entire text segment at the beginning
of an application's execution using ATOM routines to obtain the size of the application's instrumented text segment.

Figure 4. A single diff describes all modifications to page x from both σ_2^1 and σ_2^2 because of lazy diffing.

The primary complication is caused by the separation of the data and instruction caches. We use data stores to overwrite
instrumentation code. The new instructions are seen as data by the system and can be stalled in the (write-back) data cache.
Problems remain even after the new instructions are written to memory, because stale copies might remain in the instruction
cache. We solve this problem by issuing a special PAL IMB Alpha instruction that makes the caches coherent.
A second complication is that naively overwriting the entire instrumentation call while inside the call causes the stack
to become corrupted. We get around this problem by merely saving an indication that the affected instrumentation calls
should be deleted, rather than performing the deletion immediately. The instrumentation calls are actually deleted by code at
subsequent synchronization points.
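The following C++ sketch illustrates the deferred patching just described: call sites found to reference only private data are queued, and at the next synchronization point they are overwritten with no-ops and the instruction cache is made coherent. The Alpha no-op encoding and the PAL IMB invocation shown here are architecture-specific assumptions, and the function names are illustrative.

#include <cstddef>
#include <cstdint>
#include <vector>

// Alpha instructions are 4 bytes; 0x47FF041F is the canonical no-op
// (BIS R31,R31,R31). Both details are assumptions specific to this sketch.
constexpr uint32_t kNop = 0x47FF041F;

struct PendingPatch { uint32_t* start; std::size_t numInsns; };
static std::vector<PendingPatch> pending;

// Called from the analysis routine when an instrumented instruction turns out
// to reference only private data. We cannot overwrite the call we are
// currently executing inside, so just remember it.
void schedulePatch(uint32_t* callSite, std::size_t numInsns) {
    pending.push_back({callSite, numInsns});
}

// Called at synchronization points, outside any instrumentation call.
// Requires the text segment to have been made writable at startup.
void applyPendingPatches() {
    for (const PendingPatch& p : pending)
        for (std::size_t i = 0; i < p.numInsns; ++i)
            p.start[i] = kNop;                   // overwrite the instrumentation in place
    if (!pending.empty()) {
        // The stores above go through the data cache; stale copies of the old
        // code may remain in the instruction cache. On Alpha, PAL IMB
        // (CALL_PAL 0x86) makes the two coherent.
        asm volatile("call_pal 0x86" : : : "memory");
    }
    pending.clear();
}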
This technique is applicable only if we assume that each memory access instruction exclusively references either private
or shared data. We modified our system to detect instructions that access both shared and non-shared data at runtime. This
information is only anecdotal in that it provides no guarantees of behavior during other runs. Nonetheless, this technique can
be useful if applied with caution. We used our modified version of CVM to detect instructions that access both shared and
non-shared data in two of our applications. We eliminated the offending instructions by manually cloning [6] the routines
that contained them.
4.5.4 Diffs
One optimization that we do not exploit is the use of diffs to capture write behavior. Diffs are summaries of changes made
to a single page during an interval. They are created by comparing the current copy of a page with a twin, which is a copy
saved before the modifications were begun. Hence, this diff seemingly has the same information as the write bitmaps, and
use of the diffs could allow us to dispense with instrumentation of write accesses. However, diffs are created lazily, meaning
that shared writes might be assigned to the wrong portion of a process's execution. For example, consider the process in
Figure
4. We assume that x 1 and x 2 are on the same page. Lazy diff creation means that the diff describing P 2 's first write
is not immediately created at the end of interval σ_2^1. The problem is that the subsequent write to x 1 is then folded into the
same diff, which is associated with the earlier interval. This merging does not violate consistency guarantees because LRC
systems require applications to be free of data races. However, the merging will cause the system to incorrectly believe that
a data race exists between σ_1^1 and σ_2^1.
Another disadvantage of this approach is that the use of diffs would slightly weaken our race detection technique. Diffs
contain only modifications to shared data. Locations that are overwritten with the same value do not appear in diffs, even
though their use might constitute a race.

Apps     Input Set          Sync.          Memory  Intervals/Barrier  Slowdown  Intervals Used  Bitmaps Used  Msg Ohead
Barnes   —                  barrier        32768   1                  2.42      4%              47%           11%
Spatial  512 mols, 5 iters  lock, barrier  824     —                  —         —               —             —
Water    512 mols, 5 iters  lock, barrier  344     84                 3.18      5%              19%           31%

Table 1. Application Characteristics
5 Performance
We evaluated the performance of our prototype by searching for data races in five common shared-memory applications:
Barnes (the Barnes-Hut algorithm from the Splash2 [27] benchmark suite), FFT (Fast Fourier Transform), SOR (Jacobi relaxation),
Water (a molecular dynamics simulation from the Splash2 suite), and Spatial (the same problem as Water, but a different
algorithm; also from Splash2, optimized for reduced synchronization). All applications were run on DECstations with
four 275 MHz Alpha processors, connected by a 155 MBit ATM. All performance numbers are measured on data-race free
applications, i.e., we first detected, identified, and removed data-races from Water, Barnes, and Spatial, and then measured
the numbers to be shown.
Table
1 summarizes the application inputs and runtime characteristics. "Memory size" is the size of the shared data
segment. "Intervals / Barrier" is the average number of intervals created between barriers. As the number of interval
comparisons is potentially proportional to the square of the number of intervals, this metric gives an approximate idea of the
worst-case cost of running the comparison algorithm. Roughly speaking, a new interval is created for each synchronization
acquire. Hence, barrier-only applications will have only a single interval per barrier epoch.
"Slowdown" is the runtime slowdown for each of the applications withoutany of the optimizations described in Section 4.5,
compared with an uninstrumented version of the application running on an unaltered version of CVM. The first iteration of
each application is not timed because we are interested in the steady-state behavior of long-running applications. However,
slowdowns would be even smaller if the first iteration were counted. Over the five applications, non-optimized execution
time slows only by an average factor of 3.8. This number compares quite favorably even with systems that exploit extensive
compiler analysis [17, 7]. The last three columns are discussed in Section 5.2.
Figure
5 breaks down the application slowdown into five categories (again, without the optimizations described in
Section 4.5). "CVM Mods" is the overhead added by the modifications to CVM, primarily setting up the data structures
necessary for proper data-race detection and the additional bandwidth used by the read and write notices. "Bitmaps" describes
the overhead of the extra barrier round required to retrieve bitmaps, together with the cost of the bitmap comparisons.
"Intervals" refers to the time spent using the interval comparison algorithm to identify concurrent interval pairs with
Figure 5. Breakdown of overhead for unoptimized instrumentation techniques. (Bar chart: slowdown for Barnes, FFT, SOR, Spatial, and Water, broken into Orig, Proc Call, Access Check, Intervals, Bitmaps, and CVM Mods components.)
overlapping page accesses. "Access" is the time spent inside the instrumentation's procedure calls determining whether
accesses are to shared memory, and setting the proper bits if so. "Proc Call" is the procedure call overhead for our
instrumentation. The base version of ATOM does not currently inline instrumentation; only procedure calls can be inserted
into existing code. Section 5.5 describes the performance impact of an experimental version of ATOM that can inline
instrumentation. "Orig" refers to the original running time.
Access check time dominates the overhead of the two applications that slow the most: SOR and Spatial. Neither application
has significant false sharing, or frequent synchronization. There are therefore few interval creations, and few opportunities
to run the interval comparison algorithm.
Barnes has the highest proportion of overhead spent in the interval comparison algorithm. The reason is that the process of
determining whether a given pair of intervals access the same pages is expensive. Each Barnes process accesses a significant
fraction of the entire shared address space during each interval. Our current representation of read and write notices as lists of
pages is not efficient for large numbers of pages. This overhead could be reduced by changing the representation to bitmaps
for intervals with many notices.
The following subsections describe the above overheads in more detail.
5.1 Instrumentation Costs
We instrumented each load and store that could potentially be involved in a data race. The instrumentation consists of a
procedure call to an analysis routine, and hence adds "Proc Call" and "Access Check" overheads. By summing these columns
from
Figure
5, we can see that instrumentation accounts for an average of 64.6% of the total race-detection overhead.
This overhead can be reduced by instrumenting fewer instructions. This goal is difficult because shared and private data
are all accessed using the same addressing modes, and sometimes even share the same base registers. However, we eliminate
most stack accesses by checking for use of the stack pointer as a base register. The fact that all shared data in our system is
dynamically allocated allows us to eliminate instructions that access data through the "base register", which points to the
start of the statically-allocated data segment.
Finally, we do not instrument any instructions in shared libraries because none of our applications pass segment pointers
         Load and Store Instructions
App      Stack  Static  Library  CVM    Inst.
Barnes   558    320     118057   15759  933
FFT      308    207     118057   15759  358
Spatial  758    506     118057   15782  1043
Water    613    503     118057   15759  940

Table 2. Categorization of memory access instructions. "Inst" shows the number of instructions that are actually instrumented.
to any libraries. This is the case with the majority of the scientific programs where data race detection is the most important.
We can, however, easily instrument "dirty" library functions, if necessary.
Table 2 breaks down load and store instructions into the categories that we are able to statically distinguish for the base
case, i.e., without optimizations applied. The first five columns show the number of loads and stores that are not instrumented
because they access the stack, statically-allocated data, or are in library routines, including CVM itself.
The sixth column shows the remainder. These instructions could not be eliminated, and are therefore possible data-race
participants. We use ATOM to instrument each such access with a procedure call to an access check routine that is executed
whenever the instruction is executed.
On average, we are able to statically determine that over 99% of the loads and stores in our applications are to non-shared
data. As an example, the FFT binary contains 134993 load and store instructions. Of these, 118057 instructions are in
libraries. A further 308 instructions access data through the stack pointer, and hence reference stack data. Another 15759
are in the CVM system itself. Finally, 207 instructions access data through the global pointer, a register pointing to the base
of statically allocated global memory. We can eliminate these instructions as well, since CVM allocates all shared memory
dynamically. In the entire binary, there remain only 358 memory access instructions that could possibly reference shared
memory, and hence might be a part of a data race.
Nonetheless, Section 5.3 will show that the majority of run-time calls to our analysis routines are for private, not shared,
data.
5.2 The Cost of the Comparison Algorithm
The comparison algorithm has three tasks. First, the set of concurrent interval pairs must be found. Second, this list must
be reduced to those interval pairs that access at least one page in common (e.g., one interval has a read notice for page x
and the other interval has a write notice for page x). Each such pair of concurrent intervals exhibits unsynchronized sharing.
However, the sharing may be either false sharing, i.e., the loads and stores to the page x reference different locations in x
(not a data race), or true sharing, when the loads and stores reference at least one common location at the page x (data race).
The column labeled "Intervals Used" in Table 1 shows the percentage of intervals that are involved in at least one such
concurrent interval pair. This number ranges from zero for SOR, where there is no unsynchronized sharing (true or false), to
86% for Spatial, where there is a large amount of both true and false sharing. Note that the number of possible interval pairs
is quadratic with respect to the number of intervals, so even if this stage eliminates only 14% of all intervals, as we do for
Spatial, we may be eliminating a much higher percentage of interval pairs.
The column labeled "Bitmaps Used" shows that an average of only 27% of all bitmaps must be retrieved from constituent
processors in order to identify data races by distinguishing false from true sharing. As page access lists of concurrent intervals
will only overlap in cases of false sharing or actual data races, the percentage of intervals and bitmaps involved in comparisons
is fairly small.
Note, however, the effect of bitmap and interval comparisons on Barnes. Although the absolute amount of overhead added
by the comparisons is not large, it is larger relative to the rest of the overhead than for any other application. There is also
a seeming disparity between the utilization of intervals and bitmaps for Barnes. Only 4% of intervals are used, but 47% of
the bitmaps are used. This implies that the majority of the shared pages are accessed in only a small number of intervals,
probably one or two phases of a timestep loop that has many phases.
In fact, this is exactly the case for our version of Barnes. Most of the work is done in the force and position computation
phases, which are separated by a barrier. However, each processor accesses the same bodies during both phases, and the sets
of bodies accessed by different processors are disjoint. Hence, there is no true sharing between processors across the barrier
for these computations. Since the bodies assigned to each processor during any iteration are scattered throughout the address
space, a large amount of false sharing occurs. The barrier serves to double the effect of the false sharing by splitting each of
the intervals in half, causing bitmaps to be requested twice for each page instead of once. Note that the barrier can not be
removed without a slight reorganization of code because it synchronizes updates to a few scalar global variables.
We tested this interpretation of the results by implementing the above reorganization. Removal of the barrier effectively
reduced the interval and bitmap overheads in half, reducing the overall overhead by approximately 30%.
The final column of Table 1 shows the amount of additional data needed by the race detection technique compared with
the uninstrumented system.
5.3 The Effect of Optimizations
Table
3 shows the effect of our optimizations on the number of instructions actually instrumented. "No Opt." refers to
the base case with no optimizations, "DF" is dataflow, "Batching" is self-explanatory, "Code Mod" refers to dynamic code
modification, and "All" includes all three. "DF+Batching" is included to show the synergy between dataflow and batching in
the absence of code modification. Code modification actually increases the number of instrumented sites in FFT and Water
because of cloning.
The last three columns of Table 3 show the number of times we were able to apply each batching method. Here, "2-3"
represents combining two or three instructions of the same type with consecutive addresses. "Same" represents combining
instructions of the same type and address. Finally, "Mix" is combining instructions with the same address, but different
access types. These numbers imply that batching is most applicable for applications with complex data structures and access
patterns.
         Instrumented Instructions                                     Batching
Apps     No Opt.  Data Flow  Batching  Code Mod  DF+Batching  All     2-3  Same  Mix
Barnes   933      854        730       933       655          655     63   26    76
Spatial  1043     844        904       1043      725          725     —    —     —
Water    940      776        747       1156      604          733     39   38    58

Table 3. Static instrumentation statistics.
         Millions of Instrumented Instructions
Apps     No Opt.  Data Flow  Batching  Code Mod  DF+Batching  All
Barnes   435.8    415.0      434.3     88.5      413.5        87.0
FFT      5.8      5.3        5.8       1.5       5.3          1.5
Spatial  108.3    39.5       88.2      28.5      22.1         20.0
Water    124.9    51.8       105.0     21.6      32.6         7.5

Table 4. Dynamic Optimization Statistics
Table
4 shows the effect of our optimizations on the number of instrumented instructions executed at runtime. Although
data flow analysis and batching together eliminate 29.8% and 22.9% of local reference instrumentations for Barnes and FFT,
respectively, this accounts for only 5.1% and 8.8% of instrumented references at runtime. On the other hand, only 30.5% of
Spatial's instrumentations and 35.8% of Water's instrumentations are eliminated. Yet the elimination of these instrumentations
accounts for 80.0% and 73.9% of runtime references, respectively. Clearly the effectiveness of these optimizations is heavily
application-dependent.
Figure
6 shows the effect of these optimizations on the overall slowdown of the applications. The average slowdown when
all three optimizations are applied is 2.8, which is an improvement of 26%. FFT has the lowest overhead at 22%. Figure 6
also includes bars for the inlining optimization discussed in Section 5.5.
5.4 The Cost of CVM Modifications
Figure
5 shows that almost 15.8% of our overhead comes from "CVM Mods", or modifications made to the CVM system in
order to support the race-detection algorithm. This overhead consists of the cost of setting up additional data structures for
data-race detection and the cost of the additional bandwidth consumed by read notices.
The last column of Table 1 shows the bandwidth overhead of adding read and additional write notices to synchronization
messages. Individual read and write notices are the same size, but there are typically at least five times as many reads as
writes, and read notices consume a proportionally larger amount of space than write notices. Additional write notices are
needed because write notices are no longer created lazily, even though diffs still are.
Figure 6. Optimizations. (Slowdown for Barnes, FFT, SOR, Spatial, and Water under the Base, Data Flow, Batch, Code Mod, Inlining, and All-Inlining configurations.)
The bandwidth overhead for Water is quite large because the fine-grained synchronization means that many intervals and
notices are created. By contrast, the primary cost for Spatial is that false sharing is quite prevalent, leading to a large number
of bitmap requests.
5.5 Inlining
To verify our assumption that the procedure call and access check overhead can be significantly reduced by inlining, we
used an unreleased version of ATOM, called XATOM, to inline read and write access checks. Our implementation decreases
the cost of the inlined code fragments by using register liveness analysis to identify dead registers. Dead registers are used
whenever possible to avoid spilling the contents of registers needed by the instrumentation code.
Table
5 shows the effect of inlining on overall performance, relative to the base case with no optimizations. The column
labeled "Runtime" shows the effect as a percentage of overall running time, while "Overhead" shows the same quantity
as a percentage of instrumentation overhead. The "Static" column shows the percent of inlined instructions eliminated by
register liveness analysis (mostly load and store instructions), and "Dynamic" shows the corresponding dynamic quantity.
Elimination of these instructions is very useful because the majority are memory access instructions, and hence relatively
expensive.
The improvements roughly correlate with the procedure call overhead shown in Figure 5, where an average of 13.7%
of total overhead is caused by procedure calls. However, inlining can also eliminate some of the access check overhead
because the liveness analysis can reduce register spillage. This is particularly important in the case of SOR, where most of
the overhead is in access checks.
An important question that we have not answered is how effective the other optimizations are in combination with inlining.
Inlining certainly decreases the potential of the other techniques because all of them work by decreasing the cost or number
of access checks. However, they should still be effective in combination with inlining because the remaining overhead is still
significant. Runtime code modification, in particular, would still be useful because inlining has no effect on the total number
of instructions instrumented. However, the code modification mechanism would probably need to change slightly in order to
address the fact that the inserted instrumentation is no longer just a few bytes. Inserting unconditional branches to the end of
the instrumentation code might be more effective than overwriting the inlined instrumentation with a large number of no-op instructions.

         Improvement           Register Liveness
Apps     Runtime   Overhead    Static    Dynamic
Barnes   15.0%     28.5%       21.7%     35.5%
FFT      13.8%     23.5%       16.8%     15.8%
Spatial  1.3%      1.9%        13.5%     7.3%
Water    14.0%     25.7%       17.8%     24.4%

Table 5. Inlining
6 Discussion
6.1 Reference Identification
The system currently prints the shared segment address together with the interval indexes for each detected race condition.
In combination with symbol tables, this information can be used to identify the exact variable and synchronization context.
Identifying the specific instructions involved in a race is more difficult because it requires retaining program counter
information for shared accesses. This information is available at runtime, but such a scheme would require saving program
counters for each shared access until a future barrier analysis phase determined that the access was not involved in a race.
The storage requirements would generally be prohibitive, and would also add runtime overhead.
A second approach is to use the conflicting address and corresponding barrier epoch from an initial run of the program as
input to a second run. During the second run, program counter information can be gathered for only those accesses to the
conflicted address that originate in the barrier epoch determined to involve the data race.
While runtime overhead and storage requirements can thereby be drastically reduced, the data race must occur in the
second run exactly as in the first. This will happen if the application has no general races [20], i.e., synchronization order
is deterministic. This is not the case in Water, the application for which we found data races. A solution is to modify CVM
so as to save synchronization ordering information from the first run, and to enforce the same ordering in the second run.
This is done in the work on execution replay in TreadMarks, a similar DSM. The approach of the Reconstruction of Lamport
Timestamps (ROLT) [23] technique keeps track of minimal ordering information saved during an initial run to enforce exactly
the same interleaving of shared accesses and synchronization in a second run. During the second run, a complete address
trace can be saved for post-mortem analysis, although the authors do not discuss race detection in detail. The advantage of
this approach is that the initial run incurs minimal overhead, ensuring that the tracing mechanism does not perturb the normal
interleaving of shared accesses.
The ROLT approach is complementary to the techniques described in this paper. Our system could be augmented to
include an initial synchronization-tracing phase, allowing us to eliminate our perturbation of the parallel computation in the
second, i.e., race-reference identification phase.
Currently, we use the shared page of the variable involved in the data-race from the initial run as the target page for which
we save program counters during the identification run.
6.2 Global Synchronization
The interval comparison algorithm is run only at global synchronization operations, i.e., barriers. The applications and input
sets in this study use barriers frequently enough, or otherwise synchronize infrequently enough, that the number of intervals to
be compared at barriers is quite manageable. Nonetheless, there certainly exist applications for which global synchronization
is not frequent enough to keep the number of interval comparisons to a small number. Ideally, the system would be able
to incrementally discard data races without global cooperation, but such mechanisms would increase the complexity of the
underlying consistency protocol [10]. If global synchronization is either not used, or not used often enough, we can exploit
CVM routines that allow global state to be consolidated between synchronizations. Currently, this mechanism is only used
in CVM for garbage collection of consistency information in long-running, barrier-free programs.
6.3 Accuracy
Adve [2] discusses three potential problems in the accuracy of race detection schemes in concert with weak memory systems,
or systems that support memory models such as lazy release consistency.
The first is whether to return all data races, or only "first" data races [18, 2]. First races are essentially those that are
not caused or affected by any prior race. Determining whether a given race is affected by any other effectively consists of
deciding whether the operations of any other race precede (via the happens-before-1 relation, hb1) the operations of the race in question. Our system currently
reports all data races. However, we could easily capture an approximation of first races by turning off reporting of any races
for which both accesses occur after (via hb1) the accesses of other data races.
The second problem with the accuracy of dynamic race-detection algorithms is the reliability of information in the presence
of races. Race conditions could cause wild accesses to random memory locations, potentially corrupting interval ordering
information or access bitmaps. This problem exists in any dynamic race-detection algorithm, but we expect it to occur
infrequently.
A final accuracy problem identified by Adve is that of systems that attempt to minimize space overhead by buffering only
limited trace information, possibly resulting in some races remaining undetected. Our system only discards trace information
when it has been checked for races, and hence does not suffer this limitation.
6.4 Limitations
We expect this technique to be applicable for a large class of applications. The applications in our test suite range from SOR,
which is bandwidth-limited, to Water, which both synchronizes and modifies data at a fine granularity. These applications
stress different portions of the race-detection technique: the raw cost of access instrumentation for SOR, and the complexity
and cost of dealing with large numbers of intervals for Water.
Furthermore, our applications (with the exception of SOR) have not been modified in order to reduce false sharing.
Multi-writer LRC tolerates false sharing much better than most other protocols. Applications that have been tuned for
LRC tend not to have false sharing removed. Water, Spatial, and Barnes all have large amounts of false sharing.
Nonetheless, our methodology is clearly not applicable in all situations. Chaotic algorithms, for example, tolerate races
as a means of eliminating synchronization and improving performance. The false positives caused by tolerated races can
obscure unintended races, rendering this class of applications ill-suited for our techniques.
Similarly, protocol-specific optimization techniques may cause spurious races to be reported. An earlier version of Barnes
had a barrier that enforced only anti-dependences. This barrier has been removed in our version of Barnes. The application is
still correct because LRC delays the propagation of both consistency information and data. However, these anti-dependences
are flagged by our system as data races, and can interfere with the normal operation of our technique.
The techniques that we discuss in this paper are not necessarily limited to LRC systems and applications. Our approach
is essentially to use existing synchronization-ordering information to reduce the number of comparisons that have to be
made at runtime. This information could easily be collected in other distributed systems and memory models by putting
appropriate wrappers around synchronization calls, and appending a small amount of additional information to synchronization
messages. Multiprocessors could be supported by using wrappers and appending a small amount of data to synchronization
state. Application of the rest of the techniques should be straightforward.
6.5 Further Performance Enhancements
Performance of the underlying protocol could be improved by using our write instrumentation to create diffs, rather than
using the twin and page comparison method. We did not investigate this option because of its complexity. Integrating this
mechanism with the runtime code modification, for example, would be non-trivial.
Compiler techniques could be used to expose more opportunities for batching. Loop unrolling and trace scheduling would
be particularly effective for applications such as SOR, the application with the largest overhead in our application testbed.
Finally, the interval comparison algorithm could be improved significantly. While the overhead added by the comparison
algorithm was relatively small for our applications, better worst-case bounds would be desirable for a production system. One
promising approach is the use of hierarchical comparison algorithms. For example, if two processes create a large number of
intervals through exclusively pairwise synchronization, the number of intervals to be compared with other processes could
be reduced by first aggregating the intervals that were created in isolation, and then using these aggregations to compare with
intervals of other processes.
7 Related Work
There has been a great deal of published work in the area of data race detection. However, most prior work has dealt
with applications and systems in more specialized domains. Bitmaps have been used to track shared accesses before [7],
but we know of no other language independent implementation of on-the-fly data-race detection for explicitly-parallel,
shared-memory programs.
We previously [21] described the performance of a preliminary form of our race-detection scheme that ran on top of
CVM's single-writer LRC protocol [14]. This paper describes the performance of our race-detection scheme on top of CVM's
multi-writer protocol. This protocol is a more challenging target because it usually outperforms the single-writer protocol
significantly, making it more difficult to hide the race-detection overheads. Additionally, the work described in this paper
includes several optimizations to the basic system (i.e., batching, data-flow analysis, runtime code modification, and inlining).
Our work is closely related to work already alluded to in Section 6.3, a technique described (but not implemented) by
Adve et al. [2]. The authors describe a post-mortem technique that creates trace logs containing synchronization events,
information allowing their relative execution order to be derived, and computation events. Computation events correspond
roughly to CVM's intervals. Computation events also have READ and WRITE attributes that are analogous to the read and
write page lists and bitmaps that describe the shared accesses of an interval. These trace files are used off-line to perform
essentially the same operations as in our system. We differ in that our minimally-modified system leverages off of the LRC
memory model in order to abstract this synchronization ordering information on-the-fly. We are therefore able to perform all
of the analysis on-the-fly as well, and do away with trace logs, post-mortem analysis, and much of the overhead.
Work on execution replay in TreadMarks could be used to implement race-detection schemes. The approach of the
Reconstruction of Lamport Timestamps (ROLT) [23] technique is similar to the technique we described in Section 6.1 for
identifying the instructions involved in races. Minimal ordering information saved during an initial run is used to enforce
exactly the same interleaving of shared accesses and synchronization in the second run. During the second run, a complete
address trace can be saved for post-mortem analysis, although the authors do not discuss race detection in detail. The
advantage of this approach is that the initial run incurs minimal overhead, ensuring that the tracing mechanism does not
perturb the normal interleaving of shared accesses.
The ROLT approach is complementary to the techniques described in this paper. The primary thrust of our work is in using
the underlying consistency mechanism to prune enough information on-the-fly so that post-mortem analysis is not necessary.
As such, our techniques could be used to improve the performance of the second phase of the ROLT approach. Similarly,
our system could be augmented to include an initial synchronization-tracing phase, allowing us to reduce perturbations of the
parallel computation.
Recently, work on Eraser [25] used verification of lock discipline to detect races in multi-threaded programs. Eraser's
main advantage is that it can detect races that do not actually occur in the instrumented execution. However, it does not
guarantee race-free behavior if no data-races are found, and can return false positives. Furthermore, the system does not
support distributed execution. Finally, the overhead of Eraser's approach is an order of magnitude higher than ours.
Work on data-race detection for non-distributed multi-threaded programs has also been done for RecPlay [24],
a Record/Replay system for multi-threaded programs. This work is similar to the ROLT approach discussed above, but applied to
multi-threaded programs. It uses the happens-before relation to reconstruct and replay the execution of the initial
run, and then performs access checks during the second run.
8 Conclusions
This paper has presented the design and performance of a new methodology for detecting data races in explicitly-parallel,
shared-memory programs. Our technique abstracts synchronization ordering from consistency information already maintained
by multiple-writer lazy-release-consistent DSM systems. We are able to use this information to eliminate most access
comparisons, and to perform the entire data-race detection on-the-fly.
We used our system to analyze five shared-memory programs, finding data races in three of them. Two of those data races,
in standard benchmark programs, were bugs.
The primary costs of data-race detection in our system are in tracking shared data accesses. We were able to significantly
reduce these costs by using three optimization techniques: register data-flow analysis, batching, and inlining. Nonetheless,
the majority of the runtime calls to our library are for non-shared accesses. We therefore used runtime code-modification to
dynamically rewrite our instrumentation in order to eliminate access checks for instructions that accessed only non-shared
data. By combining all of the optimizations except inlining, we were able to reduce the average slowdown for our applications
to approximately 2.8, and to only 1.2 for one application. We expect that combining inlining with the other optimizations
would reduce the slowdown even further.
While the implementation described above is specific to LRC, our general approach is not. Our system exploits synchronization
ordering to eliminate the majority of shared accesses without explicit comparison. Fine-grained comparisons are
made only where the coarse-grained comparisons fail to rule data races out. This approach could be used on systems supporting
other programming models by using "wrappers" around synchronization accesses to track synchronization ordering.
We believe that the utility of our techniques, in combination with the generality of the approach that we present, can help
data-race detection to become more widely used.
--R
A unified formalization of four shared-memory models
Detecting data races on weak memory systems.
Debugging fortran on a shared memory machine.
Race frontier: Reproducing data races in parallel program debugging.
A fast instruction-set simulator for execution profiling
A methodology for procedure cloning.
An empirical comparison of monitoring algorithms for access anomaly detection.
Memory consistency and event ordering in scalable shared-memory multiprocessors
Parallel program debugging with on-the-fly anomaly detection
Distributed Shared Memory Using Lazy Release Consistency.
Lazy release consistency for software distributed shared memory.
Treadmarks: Distributed shared memory on standard workstations and operating systems.
The Coherent Virtual Machine.
The relative importance of concurrent writers and weak consistency models.
How to make a multiprocessor computer that correctly executes multiprocess programs.
Improving the accuracy of data race detection.
On the complexity of event ordering for shared-memory parallel program executions
What are race conditions?
Online data-race detection via coherency guarantees
Instrumentation and optimization of win32/intel executables using etch.
Execution replay for TreadMarks.
Work in progress: An on-the-fly data race detector for recplay
A dynamic data race detector for multi-threaded programs
ATOM: A system for building customized program analysis tools.
The SPLASH-2 programs: Characterization and methodological considerations
--TR
--CTR
Edith Schonberg, On-the-fly detection of access anomalies, ACM SIGPLAN Notices, v.39 n.4, April 2004
Milos Prvulovic , Josep Torrellas, ReEnact: using thread-level speculation mechanisms to debug data races in multithreaded codes, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Min Xu , Rastislav Bodk , Mark D. Hill, A serializability violation detector for shared-memory server programs, ACM SIGPLAN Notices, v.40 n.6, June 2005
Chen Ding , Xipeng Shen , Kirk Kelsey , Chris Tice , Ruke Huang , Chengliang Zhang, Software behavior oriented parallelization, ACM SIGPLAN Notices, v.42 n.6, June 2007
Bohuslav Krena , Zdenek Letko , Rachel Tzoref , Shmuel Ur , Tom Vojnar, Healing data races on-the-fly, Proceedings of the 2007 ACM workshop on Parallel and distributed systems: testing and debugging, July 09-09, 2007, London, United Kingdom
Sudarshan M. Srinivasan , Srikanth Kandula , Christopher R. Andrews , Yuanyuan Zhou, Flashback: a lightweight extension for rollback and deterministic replay for software debugging, Proceedings of the USENIX Annual Technical Conference 2004 on USENIX Annual Technical Conference, p.3-3, June 27-July 02, 2004, Boston, MA | shared memory;DSM;data races;on-the-fly |
355277 | Metric operations on fuzzy spatial objects in databases. | Uncertainty management for geometric data is currently an important problem for (extensible) databases in general and for spatial databases, image databases, and GIS in particular. In these systems, spatial data are traditionally kept as determinate and sharply bounded objects; the aspect of spatial vagueness is not and cannot be treated by these systems. However, in many geometric and geographical database applications there is a need to model spatial phenomena rather through vague concepts due to indeterminate and blurred boundaries. Following previous work, we first describe a data model for fuzzy spatial objects including data types for fuzzy regions and fuzzy lines. We then, in particular, study the important class of metric operations on these objects. | INTRODUCTION
Representing, storing, querying, and manipulating spatial information
is important for many non-standard database applications. But
so far, spatial data modeling has implicitly assumed that the extent
and hence the borders of spatial objects are precisely determined
("boundary syndrome"). Special data types called spatial
data types (see [7] for a survey) have been designed for modeling
these data in databases. We will denote this kind of entities as crisp
spatial objects.
In practice, however, there is no apparent reason for the whole
boundary of a region to be determined. On the contrary, the feature
of spatial vagueness is inherent to many geographic data [3]. Many
geographical application examples illustrate that the boundaries of
spatial objects (like geological, soil, and vegetation units) can be
partially or totally indeterminate and blurred; e.g., human concepts
like "the Indian Ocean" or "Southern England" are implicitly
vague. In this paper we focus on a special kind of spatial vagueness
called fuzziness. Fuzziness captures the property of many spatial
objects in reality which do not have sharp boundaries or whose
boundaries cannot be precisely determined. Examples are natu-
ral, social, or cultural phenomena like land features with continuously
changing properties (such as population density, soil quality,
vegetation, pollution, temperature, air pressure), oceans, deserts,
English speaking areas, or mountains and valleys. The transition
between a valley and a mountain usually cannot be exactly ascertained
so that the two spatial objects "valley" and "mountain" cannot
be precisely separated and defined in a crisp way. We will designate
this kind of entities as fuzzy spatial objects.
The goal of this paper is to deal with the important class of metric
operations on fuzzy spatial objects. Examples are the area operation
on fuzzy regions or the length operation on fuzzy lines. It turns
out that their definition is not as trivial as the definition of their crisp
counterparts. The underlying formal object model follows the au-
thor's previous work in [8] and offers fuzzy spatial data types like
fuzzy regions and fuzzy lines in two-dimensional Euclidean space.
Our concept to integrate fuzzy spatial data types into databases
is to design them as abstract data types whose values can be embedded
as complex entities into databases [9] and whose definition
is independent of a particular DBMS data model. They can,
e.g., be employed as attribute types in a relation. The future design
of an SQL-like fuzzy spatial query language will profit from the
abstract data type approach since it makes the integration of data
types, predicates, and operations into SQL easier.
The metric operations described in this paper are part of a so-called
abstract model for fuzzy spatial objects. This model focuses
on the nature of the problem and on its realistic description and
solution with mathematical notations; it employs infinite sets and
does not worry about finite representations of objects as they are
needed in computers. This is done by the so-called discrete model
which transforms the infinite representations into finite ones and
which realizes the abstract operations as algorithms on these finite
representations. It is thus closer to implementation.
Section 2 discusses related work. Section 3 introduces fuzzy
spatial objects. It gives some basic concepts of fuzzy set theory
and then informally presents the design of fuzzy regions and fuzzy
lines. Section 4 describes and formalizes metric operations on
fuzzy spatial objects and identifies the two classes of real-valued
and fuzzy-valued metric operations. Section 5 draws some conclusions
and discusses future work.
2. RELATED WORK
Mainly two kinds of spatial vagueness can be identified: uncertainty
is traditionally equated with randomness and chance occurrence
and relates either to a lack of knowledge about the position
and shape of an object with an existing, real boundary (positional
uncertainty) or to the inability of measuring such an object precisely
(measurement uncertainty). Fuzziness, in which we are only
interested in this paper, is an intrinsic feature of an object itself and
describes the vagueness of an object which certainly has an extent
but which inherently cannot or does not have a precisely definable
boundary (e.g., between a mountain and a valley). This kind of
vagueness results from the imprecision of the meaning of a con-
cept. Models based on fuzzy sets have, e.g., been proposed in [1,
2, 4, 8].
3. FUZZY SPATIAL OBJECTS
In this section we very briefly and informally present the basic elements
of an abstract model for fuzzy spatial objects as it has been
formalized in [8]. The model is based on fuzzy set theory (and
fuzzy topology) whose main concepts are introduced first, as far as
they are needed in this paper. Afterwards the design of spatial data
types for fuzzy regions and fuzzy lines is shortly discussed.
3.1 Crisp Versus Fuzzy Sets
Fuzzy set theory [10] is an extension and generalization of Boolean
set theory. It replaces the crisp boundary of a classical set by a
gradual transition zone and permits partial and multiple set mem-
bership. Let X be a classical (crisp) set of objects. Membership in a
classical subset A of X can then be described by the characteristic
function $\chi_A : X \to \{0,1\}$ such that for all $x \in X$ holds $\chi_A(x) = 1$ if
and only if $x \in A$ and $\chi_A(x) = 0$ otherwise. This function can be
generalized such that all elements of X are mapped to the real interval
[0,1] indicating the degree of membership of these elements
in the set in question. We call $\mu_{\tilde{A}} : X \to [0,1]$ the membership function
of $\tilde{A}$, and the set $\tilde{A} = \{(x, \mu_{\tilde{A}}(x)) \mid x \in X\}$ is called a fuzzy set
in X. A [strict] $\alpha$-cut of a fuzzy set $\tilde{A}$ for a specified value $\alpha \in [0,1]$ is the
crisp set $A_\alpha = \{x \in X \mid \mu_{\tilde{A}}(x) \ge \alpha\}$ [$A_{\bar{\alpha}} = \{x \in X \mid \mu_{\tilde{A}}(x) > \alpha\}$]. The
strict $\alpha$-cut for $\alpha = 0$ is called the support of $\tilde{A}$, i.e., $supp(\tilde{A}) = A_{\bar{0}}$. A
fuzzy set is convex if and only if each of its $\alpha$-cuts is a convex set.
A fuzzy set $\tilde{A}$ is said to be connected if its pertaining collection of
$\alpha$-cuts is connected, i.e., for all points P, Q of $\tilde{A}$, there exists a path
lying completely within $\tilde{A}$ such that $\mu_{\tilde{A}}(R) \ge \min(\mu_{\tilde{A}}(P), \mu_{\tilde{A}}(Q))$
holds for any point R on the path.
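To make these notions concrete, the following small Python sketch (not part of the original paper; it assumes a fuzzy set given by explicit membership values over a finite, hypothetical universe) computes alpha-cuts, the support, and a convexity check:

```python
# Hypothetical discrete fuzzy set: universe X = {0,...,9} with membership values mu(x).
mu = {x: m for x, m in enumerate([0.0, 0.2, 0.5, 0.9, 1.0, 0.8, 0.4, 0.1, 0.0, 0.0])}

def alpha_cut(mu, alpha, strict=False):
    """Crisp [strict] alpha-cut: elements with membership >= alpha (> alpha if strict)."""
    return {x for x, m in mu.items() if (m > alpha if strict else m >= alpha)}

support = alpha_cut(mu, 0.0, strict=True)   # strict 0-cut = support
core = alpha_cut(mu, 1.0)                   # elements with full membership

def is_convex(mu):
    """A fuzzy subset of the integers is convex iff every alpha-cut is an interval."""
    for alpha in sorted({m for m in mu.values() if m > 0}):
        cut = sorted(alpha_cut(mu, alpha))
        if cut and cut != list(range(cut[0], cut[-1] + 1)):
            return False
    return True

print(sorted(support), sorted(core), is_convex(mu))
```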
3.2 Fuzzy Regions
We first describe some desired properties of fuzzy regions and also
discuss some differences in comparison with crisp regions. After
that, we informally outline a data type for fuzzy regions.
3.2.1 Generalization of Crisp to Fuzzy Regions
A very general model defines a crisp region as a regular closed set
[5, 6, 7] in the Euclidean space IR 2 . This model is closed under (ap-
propriately defined) geometric union, intersection, and difference.
Similar to the generalization of crisp sets to fuzzy sets, we strive
for a generalization of crisp regions to fuzzy regions on the basis of
the point set paradigm and fuzzy concepts.
Crisp regions are characterized by sharply determined boundaries
enclosing and grouping areas with equal properties or attributes
and separating different regions with different properties
from each other; hence qualitative concepts play a central role. For
fuzzy regions, besides the qualitative aspect, also the quantitative
aspect becomes important, and boundaries in most cases disappear
(between a valley and a mountain there is no boundary!). The distribution
of attribute values within a region and transitions between
different regions may be smooth or continuous. This important feature
just characterizes fuzzy regions.
A classification of fuzzy regions from an application point of
view together with application examples is given in [8].
3.2.2 Definition of Fuzzy Regions
We now briefly give an informal description of a data type for fuzzy
regions. A detailed formal definition can be found in [8]. A value
of type fregion for fuzzy regions is a regular open fuzzy set $\tilde F$ whose
membership function $\mu_{\tilde F}$ is predominantly continuous.
$\tilde F$ is defined as open due to its vagueness and its lack of boundaries.
The property of regularity avoids possible "geometric anomalies"
(e.g., isolated or dangling line or point features, missing lines and
points) of fuzzy regions. The property of $\mu_{\tilde F}$ to be predominantly
continuous models the intrinsic smoothness of fuzzy regions where
a finite number of exceptions ("continuity gaps") are allowed.
3.2.3 Fuzzy Regions As Collection of a-Level Regions
A "semantically richer" characterisation of fuzzy regions describes
them as collections of crisp a-level regions [8]. Given a fuzzy region
$\tilde F$, we represent a region $F_\alpha$ for an $\alpha \in [0,1]$ as the regular
crisp set of points whose membership values in $\tilde F$ are greater than
or equal to $\alpha$. $F_\alpha$ can have holes. The $\alpha$-level regions of $\tilde F$ are
nested, i.e., if we select membership values $1 \ge \alpha_1 > \alpha_2 > \dots > \alpha_n > \alpha_{n+1} \ge 0$,
then $F_{\alpha_1} \subseteq F_{\alpha_2} \subseteq \dots \subseteq F_{\alpha_n} \subseteq F_{\alpha_{n+1}}$.
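As an informal illustration (not from the paper), assume the membership function is discretized on a raster grid; the alpha-level regions are then boolean masks and their nesting can be checked directly:

```python
import numpy as np

# Hypothetical raster discretization of a membership function: each cell stores
# the membership value of its center point (a cone-shaped fuzzy region here).
mu = np.clip(1.0 - np.hypot(*np.meshgrid(np.linspace(-1, 1, 50),
                                         np.linspace(-1, 1, 50))), 0.0, 1.0)

def alpha_level_region(mu, alpha):
    """Boolean mask of the crisp alpha-level region F_alpha (membership >= alpha)."""
    return mu >= alpha

alphas = [1.0, 0.75, 0.5, 0.25]
regions = [alpha_level_region(mu, a) for a in alphas]

# Nesting property: a higher alpha yields a region contained in every lower one.
for higher, lower in zip(regions, regions[1:]):
    assert not np.any(higher & ~lower), "alpha-level regions must be nested"
```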
3.3 Fuzzy Lines
In this section we informally describe a data type for fuzzy lines
whose detailed formal definition can be found in [8]. We start
with a simple fuzzy line $\tilde l$ which is defined as a continuous curve
with smooth transitions of membership grades between neighboring
points of $\tilde l$, i.e., the membership function of $\tilde l$ is continuous too.
The end points of $\tilde l$ may coincide so that loops are allowed. Self-intersections
and the equality of an interior point with an end point, however,
are prohibited. If $\tilde l$ is closed, the first end point must be the leftmost
point to ensure uniqueness of representation.
Let S be the set of fuzzy simple lines. An S-complex T is a finite
subset of S such that the following conditions are fulfilled. First, the
elements of T do not intersect or overlap within their interior. Sec-
ond, they may not be touched within their interior by an endpoint of
another element. Third, isolated fuzzy simple lines are disallowed
(connectivity property). Fourth, each endpoint of an element of T
must belong to exactly one or more than two incident elements of
T to support the requirement of maximal elements and hence to
achieve minimality of representation. Fifth, more than two elements
of T sharing a common end point must all have the same membership
value at that point; otherwise we get a contradiction
saying that a point of an S-complex has more than one membership
value. All conditions together define an S-complex as a connected
planar fuzzy graph with a unique representation.
A value of the data type fline for fuzzy lines is then given as a
finite set of disjoint S-complexes.
4. METRIC OPERATIONS ON FUZZY SPATIAL OBJECTS
A very important class of operations on spatial objects are metric
operations usually getting one or two spatial objects as arguments
and yielding a numerical result. They compute metric (i.e., mea-
surable) properties such as area and perimeter and are commonly
used in the analysis of spatial phenomena. While their definitions
are well-known, clear, and relatively easy for crisp spatial objects,
it is not always obvious how to measure metric properties of fuzzy
spatial objects and hence how to define corresponding operations.
A central issue is whether the result of such a metric operation
is a crisp number or rather a "fuzzy" number. From an application
point of view both kinds of numerical results are acceptable and
even desirable. A resulting single crisp number can be interpreted
as an appropriately aggregated or weighted real value over all membership
values of a fuzzy spatial object. An arising fuzzy number
satisfies the expectation that, if the operands of a metric operation
are fuzzy, then the numerical result should be fuzzy too. Therefore,
we will consider the crisp (Section 4.1) and the fuzzy (Section 4.2)
variant of several metric operations. Both variants have in common
that they operate on fuzzy spatial objects and that they are reduced
to the ordinary definitions in the crisp case.
4.1 Crisp-Valued Numerical Operations
In this section we view operations on fuzzy spatial objects that yield
crisp numbers. We first present a special view on membership functions
which simplifies an understanding of the metric operations
discussed afterwards.
4.1.1 Membership Functions Considered as Functions
of Two Variables
Essentially, a membership function $\mu$ for a spatial object s associates
with each point $p = (x,y) \in \mathbb{R}^2$ a value $\mu(p) \in [0,1]$ indicating the degree to
which p belongs to s. A slightly modified view considers $\mu$ as a
function of the two variables x and y. This view has the benefit that
we can visualize how $\mu$ "works" in terms of its graph. The graph
of $\mu$ is the graph of the equation $z = \mu(x,y)$ and comprises all three-dimensional
points $(x, y, \mu(x,y))$.
If s is a fuzzy region, the graph of the corresponding spatial membership
function of two variables is a collection of disjoint surfaces
(one for each fuzzy face) that lie above their domain s in the Euclidean
plane. Each surface determines a solid or volume bounded
above by the function graph and bounded below by a fuzzy face
of s. If s is a fuzzy line, we obtain a collection of disjoint three-dimensional
networks each consisting of a set of three-dimensional
curves. As an example, Figure 1 shows the three-dimensional view
of the membership function of a fuzzy region
(showing the expansion of air pollution caused by a power
station, for instance). Three-dimensional visualizations of membership
functions of fuzzy spatial objects lead to an easier understanding
of the metric operations discussed in the following.
Figure 1: 3D representation of the membership function of a fuzzy region.
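A plot like Figure 1 can be reproduced for any rasterized membership function with a few lines of matplotlib (illustrative only; the cone-shaped membership function below is made up):

```python
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-1, 1, 80), np.linspace(-1, 1, 80))
mu = np.clip(1.0 - np.hypot(X, Y), 0.0, 1.0)      # membership function mu(x, y)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, mu, cmap="viridis")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("membership")
plt.show()
```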
4.1.2 Metric Operations on Fuzzy Regions
Metric operations on fuzzy regions usually yield a real value as
a result and can be summarized as a collection of functions $g: fregion \to real$
with different function names for g and, of course,
different semantics. Let $\tilde F = \{\tilde f_1, \dots, \tilde f_n\} \in fregion$, where the $\tilde f_i$ are
the fuzzy faces of $\tilde F$.
The definition of an area operator applied to $\tilde F$ requires that $\mu_{\tilde F}$ is
integrable. This condition is always fulfilled since our definition of
a fuzzy region requires that $\mu_{\tilde F}$ is continuous or at least piecewise
continuous. The area of $\tilde F$ can be defined as the volume under the
membership function $\mu_{\tilde F}$:
$$area(\tilde F) = \iint_{\mathbb{R}^2} \mu_{\tilde F}(x,y)\,dx\,dy = \iint_{supp(\tilde F)} \mu_{\tilde F}(x,y)\,dx\,dy = \sum_{i=1}^{n} \iint_{supp(\tilde f_i)} \mu_{\tilde f_i}(x,y)\,dx\,dy$$
Thus, the integration can be performed either over the entire Euclidean
plane, or equivalently over the support of $\tilde F$, which is always
bounded, or equivalently over the supports of the faces of $\tilde F$.
Note that $\mu_{\tilde F}(x,y) = 0$ for all $(x,y) \notin supp(\tilde F)$.
Crisp holes enclosed by fuzzy faces do not cause problems during
the integration process; they simply do not contribute to the double
integral. Evidently, if $\tilde F \subseteq \tilde G$, we have $area(\tilde F) \le area(\tilde G)$. In
particular, we obtain
$$area(\tilde F) = \iint_{supp(\tilde F)} \mu_{\tilde F}(x,y)\,dx\,dy \;\le\; \iint_{supp(\tilde F)} 1\,dx\,dy = area(supp(\tilde F)).$$
A special case arises if $\mu_{\tilde F}$ is piecewise constant and $\tilde F$ consists
of a finite collection $\{F_{\alpha_1}, \dots, F_{\alpha_n}\}$ of crisp $\alpha$-level regions. Then
the area of $\tilde F$ is computed as the weighted sum of the areas of all
$\alpha$-level regions $F_{\alpha_i}$:
$$area(\tilde F) = \sum_{i=1}^{n} \iint_{(x,y) \in F_{\alpha_i}} (\alpha_i - \alpha_{i+1})\,dx\,dy = \sum_{i=1}^{n} (\alpha_i - \alpha_{i+1}) \cdot area(F_{\alpha_i}) \quad \text{with } \alpha_{n+1} := 0$$
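A small numerical sketch (an illustration under a raster assumption, not part of the paper): with a membership grid of cell size h, the double integral becomes a sum over cells, and for a piecewise-constant membership function the weighted alpha-level formula yields the same value:

```python
import numpy as np

h = 0.05                                    # cell side length of the raster grid
levels = [1.0, 0.6, 0.3]                    # alpha_1 > alpha_2 > alpha_3 (piecewise constant)
X, Y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h))
r = np.hypot(X, Y)
mu = np.select([r <= 0.3, r <= 0.6, r <= 0.9], levels, default=0.0)

# area as the volume under the membership function (discretized double integral)
area_integral = mu.sum() * h * h

# area as the weighted sum over the nested alpha-level regions
alpha_next = levels[1:] + [0.0]
area_levels = sum((a - an) * np.count_nonzero(mu >= a) * h * h
                  for a, an in zip(levels, alpha_next))

print(area_integral, area_levels)           # the two values agree
```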
Next, we determine the height and the width of a fuzzy region. For
computing the height (width) of a fuzzy region $\tilde F$, all maximum
membership values along the y-axis (x-axis) and parallel to the x-axis
(y-axis) are aggregated and (their square roots are) added up, i.e.,
$\tilde F$ is projected onto the y-axis (x-axis), and the maximum membership
value is determined for each y-value (x-value):
$$height(\tilde F) = \int_{\mathbb{R}} \Big(\max_{x \in \mathbb{R}} \mu_{\tilde F}(x,y)\Big)^{\frac{1}{2}}\,dy$$
and similarly we obtain
$$width(\tilde F) = \int_{\mathbb{R}} \Big(\max_{y \in \mathbb{R}} \mu_{\tilde F}(x,y)\Big)^{\frac{1}{2}}\,dx$$
Both integrals are finite since $\tilde F$ has bounded support. Let $\tilde F = F$ be
crisp. For the height operator we obtain $\max_{x \in \mathbb{R}} \mu_F(x,y_0)^{\frac{1}{2}} = 1$
for a fixed $y_0 \in \mathbb{R}$ if the intersection of the horizontal line $y = y_0$
with F is non-empty. Otherwise, the expression yields 0.
Hence, $\int_{\mathbb{R}} \big(\max_{x \in \mathbb{R}} \mu_F(x,y)\big)^{\frac{1}{2}}\,dy$ is the measure of the
set of $y_0$'s such that F intersects the line $y = y_0$. In the case that F consists
of several connected components, each component of F gives rise
to a y-interval. Then height(F) is the total length of the union of these intervals. If F
is connected, we only obtain one interval, and height(F) is just its
length. Analogous thoughts hold for the width operator.
So far, we have not explained the meaning and the necessity of
the exponent 1/2 as part of the integrands. The introduction of this
exponent is essential since it ensures our expectation that the area
of a fuzzy region is less than or equal to its height times its width, i.e.,

LEMMA 1. $area(\tilde F) \le height(\tilde F) \cdot width(\tilde F)$.

PROOF. We can show this as follows, using that $\mu_{\tilde F}(x,y) \le \max_{x} \mu_{\tilde F}(x,y)$ and $\mu_{\tilde F}(x,y) \le \max_{y} \mu_{\tilde F}(x,y)$:
$$area(\tilde F) = \iint_{\mathbb{R}^2} \mu_{\tilde F}(x,y)\,dx\,dy
\;\le\; \iint_{\mathbb{R}^2} \Big(\max_{x \in \mathbb{R}} \mu_{\tilde F}(x,y)\Big)^{\frac{1}{2}} \Big(\max_{y \in \mathbb{R}} \mu_{\tilde F}(x,y)\Big)^{\frac{1}{2}} dx\,dy$$
$$= \int_{\mathbb{R}} \Big(\max_{x \in \mathbb{R}} \mu_{\tilde F}(x,y)\Big)^{\frac{1}{2}} dy \cdot \int_{\mathbb{R}} \Big(\max_{y \in \mathbb{R}} \mu_{\tilde F}(x,y)\Big)^{\frac{1}{2}} dx = height(\tilde F) \cdot width(\tilde F)$$
Without the exponent 1/2 the corresponding inequality would not hold in general,
which is a completely different situation and does not
correspond to our intuition. Hence, metric operations on fuzzy spatial
objects do not only depend on their geometric extent but also on
the nature of their membership values, so that a kind of "compensation"
is necessary. In accordance with the definitions for height
and width, the exponent 1/2 will also appear in the definitions of the
following operators.
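As a quick sanity check of Lemma 1 (again an informal sketch under the raster assumption used above, not the paper's own computation), height and width can be discretized with the square-root weighting and compared against the area:

```python
import numpy as np

h = 0.05
X, Y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h))
mu = np.clip(1.0 - np.hypot(X / 0.9, Y / 0.4), 0.0, 1.0)   # an elongated fuzzy region

area = mu.sum() * h * h
height = np.sqrt(mu.max(axis=1)).sum() * h   # integrate sqrt(max over x) along y
width = np.sqrt(mu.max(axis=0)).sum() * h    # integrate sqrt(max over y) along x

assert area <= height * width                 # Lemma 1: area <= height * width
print(area, height, width)
```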
If $\tilde F$ consists of a finite collection $\{F_{\alpha_1}, \dots, F_{\alpha_n}\}$ of crisp $\alpha$-level
regions, we obtain
$$height(\tilde F) = \int_{\mathbb{R}} \max\{\sqrt{\alpha_i} \mid \exists x \in \mathbb{R}: (x,y) \in F_{\alpha_i}\}\,dy$$
and
$$width(\tilde F) = \int_{\mathbb{R}} \max\{\sqrt{\alpha_i} \mid \exists y \in \mathbb{R}: (x,y) \in F_{\alpha_i}\}\,dx$$
where the maximum over the empty set is taken to be 0.
The diameter of a spatial object is defined as the largest distance
between any of its points. We will give here two definitions and
distinguish between the outer diameter and the inner diameter. For
the computation of the outer diameter we may leave the fuzzy re-
gion; for the computation of the inner diameter we have to remain
within its interior. The outer diameter of $\tilde F$ is defined as
$$outerDiameter(\tilde F) = \max_{u} \int_{\mathbb{R}} \Big(\max_{v \in \mathbb{R}} \mu_{\tilde F}(u,v)\Big)^{\frac{1}{2}}\,du$$
where u and v are any pair of orthogonal directions and where the
maximum is evaluated over all possible directions u (we can also
imagine that $\tilde F$ is smoothly rotated within the Cartesian coordinate
system). If $\tilde F = F$ is crisp, the value of u yielding the maximum is the
direction of the line along which the projection of $\tilde F$ has the largest
size. Obviously, $height(\tilde F) \le outerDiameter(\tilde F)$ and $width(\tilde F) \le outerDiameter(\tilde F)$,
and we can further conclude that $area(\tilde F) \le outerDiameter(\tilde F)^2$.
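The outer diameter can be approximated numerically by sampling directions u and projecting the (discretized) membership function onto each direction. The following sketch is illustrative only and relies on the raster assumption introduced earlier:

```python
import numpy as np

h = 0.05
X, Y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h))
mu = np.clip(1.0 - np.hypot(X / 0.9, Y / 0.4), 0.0, 1.0)

def directional_extent(mu, X, Y, theta, h):
    """Approximate the projection integral of sqrt(max_v mu) in direction theta."""
    u = X * np.cos(theta) + Y * np.sin(theta)        # coordinate along direction u
    bins = np.round(u / h).astype(int)               # group cells into u-slices
    extent = 0.0
    for b in np.unique(bins[mu > 0]):
        extent += np.sqrt(mu[bins == b].max()) * h   # sqrt of max membership in the slice
    return extent

thetas = np.linspace(0.0, np.pi, 90, endpoint=False)
extents = [directional_extent(mu, X, Y, t, h) for t in thetas]
outer_diameter = max(extents)                        # maximum over all directions u
print(outer_diameter)
```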
The inner diameter operator only relates to connected fuzzy re-
gions. Let P and Q be any two points of $\tilde F$, and let $p_{PQ}$ be a path
from P to Q that lies completely in $\tilde F$. Such a path must exist since
$\tilde F$ is assumed to be connected. The inner diameter is defined as
$$innerDiameter(\tilde F) = \max_{P,Q}\; \min_{p_{PQ}} \int_{p_{PQ}} \mu_{\tilde F}(x,y)^{\frac{1}{2}}\,ds$$
where the maximum is computed over all points P and Q of the
Euclidean plane and where the minimum is determined over all
paths $p_{PQ}$ between P and Q such that $\mu_{\tilde F}(R) \ge \min(\mu_{\tilde F}(P), \mu_{\tilde F}(Q))$ holds
for any point R on $p_{PQ}$. Since $\tilde F$ is connected, such a path always
exists.
Let $\tilde F = F$ be crisp. If $\mu_F(P) = 0$ or $\mu_F(Q) = 0$, the path $p_{PQ}$ from P to
Q will not yield the maximum. Otherwise, if both P and Q are in F,
$p_{PQ}$ must lie completely in F so that $\min_{p_{PQ}} \int_{p_{PQ}} \mu_F(x,y)^{\frac{1}{2}}\,ds =
length(p_{PQ})$. In this case, the meaning of innerDiameter(F) amounts
to its standard definition as the greatest possible distance between
any two points in F where only paths lying completely in F are
allowed.
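On a raster grid the inner diameter can be approximated by shortest paths in a 4-neighbour graph whose step costs discretize the weighted line integral. The sketch below is illustrative only and, for simplicity, ignores the admissibility condition on paths stated above:

```python
import heapq
import numpy as np

def inner_diameter(mu, h=1.0):
    """Rough grid approximation of innerDiameter for a (small) fuzzy region."""
    H, W = mu.shape
    cells = [(i, j) for i in range(H) for j in range(W) if mu[i, j] > 0]

    def farthest_cost(src):
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, (i, j) = heapq.heappop(pq)
            if d > dist[(i, j)]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and mu[ni, nj] > 0:
                    nd = d + np.sqrt(mu[ni, nj]) * h       # cost ~ integral of sqrt(mu) ds
                    if nd < dist.get((ni, nj), float("inf")):
                        dist[(ni, nj)] = nd
                        heapq.heappush(pq, (nd, (ni, nj)))
        return max(dist.values())

    # max over start cells of the largest shortest-path cost = max over pairs P, Q
    return max(farthest_cost(c) for c in cells)            # O(N^2 log N): small grids only

mu = np.where(np.hypot(*np.meshgrid(np.linspace(-1, 1, 15),
                                    np.linspace(-1, 1, 15))) <= 0.8, 0.5, 0.0)
print(inner_diameter(mu, h=0.14))
```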
The relationship between inner and outer diameter is different
for crisp and fuzzy regions. If -
is crisp, we can derive two
propositions:
LEMMA 2. If F is a connected crisp region, then
$outerDiameter(F) \le innerDiameter(F)$.
PROOF. Select a line (which is not necessarily unique) upon
which the projection of F onto the u-axis is largest. Since F is
connected, this projection is an interval, and its length is given by
outerDiameter(F). Let us assume that P and Q are those points of
F that coincide with the end points of this interval. Then the shortest
path p in F between P and Q is at least as long as the straight
line segment s joining P and Q. Segment s is at least as long as the
interval since the interval is a projection of s.
Figure 2: Example of the relationship between inner (p) and outer diameter of a connected crisp region.
An example of this relationship arises if s crosses the exterior of
F. Then path p is longer than s (Figure 2).
LEMMA 3. If F is a convex crisp region, then
$outerDiameter(F) = innerDiameter(F)$.

PROOF. Since F is convex, F is connected so that
$outerDiameter(F) \le innerDiameter(F)$ holds according to
Lemma 2. Let P and Q be the end points of the shortest path
yielding the maximum in the definition of innerDiameter(F).
Since F is convex, the shortest path in F between P and Q is the
straight line segment joining P and Q. The projection of F on this
line segment thus has length at least innerDiameter(F), so that
$outerDiameter(F) \ge innerDiameter(F)$.
For convex fuzzy regions the situation is different in the sense
that the outer diameter can even be greater than the inner diameter.
An illustration is given in Figure 3. Let $c, d \in \mathbb{R}_{>0}$ with $c < d$. We consider a
fuzzy region $\tilde F$ having the membership function
$$\mu_{\tilde F}(x,y) = \begin{cases} \alpha_1 & \text{if } x^2 + y^2 \le c^2 \\ \alpha_2 & \text{if } c^2 < x^2 + y^2 \le d^2 \\ 0 & \text{otherwise} \end{cases}$$
and thus consisting of two circular, concentric $\alpha$-level regions $F_{\alpha_1}$
and $F_{\alpha_2}$. Moreover, we assume that $\alpha_1$ is much larger than $\alpha_2$ and
that d is only slightly larger than c. If R and S are two points on
the boundary of $F_{\alpha_1}$ which are located on opposite sides so that
their straight connection passes the center of $F_{\alpha_1}$, then for $F_{\alpha_1}$ the
inner and the outer diameter are equal according to Lemma 3, and
we obtain $innerDiameter(F_{\alpha_1}) = outerDiameter(F_{\alpha_1}) = 2c$. If P
and Q are two points on the boundary of $F_{\alpha_2}$, they have the largest
distance if they are located on opposite sides so that their straight
connection passes the center of $F_{\alpha_2}$. This is just the outer diameter
of $\tilde F$, and we have $outerDiameter(\tilde F) = 2\sqrt{\alpha_1}\,c + 2\sqrt{\alpha_2}\,(d - c)$. But the
inner diameter of $\tilde F$ can be smaller if we consider a shortest path
$p_{PQ}$ from P to Q that runs through the annulus $F_{\alpha_2} \setminus F_{\alpha_1}$. This path avoids $F_{\alpha_1}$ with
its high membership value $\alpha_1$, and we can compute the value for
$\alpha_1$ so that $innerDiameter(\tilde F) < outerDiameter(\tilde F)$ holds. We must
require that $length(p_{PQ}) \cdot \sqrt{\alpha_2} < 2\sqrt{\alpha_1}\,c + 2\sqrt{\alpha_2}\,(d - c)$. This is the case
if $\sqrt{\alpha_1} > \sqrt{\alpha_2}\,(length(p_{PQ}) - 2(d - c)) \,/\, 2c$.
The observation that the outer diameter of a fuzzy region $\tilde F$ can be
larger than its inner diameter is only valid if $\tilde F$ is convex:

LEMMA 4. If $\tilde F$ is a convex fuzzy region, then
$innerDiameter(\tilde F) \le outerDiameter(\tilde F)$.

PROOF. If $\tilde F$ is a convex fuzzy region, we know due to the definition
of connectedness that $\mu_{\tilde F}(R) \ge \min(\mu_{\tilde F}(P), \mu_{\tilde F}(Q))$ holds
for any point R on the straight line segment PQ between any two
Figure 3: Example of the relationship between inner and outer diameter of a convex fuzzy region.
points P and Q. Therefore, the straight line segment PQ is an admissible path, and we obtain
$\int_{p_{PQ}} \mu_{\tilde F}(x,y)^{\frac{1}{2}}\,ds \le \int_{PQ} \mu_{\tilde F}(x,y)^{\frac{1}{2}}\,ds$ for the shortest admissible path $p_{PQ}$ between P and
Q. A projection of $\tilde F$ onto the line segment PQ in any direction
u yields $\int_{\mathbb{R}} \big(\max_{v} \mu_{\tilde F}(u,v)\big)^{\frac{1}{2}}\,du \ge \int_{PQ} \mu_{\tilde F}(x,y)^{\frac{1}{2}}\,ds$. Finally, we obtain
$$outerDiameter(\tilde F) = \max_u \int_{\mathbb{R}} \Big(\max_{v} \mu_{\tilde F}(u,v)\Big)^{\frac{1}{2}}\,du \;\ge\; \int_{PQ} \mu_{\tilde F}(x,y)^{\frac{1}{2}}\,ds \;\ge\; \min_{p_{PQ}} \int_{p_{PQ}} \mu_{\tilde F}(x,y)^{\frac{1}{2}}\,ds,$$
and taking the maximum over all P and Q yields $outerDiameter(\tilde F) \ge innerDiameter(\tilde F)$.
In all other cases where -
F is a more general fuzzy region, no
general statement can be made about the relationship between inner
and outer diameter.
Based on the concept of outer diameter we can specify two other
operators which characterize the shape of a fuzzy region. They rate
the opposite geometric properties "elongatedness" and "roundness"
and are defined in terms of the proportion of the minor outer diameter
to the major outer diameter. The first operator is given as
$$elongatedness(\tilde F) = 1 - \frac{\min_u \int_{\mathbb{R}} \big(\max_{v} \mu_{\tilde F}(u,v)\big)^{\frac{1}{2}}\,du}{\max_u \int_{\mathbb{R}} \big(\max_{v} \mu_{\tilde F}(u,v)\big)^{\frac{1}{2}}\,du}$$
The geometric property "roundness" can be regarded as the complement
of "elongatedness":
$$roundness(\tilde F) = 1 - elongatedness(\tilde F)$$
The next operator of interest computes the perimeter of a fuzzy
region $\tilde F$. Assuming that the membership function $\mu_{\tilde F}$ is continuous
and that $\frac{\partial \mu_{\tilde F}}{\partial x}$ and $\frac{\partial \mu_{\tilde F}}{\partial y}$ denote the partial derivatives of $\mu_{\tilde F}$ with respect
to x and y, respectively, we can define the perimeter of $\tilde F$ as
$$perimeter(\tilde F) = \iint_{\mathbb{R}^2} \sqrt{\Big(\frac{\partial \mu_{\tilde F}}{\partial x}\Big)^2 + \Big(\frac{\partial \mu_{\tilde F}}{\partial y}\Big)^2}\;dx\,dy = \sum_{i=1}^{n} \iint_{supp(\tilde f_i)} \sqrt{\Big(\frac{\partial \mu_{\tilde f_i}}{\partial x}\Big)^2 + \Big(\frac{\partial \mu_{\tilde f_i}}{\partial y}\Big)^2}\;dx\,dy$$
One can prove that, if $\tilde F \subseteq \tilde G$, we obtain $perimeter(\tilde F) \le perimeter(\tilde G)$. If the membership function $\mu_{\tilde F}$ is piecewise constant
so that $\tilde F$ consists of a finite collection $\{F_{\alpha_1}, \dots, F_{\alpha_n}\}$ of crisp
$\alpha$-level regions, the perimeter of $\tilde F$ is defined as
$$perimeter(\tilde F) = \sum_{k} |\alpha_i - \alpha_j| \cdot length(arc_k(\alpha_i, \alpha_j))$$
where $length(arc_k(\alpha_i, \alpha_j))$ calculates the length of the kth arc along which
the discontinuity between the $\alpha$-level regions with membership degrees
$\alpha_i$ and $\alpha_j$ occurs and where this length is weighted by the
absolute difference of $\alpha_i$ and $\alpha_j$.
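For a piecewise-constant (rasterized) membership function this weighted-boundary definition has a direct discretization: every edge between two neighbouring cells contributes the edge length weighted by the absolute difference of their membership values. This is an illustrative sketch, not the paper's algorithm:

```python
import numpy as np

def fuzzy_perimeter(mu, h=1.0):
    """Sum of |mu_i - mu_j| * edge_length over all adjacent cell pairs."""
    vertical = np.abs(np.diff(mu, axis=0)).sum() * h     # edges between rows
    horizontal = np.abs(np.diff(mu, axis=1)).sum() * h   # edges between columns
    return vertical + horizontal

h = 0.05
X, Y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h))
r = np.hypot(X, Y)
mu = np.select([r <= 0.3, r <= 0.6], [1.0, 0.4], default=0.0)

# Rough check against the analytic weighted arc lengths; note that axis-aligned
# edges overestimate the length of curved arcs (by a factor of up to 4/pi).
print(fuzzy_perimeter(mu, h), 0.6 * 2 * np.pi * 0.3 + 0.4 * 2 * np.pi * 0.6)
```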
4.1.3 Metric Operations on Fuzzy Lines
For fuzzy lines we consider operations g with the signature
$g: fline \to real$. Let $\tilde L = \{\tilde l_1, \dots, \tilde l_n\} \in fline$ be a fuzzy line. We start with
the operation length measuring the size of $\tilde L$ and first determine the
length of a simple fuzzy line $\tilde l$. We know that the membership function
of $\tilde l$ is given as $\mu_{\tilde l}$, and $\mu_{\tilde l}^{-1}((0,1])$ just yields the support of $\tilde l$,
i.e., $supp(\tilde l) = \mu_{\tilde l}^{-1}((0,1])$. Hence
$$length(\tilde l) = \int_{supp(\tilde l)} \mu_{\tilde l}(x,y)^{\frac{1}{2}}\,ds$$
Consequently, we obtain $length(\tilde L) = \sum_{i=1}^{n} length(\tilde l_i)$.
Another operation is strength which follows the principle that "a
line is as strong as its weakest link". It computes the minimum
membership value of a fuzzy line and is thus defined as
$$strength(\tilde L) = \min\{\mu_{\tilde L}(x,y) \mid (x,y) \in supp(\tilde L)\}$$
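For a simple fuzzy line given as a polyline with membership values at its vertices, the two operations can be approximated as follows (illustrative sketch; the vertex data are made up):

```python
import numpy as np

# Hypothetical simple fuzzy line: polyline vertices and their membership values.
pts = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [3.0, 0.8]])
mem = np.array([0.9, 0.7, 0.6, 0.4])

seg_len = np.linalg.norm(np.diff(pts, axis=0), axis=1)    # Euclidean segment lengths
seg_mu = 0.5 * (mem[:-1] + mem[1:])                       # membership at segment midpoints

length = float(np.sum(np.sqrt(seg_mu) * seg_len))         # integral of sqrt(mu) ds
strength = float(mem.min())                               # "as strong as its weakest link"
print(length, strength)
```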
4.2 Fuzzy-Valued Metric Operations
Another interpretation of metric operations on fuzzy spatial objects
is that they yield a fuzzy numerical value as a result. This accords
with the fuzzy character of the operand objects. We first explain
the concept of fuzzy numbers needed for a description of the metric
operations afterwards.
4.2.1 Fuzzy Numbers
The concept of a fuzzy number arises from the fact that many quantifiable
phenomena do not lend themselves to a characterisation in
terms of absolutely precise numbers. For instance, frequently our
watches are somewhat inaccurate, so we might say that the time
is now "around five o'clock''. Or we might estimate the age of an
elder man at "nearly seventy-five years". Hence, a fuzzy number
is described in terms of a central value and a linguistic modifier
like nearly, around, or approximately. Intuitively, a concept captured
by such a linguistic expression is fuzzy, because it includes
some number values on either side of its central value. Whereas
the central value is fully compatible with this concept, the numbers
around the central value are compatible with it to lesser degrees.
Such a concept can be captured by a fuzzy number defined on IR.
Its membership function should assign the degree of 1 to the central
value and lower degrees to other numbers reflecting their proximity
to the central value according to some rule. The membership function
should thus decrease from 1 to 0 on both sides of the central
value.
Formally, a fuzzy number $\tilde A$ is a convex normalized fuzzy set of
the real line $\mathbb{R}$ such that (i) there exists exactly one $x_0 \in \mathbb{R}$ with
$\mu_{\tilde A}(x_0) = 1$ ($x_0$ is called the central value of $\tilde A$), and (ii) $\mu_{\tilde A}$ is piecewise continuous. The membership
function of $\tilde A$ can also be expressed in a more explicit form. Let
$$\mu_{\tilde A}(x) = \begin{cases} f(x) & \text{for } x < b \\ 1 & \text{for } x = b \\ g(x) & \text{for } x > b \end{cases}$$
where f is a piecewise continuous function increasing
to 1 at point b, and g is a piecewise continuous function decreasing
from 1 at point b. We introduce freal as the type of all fuzzy (real)
numbers.
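As a concrete illustration (not part of the paper), a fuzzy number "around 5" can be modelled by a central value b with a left branch f increasing to 1 and a right branch g decreasing from 1; a triangular shape is the simplest choice:

```python
def triangular_fuzzy_number(b, spread):
    """Membership function of 'approximately b' with linear branches f and g."""
    def mu(x):
        if x <= b - spread or x >= b + spread:
            return 0.0
        if x <= b:
            return (x - (b - spread)) / spread     # f: increases to 1 at b
        return ((b + spread) - x) / spread         # g: decreases from 1 at b
    return mu

around_five = triangular_fuzzy_number(5.0, 1.5)
print([round(around_five(x), 2) for x in (3.0, 4.5, 5.0, 6.0, 7.0)])
```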
For the representation of the fuzzy-valued result of metric operations
we introduce two restricted kinds of fuzzy numbers. The
first kind contains numbers that are characterized by the property
that function f is lacking, i.e., these numbers only have a right-sided
membership function. The second kind comprises numbers
that are characterized by the property that function g is lacking, i.e.,
these numbers only have a left-sided membership function.
4.2.2 Metric Operations on Fuzzy Regions
For fuzzy regions we now consider fuzzy-valued operations g with
the signature $g: fregion \to freal$. We first confine ourselves to a
subset of operations $g \in \{area, perimeter, height, width, diameter\}$,
because their result can be expressed in a generic way by a
restricted fuzzy number with a right-sided membership function,
as we will see.
To determine the result of $g(\tilde F)$ for a fuzzy region $\tilde F$ we switch to the
view of $\tilde F$ as a collection of crisp $\alpha$-level regions $\{F_{\alpha_1}, \dots, F_{\alpha_n}\}$,
where n is possibly infinite. Since the regions $F_{\alpha_i}$ are crisp, we can
apply the corresponding known crisp operations $g_c$ to them. The
relationship between $g_c$ and the membership values $\alpha_i$ is given in
the following lemma:

LEMMA 5. $\alpha > \beta \Leftrightarrow g_c(F_\alpha) \le g_c(F_\beta)$.

PROOF. From the definition of a fuzzy region as a collection
of $\alpha$-level regions we know that $\alpha > \beta \Leftrightarrow F_\alpha \subseteq F_\beta$. Since $g_c$ is a
monotonically increasing function with respect to inclusion, we obtain
$F_\alpha \subseteq F_\beta \Leftrightarrow g_c(F_\alpha) \le g_c(F_\beta)$.

We now define $g(\tilde F)$ as the following fuzzy number:
$$g(\tilde F) = \{(g_c(F_{\alpha_i}), \alpha_i) \mid 1 \le i \le n\}$$
This fuzzy number has a right-sided membership function, because
for the smallest $\alpha$-level region $F_{\alpha_1}$ the membership value is $\alpha_1 = 1$,
and for all other $\alpha$-level regions $F_{\alpha_i}$ with increasing i and thus
increasing $g_c(F_{\alpha_i})$ the membership value $\alpha_i$ decreases from 1 to 0.
In particular, the support of $g(\tilde F)$ does not contain any smaller values
than $g_c(F_{\alpha_1})$. If the set of membership values of $\tilde F$ is finite, we obtain a stepwise constant and
hence piecewise continuous membership function for $g(\tilde F)$. Otherwise,
if $\mu_{\tilde F}$ is continuous, the set of membership values of $\tilde F$ is infinite, and we get a continuous
membership function for $g(\tilde F)$.
Intuitively, this result documents the vagueness of the operator
$g(\tilde F)$, because with the increase of $g_c$ the certainty and
knowledge about its correctness decreases. We can confirm with
the membership value 1 that $g_c(F_{\alpha_1})$ is the value of $g(\tilde F)$. Thus,
$g_c(F_{\alpha_1})$ is the lower bound (or the core). But the value could be
higher, and if so, we can only confirm it with a lower membership
value. The membership value here indicates the degree of imprecision
of the operation g.
Let now $g \in \{elongatedness, roundness\}$. These two operations
cannot be treated as fuzzy numbers with right-sided membership
functions since they are not monotonically increasing, i.e., in general
$F_\alpha \subseteq F_\beta$ does not imply $g_c(F_\alpha) \le g_c(F_\beta)$.
How to measure these two operations as fuzzy-valued numbers is
currently an open issue. It is even doubtful whether they can be
represented as general fuzzy numbers at all.
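The construction of the fuzzy-valued result for the monotone operations can be sketched as follows (an informal illustration using the raster representation from earlier; it returns the pairs $(g_c(F_{\alpha_i}), \alpha_i)$ that make up the right-sided fuzzy number, here for the area):

```python
import numpy as np

h = 0.05
X, Y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h))
r = np.hypot(X, Y)
mu = np.select([r <= 0.3, r <= 0.6, r <= 0.9], [1.0, 0.6, 0.3], default=0.0)

def fuzzy_area(mu, h):
    """Fuzzy-valued area: crisp area of each alpha-level region paired with alpha."""
    alphas = sorted({float(a) for a in np.unique(mu) if a > 0}, reverse=True)
    return [(np.count_nonzero(mu >= a) * h * h, a) for a in alphas]

for value, alpha in fuzzy_area(mu, h):
    print(f"possible area {value:.3f} with membership {alpha:.1f}")
```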
4.2.3 Metric Operations on Fuzzy Lines
Each operation $g \in \{length, strength\}$ on fuzzy lines has the signature
$g: fline \to freal$. Let $\tilde L$ be a fuzzy line, and let $\{L_{\alpha_1}, \dots, L_{\alpha_n}\}$
be a collection of crisp $\alpha$-cuts where n is possibly infinite. For
both operations we pursue a similar strategy as for the operations
on fuzzy regions. The main difference in the definition of both operations
is that length is an increasing function whereas strength
is a decreasing function. Hence, analogously to Lemma 5 we can
conclude that
$$\alpha_i > \alpha_j \Leftrightarrow length_c(L_{\alpha_i}) \le length_c(L_{\alpha_j})$$
But since $L_\alpha \subseteq L_\beta \Leftrightarrow strength_c(L_\alpha) \ge strength_c(L_\beta)$, we obtain
$$\alpha_i > \alpha_j \Leftrightarrow strength_c(L_{\alpha_i}) \ge strength_c(L_{\alpha_j})$$
Nevertheless, we can define the value of $g(\tilde L)$ in the same manner
for both operations as the fuzzy number
$$g(\tilde L) = \{(g_c(L_{\alpha_i}), \alpha_i) \mid 1 \le i \le n\}$$
For length this leads to a fuzzy number with a right-sided membership
function, and for strength we obtain a fuzzy number with a
left-sided membership function.
5. CONCLUSIONS AND FUTURE WORK
As part of an abstract model for fuzzy spatial objects in the Euclidean
space, we have defined metric operations that can have either
real-valued or fuzzy-valued results. All operations considered
have been unary functions. For future work we will also have to
consider binary metric operations, and here we will have, in par-
ticular, to deal with fuzzy distance and fuzzy direction operations.
Having achieved a formal and rather complete data model for fuzzy
spatial objects, we can transform it to a discrete model where we
have to think about finite representations for the objects and algorithms
for the operations. These are efforts that should later lead to
an efficient implementation.
6. REFERENCES
Fuzzy Set Theoretic Approaches for Handling Imprecision in Spatial Analysis.
Natural Objects with Indeterminate Boundaries
Geographic Objects with Indeterminate Boundaries.
Qualitative Spatial Reasoning: A Semi-Quantitative Approach Using Fuzzy Logic
Vague Regions.
Spatial Data Types for Database Systems - Finite Resolution Geometry for Geographic Information Systems
Uncertainty Management for Spatial Data in Databases: Fuzzy Spatial Data Types.
Inclusion of New Types in Relational Database Systems.
Fuzzy Sets.
--TR | fuzzy sets;fuzzy-valued metric operation;fuzzy line;real-valued metric operation;fuzzy region;fuzzy spatial data type |
355442 | The Volumetric Barrier for Semidefinite Programming. | We consider the volumetric barrier for semidefinite programming, or "generalized" volumetric barrier, as introduced by Nesterov and Nemirovskii. We extend several fundamental properties of the volumetric barrier for a polyhedral set to the semidefinite case. Our analysis facilitates a simplified proof of self-concordance for the semidefinite volumetric barrier, as well as for the combined volumetric-logarithmic barrier for semidefinite programming. For both of these barriers we obtain self-concordance parameters equal to those previously shown to hold in the polyhedral case. | Introduction
This paper concerns the volumetric barrier for semidefinite programming. The volumetric
barrier for a polyhedral set A is an m \Theta n matrix, was introduced
by Vaidya (1996). Vaidya used the volumetric barrier in the construction of a cutting plane
algorithm for convex programming; see also Anstreicher (1997b, 1999a, 1999b). Subsequently
Vaidya and Atkinson (1993) (see also Anstreicher (1997a)) used a hybrid combination of the
volumetric and logarithmic barriers for P to construct an O(m 1=4 n 1=4 L)-iteration algorithm for
a linear programming problem defined over P, with integer data of total bit size L. For m AE n
this complexity compares favorably with O(
mL), the best known iteration complexity for
methods based on the logarithmic barrier.
Nesterov and Nemirovskii (1994, Section 5.5) proved self-concordance results for the volu-
metric, and combined volumetric-logarithmic, barriers that are consistent with the algorithm
complexities obtained in Vaidya and Atkinson (1993). In fact Nesterov and Nemirovskii (1994)
obtain results for extensions of the volumetric and combined barriers to a set of the form
and C are m \Theta m symmetric matrices, and -
denotes the semidefinite ordering. The set S is a strict generalization of P, since P can be represented
by using diagonal matrices in the definition of S. Optimization over a set of the form
S is now usually referred to as semidefinite programming; see for example Alizadeh (1995) or
Vandenberghe and Boyd (1996). It is well-known (see Nesterov and Nemirovskii (1994)) that
an extension of the logarithmic barrier to S obtains an m-self-concordant barrier. In Nesterov
and Nemirovskii (1994) it is also shown that semidefinite extensions of the volumetric, and
combined volumetric-logarithmic barrier are O(
mn), and O(
mn), self-concordant barriers
for S, respectively.
The self-concordance proofs in Nesterov and Nemirovskii (1994, Section 5.5) are extremely
technical, and do not obtain the constants that would be needed to actually implement algorithms
using the barriers. Simplified proofs of self-concordance for the volumetric and combined
barriers for P are obtained in Anstreicher (1997a). In particular, it is shown these barriers
are 225
mn, and 450
mn self-concordant barriers for P, respectively. The proofs of these
self-concordance results use a number of fundamental properties of the volumetric barrier established
in Anstreicher (1996, 1997a). Unfortunately, however, the analysis of Anstreicher
(1997a) does not apply to the more general semidefinite constraint defining S, as considered
in Nesterov and Nemirovskii (1994). With the current activity in semidefinite programming
the extension of results for the volumetric and combined barriers to S is of some interest. For
example, in Nesterov and Nemirovskii (1994, p.204) it is argued that with a large number
of low-rank quadratic constraints, the combined volumetric-logarithmic barrier applied to a
semidefinite formulation obtains a lower complexity than the usual approach of applying the
logarithmic barrier directly to the quadratic constraints.
The purpose of this paper is to extend the analysis of the volumetric and combined barriers
in Anstreicher (1996, 1997a) to the semidefinite case. This analysis is by necessity somewhat
complex, but in the end we obtain semidefinite generalizations for virtually all of the fundamental
results in Anstreicher (1996, 1997a). These include:
ffl The semidefinite generalization of the matrix Q(x) having
where V (\Delta) is the volumetric barrier.
ffl The semidefinite generalization of the matrix \Sigma, which in the polyhedral case is the
diagonal matrix Representations of rV (x) and Q(x) in terms of \Sigma clearly
show the relationship with the polyhedral case (see Table 1, at the end of Section 3).
ffl Semidefinite generalizations of fundamental inequalities between Q(x) and the Hessian
of the logarithmic barrier (see Theorems 4.2 and 4.3).
ffl Self-concordance results for the volumetric, and combined, barriers identical to those
obtained for the polyhedral case. In particular, we prove that these barriers are 225
mn,
and 450
mn self-concordant barriers for S, respectively.
The fact that we obtain self-concordance results identical to those previously shown to
hold in the polyhedral case is somewhat surprising, because one important element in the
analysis here is significantly different than in Anstreicher (1997a). In Anstreicher (1997a), self-
concordance is established by proving a relative Lipschitz condition on the Hessian r 2 V (\Delta).
This proof is based on Shur product inequalities, and an application of the Gershgorin circle
theorem. The use of the Lipschitz condition is attractive because it eliminates the need to
explicitly consider the third directional derivatives of the volumetric barrier. We have been
unable to extend this proof technique to the semidefinite case, however, and consequently here
we explicitly consider the third directional derivatives of V (\Delta). The proof of the main result
concerning these third derivatives (Theorem 5.1) is based on properties of Kronecker products.
Despite the fact that on this point the analytical techniques used here and in Anstreicher
(1997a) are quite different, the final self-concordance results are identical.
An outline of the paper follows. In the next section we briefly consider some mathematical
preliminaries. The most significant of these are well-known properties of the Kronecker prod-
uct, which we use extensively throughout the paper. In Section 3 we define the logarithmic,
volumetric, and combined barriers for S, and state the main self-concordance theorems. The
proofs of these results are deferred until Section 5. Section 4 considers a detailed analysis of the
volumetric barrier for S. We first obtain Kronecker product representations for the gradient
and Hessian of V (\Delta), which are then used to prove a variety of results generalizing those in
Anstreicher (1996, 1997a). Later in the section the matrix \Sigma is defined, and alternative representations
of rV (x) and Q(x) in terms of \Sigma are obtained (see Table 1). Section 5 considers
the proofs of self-concordance for the volumetric and combined barriers. The main work here
is to obtain Kronecker product representations for the third directional derivatives of V (\Delta), and
then prove a result (Theorem 5.1) relating the third derivatives to Q(x).
Preliminaries
In this section we briefly consider several points of linear algebra and matrix calculus that will
be required in the sequel. To begin, let A and B be m \Theta m matrices. We use tr(A) to denote
the trace of A, and A ffl B to denote the matrix inner product
Let oe A denote the vector of singular values of A, that is, the positive square roots of the
eigenvalues of A T A. The Frobenius norm of A is then and the
spectral norm is We say that a matrix A is positive semidefinite (psd) if A is
symmetric, and has all non-negative eigenvalues. We use - to denote the semidefinite ordering
for symmetric matrices: A - B if A \Gamma B is psd. For a vector is the n \Theta n
diagonal matrix with Diag(v) for each i. We will make frequent use of the following
elementary properties of tr(\Delta). Parts (1) and (3) of the following proposition are well-known,
and parts (2) and (4) follow easily from (1) and (3), respectively.
Proposition 2.1 Let A and B be m \Theta m matrices. Then
1.
2. If A is symmetric, then
3. If A and B are psd, then A ffl B - 0, and A ffl
4. If A - 0 and
Let A and B be m \Theta n, and k \Theta l, matrices, respectively. The Kronecker product of A
and B, denoted
A\Omega B, is the mk \Theta nl block matrix whose block is a ij B,
n. For our purposes it is also very convenient to define a "symmetrized" Kronecker
product:
For an m \Theta n matrix A, mn is the vector formed by "stacking" the columns of
A one atop another, in the natural order. The following properties of the Kronecker product
are all well known, see for example Horn and Johnson (1991), except for (2), which follows
immediately from (1) and the definition
of\Omega S .
Proposition 2.2 Let A, B, C, and D be conforming matrices. Then
1.
(A\Omega B)(C\Omega
AC\Omega BD;
2.
3.
4. If A and B are nonsingular, then
A\Omega B is nonsingular, and
5.
6. If A and B are psd, then
A\Omega B is psd.
Lastly we consider two simple matrix calculus results. Let X be a nonsingular matrix with
(see for example Graham (1981, p.75)),
@
and also (see for example Graham (1981, p.64)),
@
where e i denotes the ith elementary vector.
3 Main Results
Let G be a closed convex subset of ! n , and let F (\Delta) be a C 3 , convex mapping from Int(G) to
!, where Int(\Delta) denotes interior. Then (Nesterov and Nemirovskii (1994)) F (\Delta) is called a #-
self-concordant barrier for G if F (\Delta) tends to infinity for any sequence approaching a boundary
point of G,
for every x 2
sup
As shown by Nesterov and Nemirovskii (1994, Theorem 3.2.1), the existence of a #-self-
concordant barrier for G implies that a linear, or convex quadratic, objective can be minimized
on G to within a tolerance ffl of optimality using O(
iterations of Newton's method.
Consider a set S ae ! n of the form
where A i , and C are m \Theta m symmetric matrices. We assume throughout that the
matrices fA i g are linearly independent, and that a point x with S(x) - 0 exists. It is then
easy to show that 0g. The logarithmic barrier for S is the function
defined on the interior of S. As shown by Nesterov and Nemirovskii (1994, Proposition 5.4.5),
f(\Delta) is an m-self-concordant barrier for S, implying the existence of polynomial-time interior-point
algorithms for linear, and convex quadratic, semidefinite programming.
The volumetric barrier V (\Delta) for S, as defined in Nesterov and Nemirovskii (1994, Section
5.5), is the function
The first main result of the paper is the following improved characterization of the self-
concordance of V (\Delta).
Theorem 3.1
each A i , and C,
are m \Theta m symmetric matrices. Then 225 m 1=2 V (\Delta) is a #-self-concordant barrier for S, for
Theorem 3.1 generalizes a result for the polyhedral volumetric barrier (Anstreicher (1997a,
Theorem 5.1)), and provides an alternative to the semidefinite self-concordance result of Nesterov
and Nemirovskii (1994, Theorem 5.5.1). It is worthwhile to note that in fact the analysis
in Nesterov and Nemirovskii (1994, Section 5.5) does not apply directly to the barrier V (\Delta) for
S as given here, because Nesterov and Nemirovskii assume that the "right-hand side" matrix
C is zero. In practice, this assumption can be satisfied by extending S to the cone
and then intersecting K with the linear constraint x recover S. The analysis in
Nesterov and Nemirovskii (1994) would then be applied to the volumetric barrier -
V (\Delta) for K.
The advantage of working with K is that some general results of Nesterov and Nemirovskii
can then be applied, because -
V (\Delta) is (n 1)-logarithmically homogenous; see Nesterov and
Nemirovskii (1994, Section 2.3.3). (For example Theorem 4.4, required in the analysis of
Section 5, could be replaced by the fact that r -
and Nemirovskii (1994, Proposition 2.3.4).) Our analysis shows, however, that the homogeneity
assumption used in Nesterov and Nemirovskii (1994) is not needed to prove self-concordance
for the semidefinite volumetric barrier.
The combined volumetric-logarithmic barrier for S is the function
where V (\Delta) is the volumetric barrier, f(\Delta) is the logarithmic barrier, and ae is a positive scalar.
The combined barrier was introduced for polyhedral sets in Vaidya and Atkinson (1993), and
extended to semidefinite constraints in Nesterov and Nemirovskii (1994). Our main result on
the self-concordance of V ae (\Delta) is the following.
Theorem 3.2 Let
each A i , and C,
are m \Theta m symmetric matrices. Assume that n ! m, and let
ae (\Delta) is a #-self-concordant barrier for S, for
Theorems 3.1 and 3.2 imply that if m AE n, then the self-concordance parameter # for
the volumetric or combined barrier for S (particularly the latter) can be lower than m, the
parameter for the logarithmic barrier. It follows that for m AE n the complexity of interior-point
algorithms for the minimization of a linear, or convex quadratic, function over S may be
improved by utilizing V (\Delta) or V ae (\Delta) in place of f(\Delta).
4 The Volumetric Barrier
Let f(\Delta) be the logarithmic barrier for S, as defined in the previous section. It is well known
(see for example Vandenberghe and Boyd (1996)) that the first and second partial derivatives
of f(\Delta) at an interior point of S are given by:
where throughout we use possible to reduce notation. Let A be the m 2 \Theta n
matrix whose ith column is
where the second equality uses Proposition 2.2 (5), the Hessian matrix
can be represented in the form (see Alizadeh (1995))
Note that positive definite under the assumptions that and that
the matrices fA i g are linearly independent.
Our first goal in this section is to obtain Kronecker product representations for the gradient
and Hessian of V (\Delta). To start, it is helpful to compute
where the second equality uses (2). In addition, using (4) and the definitions
of\Omega and\Omega S , it
is easy to see that
@
\Gamma1\Omega
Now applying the chain rule, (1), and (5), we find that
'-
A
\Gamma1\Omega
A
\Gamma1\Omega
\Gamma1=2\Omega S I
where the last equality uses Proposition 2.2 (1), S \Gamma1=2 is the unique positive definite matrix
having (S \Gamma1=2
\Gamma1=2\Omega S \Gamma1=2 ]A
A T [S
A T [S
is the orthogonal projection onto the range of [S
\Gamma1=2\Omega S \Gamma1=2 ]A. Note that the jth column of
[S
\Gamma1=2\Omega S \Gamma1=2 ]A is exactly
[S
using Proposition 2.2 (5). It follows that P is a representation, as an m 2 \Theta m 2 matrix, of the
projection onto the subspace of R m\Thetam spanned by fS \Gamma1=2 A j S \Gamma1=2 ng.
We will next compute the second partial derivatives of V (\Delta). To start, using (2) and (5),
we obtain
\Gamma1\Omega
\Gamma1\Omega
Also, using (4) and the definition
@
\Gamma1\Omega
'\Omega
\Gamma1\Omega
j\Omega
\Gamma1\Omega
Combining (6), (9), and (10), and using Proposition 2.1 (2), we obtain
are the n \Theta n matrices having
\Gamma1\Omega
\Gamma1\Omega
\Gamma1\Omega
\Gamma1\Omega
Theorem 4.1 For any x having
\Gamma1\Omega
2\Omega
I
\Gamma1\Omega
\Gamma1\Omega
Note that from Proposition 2.1 (3) and Proposition 2.2 (6) we immediately have - T Q- 0
are all psd. Since - is arbitrary, it follows that Q - 0, T - 0.
In addition, the fact that P is a projection implies that
2\Omega
where the last equality uses Proposition 2.2 (2). Applying Proposition 2.1 (4), we conclude
that
2\Omega
which is exactly is arbitrary, we have shown that T -
(1=2)(Q +R), which together with (11), Q - 0, and T - 0 implies that
To complete the proof we must show that R(x) - Q(x). Let - i ,
eigenvectors of -
B, with corresponding eigenvalues - i , m. Then (see Horn and
Johnson (1991, Theorem 4.4.5)) -
2\Omega
S I has orthonormal eigenvectors -
with corresponding eigenvalues (1=2)(- 2
Johnson (1991, Theorem
B has the same eigenvectors -
corresponding eigenvalues - i - j . It then
follows from (-
2\Omega
and Proposition 2.1 (4) then implies that P ffl ( -
2\Omega
B), which is exactly
is arbitrary we have shown that Q - R, as required. 2
Theorem 4.1 generalizes a similar result (Anstreicher (1996, Theorem A.4)) for the polyhedral
volumetric barrier. It follows from Theorem 4.1 that V (\Delta) is convex on the interior of
S. In the next theorem we demonstrate that in fact V (\Delta) is strictly convex. Theorem 4.2 is
also a direct extension of a result for the polyhedral volumetric barrier; see Anstreicher (1996,
Theorem A.5).
Theorem 4.2 Let x have S(x) - 0. Then Q(x) - (1=m)H(x).
are the eigenvalues of -
B, with corresponding orthonormal eigenvectors
As described in the proof of Theorem 4.1, -
2\Omega
I then has a full set of
orthonormal eigenvectors -
corresponding eigenvalues (1=2)(- 2
It follows from (13) that
On the other hand, P
B) implies that
together imply that - T Q-
k-k, from which it follows that - T Q- (1=m)k-k
For a given -
the conclusion of Theorem 4.2 is that
A strengthening of (23), using j -
Bj in place of k -
Bk, is a key element in our analysis of the
self-concordance of V (\Delta), in the next section. The next theorem gives a remarkably direct generalization
of a result for the polyhedral volumetric barrier; see Anstreicher (1996, Proposition
2.3).
Theorem 4.3 Let x have
be orthonormal eigenvectors of -
B, with corresponding eigenvalues
B) can be written P
j we have
loss of generality (scaling - as needed, and re-ordering
indeces) we may assume that 1. Then (24) implies that
i , from (21), we are naturally led to consider the optimization problem
e:
For fixed the constraint in (25) implies that
so the objective value in (25) can be no lower than
A straightforward differentiation shows that the minimal value for (26) occurs when v 2
1=
m, and the value is
We have thus shown that if j -
Next we will obtain alternative representations of rV (x) and Q(x) that emphasize the
connection between the semidefinite volumetric barrier and the volumetric barrier for a polyhedral
set. For fixed x with
the linear span of fU ng is equal to the span of f -
ng. (Such fU i g may
be obtained by applying a Gram-Schmidt procedure to f -
U be the m 2 \Theta n matrix
whose ith column is vec(U i ), and let
k . Then from (8), can be written
in the form It follows, from (7), that
A
A
A
A
Similarly, from (12) we have
A
A
A
The characterizations of rV (x) and Q(x) given in (27) and (28) are very convenient for
the proof of the following theorem, which will be required in the analysis of self-concordance
in the next section.
Theorem 4.4 Let x have
Proof: From (28) we have
using Proposition 2.2 (5). Letting -
A be the m 2 \Theta n matrix whose ith column is vec( -
can then write
A T
In addition, it follows from (27) that
A T vec(\Sigma): (30)
Combining (29) and (30), we obtain
A
A T
A
A T vec(\Sigma)
A
A T
A
A T
because
k , and tr(U 2
each k, by construction. 2
One final point concerning the matrix \Sigma is the issue of uniqueness, for a given
Since \Sigma is defined above in terms of fU i g, and the fU i g are not unique, it is not at all obvious
that \Sigma is unique. We will now show that \Sigma is invariant to the choice of fU i g, and is therefore
unique. To see this, note that by definition
denotes the ith column of U k (recall that each U k is symmetric by construction).
denote the ith elementary vector, and let I be an m \Theta m identity matrix. By
inspection [e
i\Omega I] T U is then the m \Theta n matrix whose kth column is (U k ) i . It follows from (31)
that
j\Omega I][e
where P is the projection from (8). Since P is uniquely determined by fA i g and
is also unique, as claimed.
In the following table we give a summary of first and second order information for the logarithmic
and volumetric barriers, for polyhedral and semidefinite constraints. For the polyhedral
case we have A is an m \Theta n matrix whose ith column is a i . Given
x with
be the vector whose components are those of the diagonal of P , and
the volumetric barrier, in both the polyhedral and semidefinite cases, the matrix Q satisfies
3Q(x). Note that, as should be the case, all formulas for the semidefinite
case also apply to the polyhedral case, with -
Table
1: Comparison of Logarithmic and Volumetric Barriers
Polyhedral Semidefinite
Logarithmic
Volumetric
5 Self-concordance
In this section we obtain proofs for the self-concordance results in Theorems 3.1 and 3.2. We
begin with an analysis of the third directional derivatives of V (\Delta). Let x have S(x) - 0, and
Using (4), (9), and (13), we immediately obtain
@
\Gamma1\Omega
\Gamma1\Omega
[S
\Gamma1\Omega
. Moreover, from (4) it is immediate that
@
[S
\Gamma1\Omega
j\Omega
\GammaS
\Gamma1\Omega
Combining (32) and (33), and using Proposition 2.1 (2), we conclude that the first directional
derivative of - T Q(x)-, in the direction -, is given by
@
\Gamma1\Omega
\Gamma1\Omega
\Gamma3AH
\Gamma1\Omega
\GammaAH
\Gamma1\Omega
2\Omega
3\Omega
2\Omega
and P is defined as in (8). Very similar computations, using (14)
and (15), result in
2\Omega
2\Omega
Combining (11) with (34), (35), and (36), we obtain the third directional derivative of V (\Delta):
Theorem 5.1 Let x have
Proof: Using the fact that
2\Omega
3\Omega
2\Omega
from Proposition 2.2 (2), (37) can be re-written as
2\Omega
2\Omega
We will analyze the two terms in (38) separately. First, from (17) we have
2\Omega
2\Omega
B]:
Using (18), and the similar relationship [ -
2\Omega
S I], it follows that
2\Omega
2\Omega
2\Omega
be the eigenvalues of -
B. Then (see Horn and Johnson (1991, Theorem
4.4.5)) the eigenvalues of [ -
are of the form (1=2)(-
Bj I
Using (39), (40), the fact that [ -
2\Omega
Proposition 2.1 (4), we then obtain
2\Omega
2\Omega
In addition, the fact that [ -
2\Omega
have the same eigenvectors implies that
2\Omega
2\Omega
2\Omega
and therefore fi fi fi fi
2\Omega
2\Omega
The proof is completed by combining (38), (41), (42), and (13). 2
Using Theorem 5.1 we can now proof the first main result of the paper, characterizing the
self-concordance of V (\Delta).
Proof of Theorem 3.1: The fact that V (x) !1 as x approaches the boundary of S follows
from (3), and the fact that S(x) is singular on the boundary of S. Combining the results of
Theorems 4.3 and 5.1, we obtain
using the fact that - T Q(x)- T r 2 V (x)[-], from Theorem 4.1. In addition,
(see Horn and Johnson (1985, Corollary
7.7.4)), so Theorem 4.4 implies that
The proof is completed by noting the effect on (43) and (44) when V (\Delta) is multiplied by the
Next we consider the self-concordance of the combined volumetric-logarithmic barrier V ae (\Delta),
as defined in Section 3. We begin with some well-known properties of the logarithmic barrier
f(\Delta).
Lemma 5.2 Let x have
Proof: From (3) and (5) we obtain
\Gamma1\Omega
\Gamma1=2\Omega
A
for each i. It follows easily from (45) that
A
and therefore
It is then immediate from the fact that j -
Bj (see the proof of Theorem 5.1) that
where the final equality uses (19). 2
It follows from Lemma 5.2, the fact that j -
Bk, and
that f(\Delta) is an m-self-concordant barrier for S, as shown by Nesterov and Nemirovskii (1994).
Using Lemma 5.2 we immediately obtain the following generalization of Theorem 5.1.
Corollary 5.3 Let x have
Proof: Combining Theorem 5.1 and Lemma 5.2, we obtain
Next we require a generalization of Theorem 4.3 that applies with Q(x)+aeH(x) in place of
Q(x). The following theorem obtains a direct extension of a result for the polyhedral combined
barrier (Anstreicher (1996, Theorem 3.3)) to the semidefinite case. To prove the theorem we
will utilize the matrices fU k g, as defined in Section 4, to reduce the theorem to a problem
already analyzed in the proof of Anstreicher (1996, Theorem 3.3).
Theorem 5.4 Let x have
be orthonormal eigenvectors of -
B, with corresponding eigenvalues
. By the definition of fU k g, there is a vector -
so that
and therefore k -
-k. It follows from (46) that for each
and therefore
-, where W is the m \Theta n matrix with
Let U be the m 2 \Theta n matrix whose kth column is vec(U k ), and let V be the m 2 \Theta m matrix
whose ith column is -
From (47) we then can then write
the ith row of W . Then
where P is the projection matrix from (8). Using (19), (21), and (48) we then have
(w T
Moreover it is clear that
(w T
-k1 . We are
now exactly in the structure of the proof of Anstreicher (1996, Theorem 3.3), with U of that
proof replaced by the matrix W . In that proof it is shown that the solution objective value of
the problem
min
(w T
(w T
can be no lower thanq
It follows that - T (Q(x)
The final ingredient needed to prove the self-concordance of V ae (\Delta) is the following simple
generalization of Theorem 4.4.
Theorem 5.5 Let x have
Proof: From the representations in Table 1 we easily obtain
A:
Let It follows that
ae
ae
A
A[I\Omega
A
A T
[I\Omega \Sigma 1=2
ae
ae
ae
Using the above results we can now prove the second main result of the paper, characterizing
the self-concordance of the combined volumetric-logarithmic barrier for S.
Proof of Theorem 3.2: Combining the results of Corollary 5.3 and Theorem 5.4, with
mp
(D
using the fact that - T Q(x)- T r 2 V (x)[-], from Theorem 4.1. In addition,
(1985, Corollary 7.7.4)), so Theorem 5.5 implies that
The proof is completed by noting the effect on (49) and (50) when V ae (\Delta) is multiplied by the
--R
Interior point methods in semidefinite programming with applications to combinatorial optimization.
Large step volumetric potential reduction algorithms for linear pro- gramming
Volumetric path following algorithms for linear programming.
On Vaidya's volumetric cutting plane method for convex program- ming
Towards a practical volumetric cutting plane method for convex programming.
Ellipsoidal approximations of convex sets based on the volumetric barrier.
Kronecker Products and Matrix Calculus: with Applications
Matrix Analysis
Topics in Matrix Analysis
A new algorithm for minimizing convex functions over convex sets.
A technique for bounding the number of iterations in path following algorithms.
programming.
--TR | self-concordance;volumetric barrier;semidefinite programming |
355443 | General Interior-Point Maps and Existence of Weighted Paths for Nonlinear Semidefinite Complementarity Problems. | Extending the previous work of Monteiro and Pang (1998), this paper studies properties of fundamental maps that can be used to describe the central path of the monotone nonlinear com-plementarity problems over the cone of symmetric positive semidefinite matrices. Instead of focusing our attention on a specific map as was done in the approach of Monteiro and Pang (1998), this paper considers a general form of a fundamental map and introduces conditions on the map that allow us to extend the main results of Monteiro and Pang (1998) to this general map. Each fundamental map leads to a family of "weighted" continuous trajectories which include the central trajectory as a special case. Hence, for complementarity problems over the cone of symmetric positive semidefinite matrices, the notion of weighted central path depends on the fundamental map used to represent the central path. | Introduction
section1
In a series of recent papers (see Kojima, Shida and Shindoh 1997, 1998, 1999, Kojima, Shindoh
and Hara 1997, Shida and Shindoh 1996, Shida, Shindoh and Kojima 1997, 1998), Hara, Kojima,
Shida and Shindoh have introduced the monotone complementarity problem in symmetric matrices,
studied its properties, and developed interior-point methods for its solution. A major source where
this problem arises is a convex semidefinite program which has in the last couple years attracted a
great deal of attention in the mathematical programming literature (see for example Alizadeh 1995,
Alizadeh, Haeberly and Overton 1997, 1998 Luo, Sturm and Zhang 1996, 1998, Monteiro 1997, 1998,
Monteiro and Tsuchiya 1999, Monteiro and Zhang 1998, Nesterov and Nemirovskii 1994, Nesterov
This work has been partly supported by the National Science Foundation under grants CCR-9700448, CCR-9902010
and INT-9600343.
y School of Industrial and Systems Engineering, Georgia Tech, Atlanta, GA 30332. (monteiro@isye.gatech.edu)
and Todd 1997, 1998, Potra and Sheng 1998, Ramana, Tun-cel and Wolkowicz 1997, Shapiro 1997,
Vandenberghe and Boyd 1996, Zhang 1998).
denote the m-dimensional real Euclidean space, S n denote the space of n \Theta n real
symmetric matrices, S n
++ denote the subsets of S n consisting of the positive semidefinite
and positive definite matrices, respectively, and C(-) j
We denote the closure of a subset E of a metric space by cl E. Given a continuous map
the complementarity problem which we shall study in this paper is
to find a triple (X; Y; z) 2 S n \Theta S n \Theta ! m such that
eq:psd cp
F (X; Y;
It is known (see the cited references) that there are several equivalent equations to represent the
complementarity condition (X; Y problem. Associated with each of these equivalent
equations, we can define an interior-point map that not only provides the foundation for developing
path-following interior point methods for solving problem (1) but also serves to generalize the notion
of weighted central path from linear programming to the context of the complementarity problem (1).
In the work of Monteiro and Pang (1998), they focus on the equivalent complementarity equation
(XY +Y several properties of the associated interior-point map. The main goal of
this paper is to generalize the work of Monteiro and Pang (1998) to other equivalent complementarity
equations having the general form \Phi(X; Y is a continuous map such that:
++ \Theta S n
In terms of
\Phi, problem (1) becomes equivalent to finding a triple (X; Y; z) 2 D \Theta ! m such that H(X; Y;
m is the fundamental interior-point map (associated with \Phi)
defined by
F (X; Y; z)
For the map \Phi(X; Y
and the open convex cone
++ , Monteiro and Pang (1998) establish under some monotonicity conditions on the map F
that the system
sys.H H(X; Y;
A
have the following properties:
(P1) it has a solution for every (A; B) 2 cl V \Theta F++ , where F++ j F (S n
++ \Theta S n
(P2) the solution, denoted (X(A; B); Y (A; B); z(A; B)), is unique when (A;
if a sequence converges to a limit (A1 ; B1 then the sequence
converges to (X(A1 ; B1
if a sequence converges to a limit (A1 cl V \Theta F++ , then the
sequence ))g is bounded, and that any of its accumulation
point
Note that when 0 2 F++ , (P1) implies the existence of a solution of (1), and (P2) implies the
well-definedness of the weighted central paths passing through points in V , that is paths consisting
of the unique solutions of system (3) with is a fixed
point in V .
Under appropriate conditions on the map \Phi and the open convex cone V , we show in this paper
that the above properties also hold with respect to the general interior-point map (2). We also
illustrate how our general framework applies to the following specific central-path maps:
where LX is the lower Cholesky factor of X , U Y is the upper Cholesky factor of Y and W is the unique
symmetric positive definite matrix such that
be easily verified to be a special case of our general framework using the results of Monteiro and
Pang 1998.) Observe that the third and fourth maps are only defined for points (X; Y ) in the set
++ \Theta S n
++ ), and hence they illustrate the need for considering maps \Phi whose domain are not
the whole set S n
. The approach of this paper is based on the theory of local homeomorphic
maps. One of the preliminary steps of our analysis is to establish that H restricted to the set U \Theta ! m
is a local homeomorphism, where U (V). For this property to hold for the above cited maps, it
is necessary to choose the set V to be an open convex cone smaller than S n
++ , which is the cone used
in connection with the map \Phi(X; Y in the analysis of Monteiro and Pang (1998).
In fact, the sets V associated with the four maps above are appropriate conical neighborhoods of the
line contained in S n
++ .
Sturm and Zhang (1996) define a notion of weighted center for semidefinite programming based
on the central path map \Phi(X; Y denotes the eigenvalues of XY arranged in
nondecreasing order. According to their definition, given a vector w
++ such that w
a point (X; Y; z) 2 S n
++ \Theta S n
is an w-weighted center if F (X; Y;
For linear maps F associated with semidefinite programming problems, they show the existence of
an w-center for any such w. However, their w-center is not unique and hence does not lead to the
notion of weighted central paths as our approach does.
Finally, we observe that interior-point algorithms for solving the complementarity problem (1)
which make use of the fundamental interior-point map (2) have been proposed in Wang et al. (1996)
and Monteiro and Pang (1999).
This paper is organized as follows. Section 2 contains an exposition of the main results of this
paper whose proofs are given in the subsequent sections. Section 2 is divided into three subsections.
Subsection 2.1 summarizes some important definitions and facts from the theory of local homeomorphic
maps. Subsection 2.2 introduces the key conditions on the map F and the pair (\Phi; V), and gives
a few examples of pairs (\Phi; V) which satisfy these conditions. In Subsection 2.3, we state the main
result of this paper and some of its consequences, including its specialization to the context of the
convex nonlinear semidefinite programming problem. In Section 3, we provide the proof of the main
result. Finally, in Section 4, we develop some technical results which allow us to verify that the pairs
introduced in Section 2 satisfy our basic assumptions.
The following notation is used throughout this paper. The symbols ⪰ and ≻ denote, respectively,
the positive semidefinite and positive definite ordering over the set of symmetric matrices; that is,
X ⪰ Y (or Y ⪯ X) means that X − Y is positive semidefinite, and X ≻ Y (or Y ≺ X) means that X − Y is
positive definite. The set of all n × n matrices with real entries is denoted by M^n. Let M^n_+
and M^n_{++} denote the subsets of M^n consisting of the matrices whose symmetric part is positive
semidefinite and positive definite, respectively, and let M^n_? denote the subspace of M^n consisting of the skew-symmetric matrices. For A ∈ M^n, let
tr(A) ≡ Σ_i A_ii denote the trace of A. For A, B ∈ M^n, A • B ≡ tr(A^T B) denotes the
inner product in M^n. For any matrix A ∈ M^n, let
‖A‖ ≡ max{√λ : λ is an eigenvalue of A^T A}
denote its (spectral) norm. For a matrix A ∈ M^n with all real eigenvalues, we denote its smallest and
largest eigenvalues by λ_min(A) and λ_max(A), respectively. Finally, define the sets
2 Main Assumptions and Results
In this section, we state the assumptions imposed on the map F and the pair (\Phi; V) and give a
few examples of pairs (\Phi; V) satisfying the required conditions. We also state the main result and
its consequences, including its specialization to the context of the convex nonlinear semidefinite programming
problem. This section is divided into three subsections. The first subsection summarizes
a theory of local homeomorphic maps defined on metric spaces; the discussion is very brief. We refer
the reader to Section 2 and the Appendix of Monteiro and Pang (1996), Chapter 5 of Ortega and
Rheinboldt (1970), and Chapter 3 of Ambrosetti and Prodi (1993) for a thorough treatment of this
theory. The second subsection introduces the key conditions on the map F and the pair (\Phi; V), and
gives a few examples of pairs (\Phi; V) which satisfy these conditions. The third subsection states the
main result and its corollaries.
2.1 Local homeomorphic maps
This subsection summarizes some important definitions and facts from the theory of local homeomorphic
maps.
If M and N are two metric spaces, we denote the set of continuous functions from M to N
by C(M;N) and the set of homeomorphisms from M onto N by Hom(M;N ). For G 2 C(M;N ),
Eg. The set
simply denoted by G \Gamma1 (v). Given G 2 C(M;N ),
that G(D) ' E, the restricted map ~
defined by ~
denoted by
then we write this ~
G simply as GjD . We will also refer to Gj (D;E) as "G restricted
to the pair (D; E)", and to GjD as "G restricted to D". The closure of a subset E of a metric space
will be denoted by cl E. Any continuous function from a closed interval of the real line ! into a
metric space will be called a path. We say that partition of the set V if
space M is said to be connected if there exists no
partition of M for which both O 1 and O 2 are non-empty and open.
In the rest of this subsection, we will assume that M and N are two metric spaces and that
1 The map G 2 C(M;N) is said to be proper with respect to the set E ' N if G
M is compact for every compact set K ' E. If G is proper with respect to N , we will simply say
that G is proper.
Our analysis relies upon the following two well-known results. The first one is a classical topological
result whose proof can be found in Chapter 3 of Ambrosetti and Prodi (1993) and in the Appendix
of Monteiro and Pang (1996).
Proposition 1 main Assume that G : M ! N is a local homeomorphism. If N is connected
G is proper if and only if the number of elements of G \Gamma1 (v) is finite and constant for v 2 N .
The next result is derived in Section 2 of Monteiro and Pang (1996) and its proof is an immediate
consequence of classical topological results.
Proposition 2 cor1 Let M 0 ' M and N 0 ' N be given sets satisfying the following conditions:
is a local homeomorphism and ; . Assume that G is proper with respect to
some set E such that N 0 ' E ' N . Then G restricted to the local
homeomorphism. If, in addition, N 0 is connected, then G(M 0 cl N 0 .
2.2 Fundamental assumptions and examples of central-path maps
In this subsection, we state the conditions that will be imposed on the map F and the pair (\Phi; V)
during our analysis of the properties of the fundamental map (2). We also give a few examples of
pairs (\Phi; V) which satisfy the required conditions.
We start by giving a few definitions.
defined on a subset dom (J) of M n \Theta M n \Theta ! m
is said to be (X; Y )-equilevel-monotone on a subset H ' dom (J) if for any (X; Y; z) 2 H and
dom (J), we will simply say that J is (X; Y )-equilevel-monotone.
In the following two definitions, we assume that W , Z and N are three normed spaces and that
/(w; z) is a function defined on a subset of W \Theta Z with values in N .
z-boundedness The function /(w; z) is said to be z-bounded on a subset H '
dom (/) if for every sequence f(w k ; z k )g ' H such that fw k g and f/(w k ; z k )g are bounded, the
sequence fz k g is also bounded. When dom (/), we will simply say that / is z-bounded.
Definition 4 z-injective The function /(w; z) is said to be z-injective on a subset H '
dom (/) if the following implication holds: (w; z)
dom (/), we will simply say that / is z-injective.
We now state the main assumptions that will be made on the map F and the pair (\Phi; V).
Assumption 1 ass1 The
monotone, z-injective on S n
++ \Theta S n
Assumption 2 ass2 We impose the following conditions on the pair (\Phi; V):
is a continuous map such that C(0) [ (S n
++ \Theta S n
is an open
connected set such that
(b) for any (X; Y ) 2 D and - 0, we have
is a continuous strictly increasing function such that
(c) if the sequence f(X k ; Y k )g ' D is such that f\Phi(X k ; Y k )g is bounded, then fX k ffl Y k g is
(d) the set U j \Phi \Gamma1 (V) is contained in S n
++ \Theta S n
is a closed set;
(e) \Phi is differentiable on S n
++ \Theta S n
++ and for every (X; Y the following implication holds:
We now make a few remarks regarding the above assumptions. First, condition (a) is stated so
as to cover some relevant examples of maps \Phi which are defined only in the set C(0) [ (S n
++ \Theta S n
(see Examples 4 and 5 below). Second, Assumption 2(c) implies that the map ' is onto, and hence
that its inverse is defined everywhere in !+ . Third, conditions (a), (b) and (d) of Assumption 2
imply that U is a nonempty open set. Fourth, condition (b) implies that the curve consisting of the
solutions of the systems
eee
as - ? 0 varies, is a parametrization of the central path associated with the complementarity problem
(1), that is the set of solutions of the systems
condition (e) is probably the most crucial one. Under Assumption 1, this condition is equivalent to
the well-definedness of the Newton direction with respect to system (4) for any point (X; Y; z) in U .
We next give some examples of pairs (\Phi; V) which satisfy Assumption 2.
Example 1: Let D j S n
be the map defined by \Phi(X; Y
++ .
Example 2: Let D j S n
be the map defined by \Phi(X; Y
Example 3: Let D j S n
be the map defined by \Phi(X; Y
Example 4: Let D
++ \Theta S n
be the map defined by
++ \Theta S n
where LX is the lower Cholesky factor of X , that is LX is the unique lower triangular matrix
with positive diagonal elements satisfying
Example 5: Let D
++ \Theta S n
be the map defined by
++ \Theta S n
where LX is the lower Cholesky factor of X and U Y is the upper Cholesky factor of Y , that is U Y is
the unique upper triangular matrix with positive diagonal elements satisfying U Y U T
Example
++ \Theta S n
be the map defined by
++ \Theta S n
is the unique symmetric positive definite matrix satisfying
The map of Example 1 was extensively studied in Monteiro and Pang (1998), and is a special
case of the general framework studied in this paper. The verification that these examples satisfy
Assumption 2 will be given in Section 4. The next result shows that each map above generates a
whole family of maps that satisfy Assumption 2.
Proposition 3 scale-family Let (\Phi; V) be a pair satisfying Assumption 2. Then:
i) for every ff ? 0, the pair (ff\Phi; ffV) satisfies Assumption 2;
ii) for every nonsingular matrix P , the pair (\Phi P ; V) satisfies Assumption 2, where \Phi
S n is the map defined as \Phi P (X; Y
Proof. The proof is just a simple verification.
In all six examples above, the domain D P of the map \Phi P remains invariant, that is D
every P . For a pair (V ; \Phi) and a nonsingular matrix P , let U
P (V). In general, U P may differ
from U , but for the maps of Examples 2 and 4, we have U every nonsingular matrix P .
Moreover, the maps \Phi P corresponding to Examples 1-6 become
e
e
Y
e
Y
respectively, where L e
X is the lower Cholesky factor of e
Y is the upper Cholesky
factor of e
In the last equation, we used the fact that f
is the unique
symmetric matrix such that f
Y .
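For concreteness, the scaling convention behind Φ_P can be written out; the formulas below (in particular X̃ = PXP^T and Ỹ = P^{-T}YP^{-1}) are assumptions of this sketch, chosen to be consistent with Proposition 3 and with the role of W̃ described just above, rather than a quotation of the paper's own display.
\[
  \Phi_P(X,Y) \;\equiv\; \Phi(\widetilde{X},\widetilde{Y}),
  \qquad \widetilde{X} \equiv P X P^T,
  \qquad \widetilde{Y} \equiv P^{-T} Y P^{-1},
\]
\[
  \widetilde{W}\,\widetilde{X}\,\widetilde{W} \;=\; \widetilde{Y},
  \qquad \widetilde{W}\ \text{symmetric positive definite.}
\]
Under this convention the scaled map of Example 1, for instance, becomes Φ_P(X, Y) = (X̃Ỹ + ỸX̃)/2.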
2.3 The main result and its consequences
In this subsection we state the main result of this paper. We also state some of its consequences,
including its specialization to the context of the convex nonlinear semidefinite programming problem.
The main result extends Theorem 2 of Monteiro and Pang (1998) to fundamental maps (2) associated
with pairs (\Phi; V) satisfying Assumption 2. It essentially states that the fundamental map (2) has
some nice homeomorphic properties.
Theorem 1 main2 Assume that F : S n
Assumption 1 and the
Assumption 2. Then the following statements hold for the map H:
(a) H is proper with respect to cl V \Theta F++ ;
(b) H maps U \Theta ! m homeomorphically onto V \Theta F++ , where U
cl V \Theta F++ .
Theorem 1 establishes the claimed properties (P1)-(P4) of the map H stated in the Introduction.
Indeed, (P1) follows from conclusion (c); (P2) and (P3) follow from (b); and (P4) follows from (a).
In what follows, we give two important consequences of the above theorem, assuming that 0 2 F++ .
The first one, Corollary 1, has to do with the central path for the semidefinite complementarity
problem (1); the second one, Corollary 2, is a solution existence result for the same problem.
Corollary 1 co:path existence Suppose that F : S n
Assumption
1 and the pair (\Phi; V) satisfies Assumption 2. Assume that (0; cl V \Theta F++ and let
be paths such that P
every t 2 (0; 1]. Then there exist (unique) paths
++ such that
Moreover, every accumulation point of (X(t); Y (t); z(t)) as t tends to 0 is a solution of the complementarity
problem (1).
Corollary 2 co:solution existence Assume that F : S n
Assumption 1 and the pair (\Phi; V) satisfies Assumption 2. If there exists (X
that F (X cl V, the system
cl U
has a solution, which is unique when A 2 V.
We now discuss the specialization of Theorem 1 to the context of the convex nonlinear semidefinite
program. The complementarity problem (1) arises as the set of first-order necessary optimality
conditions for the following nonlinear semidefinite program (see for example Shapiro 1997):
minimize φ(x)
subject to G(x) ∈ −S^n_+ ,  h(x) = 0,                                  (5)
where φ : ℝ^m → ℝ, G : ℝ^m → S^n and h : ℝ^m → ℝ^p are given smooth maps. Indeed, it is well-known
(see for example Shapiro 1997) that, under a suitable constraint qualification, if x is a local optimal
solution of the semidefinite program, then there must exist j
such that
is the Lagrangian function defined by
Letting
r x L(x; U;
we see that the first-order necessary optimality conditions for problem (5) are equivalent to the
complementarity problem (1) with
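To spell out the omitted derivation, the following is a plausible rendering of the Lagrangian and of the first-order conditions; the multiplier symbols (U, η) and the identification of (X, Y, z) at the end are assumptions of this sketch, based on the surrounding discussion, rather than the paper's own display.
\[
  L(x, U, \eta) \;\equiv\; \varphi(x) + G(x) \bullet U + h(x)^T \eta ,
\]
\[
  \nabla_x L(x, U, \eta) = 0, \qquad h(x) = 0, \qquad G(x) \preceq 0, \qquad U \succeq 0, \qquad U\,(-G(x)) = 0 .
\]
Setting X ≡ U, Y ≡ −G(x) and z ≡ (x, η) then puts these conditions in the form of the complementarity problem (1).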
The map G is said to be positive semidefinite convex (psd-convex) if
The following result follows as an immediate consequence of Theorem 1 and the results of Section 5
of Monteiro and Pang (1996b).
Theorem 2 properties of F Suppose that (\Phi; V) is a pair satisfying Assumption 1, the function
continuously differentiable and convex, G continuously differentiable
and is an affine function such that the (constant) gradient matrix rh(x)
has full column rank and the feasible set X j fx any one
of the following conditions holds:
(a) ' is strictly convex;
(b) for every U 2 S n
++ , the map x strictly convex;
(c) each G ij is an analytic function,
then the following statements hold for the maps F and H given by (8) and (2), respectively:
(i) H is proper with respect to cl V \Theta F (S n
++ \Theta S n
homeomorphically onto V \Theta F (S n
++ \Theta S n
cl V \Theta F (S n
++ \Theta S n
(iv) the set F (S n
++ \Theta S n
Proof. The assumptions of the theorem together with Proposition 4, Lemmas 7, 8 and 9 of Monteiro
and Pang (1996b) imply that the map F given by (8) is (U; V )-equilevel-monotone on S n
(x; j)-injective on S n
++ \Theta S n
j)-bounded on S n
Hence, the conclusions
(i), (ii) and (iii) of the theorem follows directly from Theorem 1. It was shown in Theorem 4(iii) of
Monteiro and Pang (1996b) that the set F ( e
U \Theta ! m+p ) is convex, where e
++ \Theta S n
g. Since C(-) ' e
U
++ \Theta S n
++ for all - ? 0, it follows from Lemma 1(b) that
U \Theta ! m+p
++ \Theta S n
holds.
3 Proof of the Main Results
The goal of this section is to give the proof of Theorem 1. In the process of doing this, we will
first establish Theorem 1 for the case in which the map F is affine. Apart from the fact that an affine
considerably simplifies the analysis, we obtain through this case an important technical result
(Lemma 5) which plays an important role in establishing Theorem 1 for the case of a nonlinear map
F satisfying Assumption 1.
We first establish two important technical lemmas that hold for nonlinear maps F satisfying
Assumption 1.
m be a map satisfying Assumption 1.
Then, the following statements hold:
(a) F restricted to S n
++ \Theta S n
is an open map; in particular, F++ is an open set;
Proof. We first establish (a). Let ~
m be defined by ~
F (X; Y;
. Using the fact that F satisfies Assumption 1, it
is easy to see that ~
F is (X; Y )-equilevel-monotone, z-injective on M n
++ \Theta S n
Then, it follows from Lemma 11 of Monteiro and Pang (1998) that the map ~
defined by
~
~
F (X; Y; z)
maps M n
++ \Theta S n
homeomorphically onto ~
++ \Theta S n
By the domain invariance
theorem, this implies that ~
H , and hence ~
F , restricted to M n
++ \Theta S n
is an open map. This
conclusion together with a simple argument shows that F restricted to S n
++ \Theta S n
is also an
open map.
We now show (b). Clearly, F (C(-) To prove the other inclusion,
let B be an arbitrary element of F++ . Since by Corollary 3 of Monteiro and Pang (1998), the system
F (X; Y; has a (unique) solution in S n
++ \Theta S n
conclude that
A consequence of Lemma 1(b) is as follows. Assumption 2(a) implies the existence of
such that - 0 I 2 V . This together with (b) and (d) of Assumption 2 imply that C('
++ \Theta S n
++ . Hence, in view of Lemma 1(b), it follows that F (U \Theta ! m
properness
m be a map satisfying Assumption 1,
(\Phi; V) be a pair satisfying Assumption 2, and m be the map defined in
(2). Then, H is proper with respect to cl V \Theta F++ .
Proof. Let K be a compact subset of cl V \Theta F++ . We will show that H \Gamma1 (K) is compact from which
the result follows. We first show that H \Gamma1 (K) is closed. Since K is closed and H is continuous,
closed with respect to dom Hence, the closeness of H \Gamma1 (K) follows
if we show that H \Gamma1 (K) is contained in a closed subset of dom (H). Indeed, the definition of
H implies that H \Gamma1 (K) is contained in \Phi \Gamma1 (cl V) \Theta ! m , which by Assumption 2(d), is a closed
set. We next show that H \Gamma1 (K) is bounded. Indeed, suppose for contradiction that there exists a
sequence is compact and
we may assume without loss of generality that there exists F 1 2 F++ such
that
Clearly, we have F
++ \Theta S n
++
and Y 1 - 0, there exists j ? 0 such that the set
++ \Theta S n
contains clearly have that N1 is open, and using Lemma 1(a), it follows that
is an open set. Thus, by (10) and the fact that F 1 2 F (N1 ), we conclude that for all k
sufficiently large, say k
This inequality together with fact that ( e
imply
for every k - k 0 . Using the fact that fH(X k ; Y k ; z k )g ' K and K is bounded, we conclude that
Using Assumption 2(c) we have that fX k ffl Y k g is bounded. This fact
together with the above inequality implies that the sequences fX k g and fY k g are bounded. Since
must have lim k!1 kz k 1. From the fact that F is z-bounded,
we conclude that lim k!1 contradicting (10).
We now turn our attention to the case of an affine map F and derive a version of Theorem 1 in
this context (see Theorem 3 below). We begin with a result that contains some elementary properties
of affine maps.
Lemma 3 easy Assume that G is an affine map and let G
its linear part. Then the following statements hold:
(a) G is (X; Y )-equilevel-monotone if and only if
iii1
(b) G is z-injective if and only if
(c) G is z-injective if and only if G is z-bounded.
Proof. See Lemma 3 in Monteiro and Pang (1998).
The next lemma is an important step towards establishing the main result for the case in which
F is affine.
m be an affine map which is (X; Y )-
equilevel-monotone and z-injective, and let (\Phi; V) be a pair satisfying Assumption 2. Then the map
H defined in (2) restricted to U \Theta ! m is a local homeomorphism.
Proof. Since U \Theta ! m is an open set, it is sufficient to show that the derivative map H 0 (X;
is an isomorphism for every (X; Y; z) 2 U \Theta ! m . For this purpose,
fix any (X; Y; z) 2 U \Theta ! m . Since H 0 (X; Y; z) is linear and is a map between identical spaces, it is
enough to show that
Indeed, assume that the left-hand side of the above implication holds. By the definition H , we have
By (14) and Lemma 3(a), we have \DeltaX ffl \DeltaY - 0. This together with (15) and Assumption 2(e)
imply that Hence, by (14) and Lemma 3(b) we conclude that \Deltaz = 0. We have
thus shown that the implication (13) holds.
We are now ready to establish a version of the main result for the affine case.
Theorem 3 main1 Assume that F is an affine map which is
(X; Y )-equilevel-monotone and z-injective, and (\Phi; V) is a pair satisfying Assumption 2. Then, the
following statements hold:
(a) H maps U \Theta ! m homeomorphically onto V \Theta F++ ;
cl V \Theta F++ .
Proof. Let cl V \Theta F++ .
We will show first that these sets and the map G j H satisfy the assumptions of Proposition 2.
Indeed, first observe that H j M 0
is a local homeomorphism due to Lemma 4. The assumption that F
is (X; Y )-equilevel-monotone and z-injective together with Lemma 3(c) imply that F is z-bounded.
Hence, it follows from Lemma 2 that H is proper with respect to E. By the definition of M 0 ,
To show that H note that by Assumption 2(a),
there exists Letting we have by Assumption 2(b)
that \Phi(- 0 I
By Assumption 2(a) and the fact that a continuous map carries connected sets onto connected
sets, we conclude that N 0 j V \Theta F++ is a connected set. By Proposition 2, we conclude that H
restricted to the pair (H local homeomorphism, H(M 0
cl N 0 . Clearly, the last inclusion implies the second inclusion in b). The
first inclusion follows from the fact that \Phi \Gamma1 (cl V) being closed by Assumption 2(d) implies that
cl U ' \Phi \Gamma1 (cl V) ' D.
We now show (a). We have already shown that H maps M 0 onto N 0 and that ~
is a proper local homeomorphism. It remains to show that ~
H is one-to-one. Since N 0 is connected,
by Proposition 1 it remains to show that for some (A; B) 2 N , the inverse image
~
has at most one element. Indeed, let I and take B 2 F++ arbitrarily. Assume
that ( -
is such that ~
by the
definition of H , we have F ( -
I , which by Assumption 2(b) implies that
by Corollary 3 of Monteiro and Pang (1998), the system F (X; Y;
has a unique solution in S n
++ \Theta S n
++ \Theta S n
we conclude that ( -
is the only solution of ~
We use the above result to prove the following very important technical lemma.
Lemma 5 technical Let (\Phi; V) be a pair satisfying Assumption 2. Let (X
be elements of U such that
Proof. Assume for contradiction that (X It has been shown in the proof of Lemma 5
of Monteiro and Pang (1998) that there exists an affine which is (X; Y )-
equilevel-monotone and satisfies F (X Assumption 1 with
variable z is present). By Theorem 3(a), it follows that the associated map H (with
restricted to U is one-to-one. Moreover, (16) and the relation F (X imply that
The last two conclusions together with the fact that (X
then imply that (X
We are now ready to give the proof of Theorem 1.
Proof of Theorem 1: The proof is close to the one given for Theorem 3. It consists of showing
that the sets M j D cl V \Theta F++ ,
together with the map G j H , satisfy the assumptions of Proposition 2. But instead of using
Lemma 4 to show that H j M 0
is a local homeomorphism, we use Lemma 5 to prove that H j M 0
maps
homeomorphically onto H(M 0
is a continuous map from an open subset of the
vector space S n \Theta S n \Theta ! m into the same space, by the domain invariance theorem it suffices to show
that
is one-to-one. For this purpose, assume that H( -
U
Then, by the definition of H , we have F ( -
z) and
Y ). Since F is (X; Y )-equilevel-monotone, we conclude that ( -
Hence, by Lemma 5, we have ( -
Y ). This implies that F ( -
z), and by the
z-injectiveness of F it follows that - z = ~ z. We have thus proved that H j M 0
maps M 0 homeomorphically
is clearly a local homeomorphism and F is (X; Y )-equilevel-monotone and
z-bounded by assumption, it follows from Lemma 2 that a) holds. The proofs of b) and c) use the
same arguments as in the proof of Theorem 3.
4 Verification of Assumption 2 for Several Central-Path Maps
In this section we verify that the maps of Examples 1 to 6 satisfy Assumption 2.
Verification for Example 1: Let D j S n
be the map defined by \Phi(X; Y
++ . Conditions (a), (b) and (c) of Assumption 2 are straightforward, and
clearly we have that U is contained in S n
++ \Theta S n
++ . Since \Phi is continuous and its domain is closed, the
set \Phi \Gamma1 (cl V) is closed, and hence Assumption 2(d) holds. Assumption 2(e) follows from Theorem
3.1(iii) in Shida, Shindoh and Kojima (1998).
Verification for Example 2: Let D j S n
be the map defined by \Phi(X; Y
Conditions (a), (b) and (c) of
Assumption 2 are straightforward. As in Example 1, we easily see that \Phi \Gamma1 (cl V) is closed. Hence,
Assumption 2(d) holds. Finally, it follows from the proof of Lemma 2.3 of Monteiro and Tsuchiya
that the implication of Assumption 2(e) holds.
Verification for Example 3: Let D j S n
be the map defined by \Phi(X; Y
Conditions
(a), (b) and (c) of Assumption 2 are straightforward. As in Example 1, we easily see that \Phi \Gamma1 (cl V)
is closed. Hence, Assumption 2(d) holds. Finally, Assumption 2(e) follows from Proposition 4 below
with
Verification for Example 4: Let D
++ \Theta S n
be the map defined by
++ \Theta S n
where LX is the lower Cholesky factor of X . Also, let
Observe that the continuity of the map \Phi required in Assumption 2(a) follows from Lemma 6. The
remaining requirements in (a) and conditions (b) and (c) of Assumption 2 are straightforward. We
easily see that
++ \Theta S n
which, in view of Lemma 6, is a closed set. Hence, Assumption 2(d) holds. Finally, Assumption 2(e)
follows from Lemma 2.4 of Monteiro and Zanj'acomo (1997) with
Verification for Example 5: Let D
++ \Theta S n
be the map defined by
++ \Theta S n
where LX and U Y are the lower Cholesky factor of X and the upper Cholesky factor of Y , respectively.
Also, let
0g. Observe that the continuity of
the map \Phi required in Assumption 2(a) follows from Lemma 6. The remaining requirements in (a)
and conditions (b) and (c) of Assumption 2 are straightforward. We easily see that
++ \Theta S n
-pfor some - ? 0
which, in view of Lemma 6, is a closed set. Hence, Assumption 2(d) holds. Finally, Assumption 2(e)
follows from Proposition 5 below with
Verification for Example
++ \Theta S n
be the map defined by
++ \Theta S n
is the unique positive definite matrix such that
0g. Observe that the continuity of the map \Phi required in
Assumption 2(a) follows from Lemma 6. The remaining requirements in (a) and conditions (b) and
(c) of Assumption 2 are straightforward. We easily see that
ae
++ \Theta S n
oe
which, in view of Lemma 6, is a closed set. Hence, Assumption 2(d) holds. Assumption 2(e) follows
from Proposition 6 below.
Note that all the sets V in the above examples are convex cones containing the line f-I
We now present the technical results needed for the verification of the Examples 1 to 6 described
above.
Lemma 6 sets Let jjj \Delta jjj be an arbitrary norm in S n such that jjjH jjj - kHk for all H 2 S n .
++ \Theta S n
denote the map of Example 4, 5 or 6 described above. Then, \Phi is
a continuous map and, for any fl 2 (0; 1), the set defined by
++ \Theta S n
is a closed set.
Proof. First note that for (X; Y
++ \Theta S n
++ , we have
and
where the first equality of (19) follows from the fact that W 1=2 XYW \Gamma1=2 is a symmetric matrix. To
establish the continuity of the three maps, it is sufficient to show that if f(X
++ \Theta S n
converges to a point ( -
then the sequences f(W k
converge to 0. Indeed, ( -
implies that fX k ffl Y k g converges to 0, which, in
view of (18) and (19), implies that these three sequences converge to 0.
We now establish the closedness of the set (17), which we denote by -
N fl . It is sufficient to show
that if f(X
converges to ( -
.
and hence
due to the fact that jjj \Delta jjj - k \Delta k. Using (18), we easily see that fE k g is bounded. In view of (21),
this implies that f- k g is bounded. Without loss of generality, we may assume that f- k g converges to
some -
We now consider the two possible cases: -
implies that
equivalently lim k!0 \Phi(X For the map of Example 4, this means that
lim k!0 L T
and for the one in Example 5, that lim k!0 U T
Using
the fact that U T
lower triangular, the last limit implies that lim k!0 U T
both maps \Phi, it follows from (18) that fX k ffl Y k g converges to 0, and hence that ( -
Assume now that - ? 0. We will show that ( -
++ \Theta S n
++ , from which the conclusion that
letting k tend to infinity in (20) and using the fact that -
is continuous on S n
++ \Theta S n
++ . Indeed, assume for contradiction that ( -
++ \Theta S n
++ . Without
loss of generality, we may assume that -
X is singular. Consider first the map \Phi of Example 4. We
have
and since lim k!0 - min (X k Y k
we conclude that lim k!0 - min By (21) this
implies that -
contradicting the assumption that -
Consider now the map \Phi of Example 5. We have
where the inequality follows from the fact that the real part of the spectrum of a real matrix is
contained between the largest and the smallest eigenvalues of its Hermitian part (see p. 187 of Horn
and Johnson (1991), for example) and the fact that U T
being lower triangular has only real
eigenvalues. Note that the sequences fU Y k g and fL X kg are bounded, since kU Y k
U and -
L be accumulation points of fU Y k g and fL X kg, respectively. Clearly,
X, and since -
X is singular, it follows that -
L is also singular. Letting k tend to infinity in
(22), we conclude that
which as above yields the desired contradiction.
Finally, consider the map \Phi of Example 6. We have
and since lim k!0 - min (X k Y k
we conclude that lim k!0 - min By (21) this
implies that -
contradicting the assumption that -
The next three lemmas are used in the proof of Proposition 4.
Lemma 7 ineqtr Let G 2 M n be such that tr
Proof. Defining
? , and using the identities
easily see that
tr
The result now follows by adding the two identities and using the assumption that tr
be a matrix whose eigenvalues are all real and let
(0; 1=
be given. Then, the following implication holds:
Proof. Assume that the left hand side of the implication holds. Using the fact that E, and hence
I , has only real eigenvalues, it is easy to see that tr
and hence the first inequality of the right hand side of the implication holds. The second inequality
follows from Lemma 2.3.3 of Golub and Van Loan [5] and the fact that
Lemma 9 Lyap The following statements hold:
a) for every A 2 S n
++ and H 2 S n , the equation AU has a unique solution U
moreover, this solution satisfies kAUk F - kHk
++ denote the square root function is an analytic function
is the unique solution of X (X) is the
derivative of ' at X and ' 0 (X)H is the linear map ' 0 (X) evaluated at H.
Proof. See Lemmas 2.2 and 2.3 of Monteiro and Tsuchiya (1999).
Proposition 4 tech-bound2 Suppose that
the map of Example 3. If (X; Y
++ is a point such that k(X 1=2 Y 1=2 +Y 1=2 X 1=2 )=2\Gamma-I k F -
fl- for some - ? 0, then
F
where g
Proof. Using Lemma 9(a), it is easy to see that the equation \Phi 0 (X; Y )(\DeltaX; \DeltaY
to
where U and V are the unique solutions of the Lyapunov equations
Multiplying the first (resp., second) equation on the left and on the right by X \Gamma1=2 (resp., Y \Gamma1=2 )
and using Lemma 9(a), we obtain
Using the definition of g
\DeltaX and g
\DeltaY , it follows from (24) and (25) that
Taking the Frobenius norm of both sides of the last equation and using (26), Lemma 8 with
the triangular inequality for norms and the fact that g
obtain
F
F
\DeltaY k F
F
from which (23) follows.
The next result has a proof similar to the one of Proposition 4, and hence we omit it. Instead of
Lemma 9, its proof relies on lemma 2.2 of Monteiro and Zanj'acomo (1997).
Proposition 5 tech-bound3 Suppose that
++ \Theta
is the map in Example 5. If (X; Y
++ \Theta S n
++ is a point such that k(U T
F
where d
Y \DeltaX L \GammaT
X and d
Y \DeltaY LX .
Proposition 6 proposNT Let \Phi : C(0)[ (S n
++ \Theta S n
denote the map of Example 6 and
++ \Theta S n
++ be a point such that k\Phi(X; Y
Then,
Proof. Using the identity easily see that \Phi(X; Y Letting
we have that \Phi(X; Y Hence, the directional derivative of
denote the directional derivative of the function V at the point (X; Y ) along the
direction (\DeltaX; \DeltaY
++ , the condition \Phi 0 (X; Y )(\DeltaX; \DeltaY is equivalent to V
denote the directional derivative of the function W at the point (X; Y ) along the
direction (\DeltaX; \DeltaY ). Then, T satisfies the following equation
obtained from the identity Using Lemma 9(b) and the definition of V , it is easy to see
that the condition is equivalent to
where U is the unique solution of the Lyapunov equation
Letting g
\DeltaX
\DeltaY
the
identities (29), (30) and (31) become
~
\DeltaY tec2NT2 (32)
~
~
Subtracting (33) from (32) and using (34), we obtain that g
U T . This identity together
with (33) and the fact that
imply that
F
last relation implies that ~
Hence, by (32)-(34), we conclude that
equivalently
--R
Interior point methods in semidefinite programming with application to combinatorial optimization.
Complementarity and nondegeneracy in semidefinite programming.
A Primer of Nonlinear Analysis
Matrix Computations: Second Edition
Topics in Matrix Analysis
Reduction of monotone linear complementarity problem over cones to linear programs over cones.
Local convergence of predictor-corrector infeasible- interior-point algorithms for SDPs and SDLCPs
A predictor-corrector interior-point algorithm for the semidefinite linear complementarity problem using the Alizadeh-Haeberly-Overton search direction
Duality and self-duality for conic convex program- ming
Superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming
Polynomial convergence of primal-dual algorithms for semidefinite programming based on Monteiro and Zhang family of directions
Properties of an interior-point mapping for mixed complementarity problems
On two interior-point mappings for nonlinear semidefinite complementarity problems
A potential reduction Newton method for constrained equations.
Polynomial convergence of a new family of primal-dual algorithms for semidefinite programming
Implementation of primal-dual methods for semidefinite programming based on Monteiro and Tsuchiya Newton directions and their variants
A unified analysis for a class of path-following primal-dual interior-point algorithms for semidefinite programming
Interior Point Methods in Convex Programming: Theory and Application
Iterative Solution of Nonlinear Equations in Several Variables
A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming
Strong duality for semidefinite programming.
Convex Analysis
First and second order analysis of nonlinear semidefinite programs.
Monotone semidefinite complementarity problems
Centers of monotone generalized complementarity problems.
Existence of search directions in interior-point algorithms for the SDP and the monotone SDLCP
On Weighted Centers for Semidefinite Programming
programming.
An interior point potential reduction method for constrained equations.
On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming
--TR | interior point methods;maximal monotonicity;generalized complementarity problems;monotone maps;mixed nonlinear complementarity problems;nonlinear semidefinite programming;weighted central path;continuous trajectories |
355485 | Building tractable disjunctive constraints. | Many combinatorial search problems can be expressed as 'constraint satisfaction problems'. This class of problems is known to be NP-hard in general, but a number of restricted constraint classes have been identified which ensure tractability. This paper presents the first general results on combining tractable constraint classes to obtain larger, more general, tractable classes. We give examples to show that many known examples of tractable constraint classes, from a wide variety of different contexts, can be constructed from simpler tractable classes using a general method. We also construct several new tractable classes that have not previously been identified. | INTRODUCTION
Many combinatorial search problems can be expressed as 'constraint satisfaction
problems' [Montanari 1974; Mackworth 1977], in which the aim is to find an assignment
of values to a given set of variables subject to specified constraints. For example,
the standard propositional satisfiability problem [Garey and Johnson 1979]
may be viewed as a constraint satisfaction problem where the variables must be
assigned Boolean values, and the constraints are specified by clauses.
The general constraint satisfaction problem is known to be NP-hard [Montanari
1974; Mackworth 1977]. However, by imposing restrictions on the constraint inter-connections
[Dechter and Pearl 1989; Freuder 1985; Gyssens et al. 1994; Montanari
1974], or on the form of the constraints [Cooper et al. 1994; Jeavons et al. 1997;
Jeavons and Cooper 1995; Kirousis 1993; Montanari 1974; van Beek and Dechter
1995; van Hentenryck et al. 1992], it is possible to obtain restricted versions of the
problem that are tractable.
Now that a number of tractable constraint types have been identified, it is of
considerable interest to investigate how these constraint types can be combined,
to yield more general problem classes that are still tractable. This paper presents
the first general results of this kind: we identify conditions under which different
tractable constraint classes can be combined, in order to construct larger, more
general, tractable constraint classes.
We focus specifically on 'disjunctive constraints', that is, constraints which have
the form of the disjunction of two constraints of specified types. We show that
whenever we are given tractable constraint types with certain properties, then the
class of problems involving all possible disjunctions of constraints of these types is
also tractable. This allows new tractable constraint classes to be constructed from
simpler tractable classes, and so extends the range of known tractable constraint
classes.
We give examples to show that many known examples of tractable disjunctive
constraints over both finite and infinite domains can be constructed from simpler
classes using these results. In particular, we demonstrate that five out of the six
tractable classes of Boolean constraints identified by Schaefer in [Schaefer 1978] can
be obtained in this way (see Examples 5, 7 and 9). These include the standard Horn
clauses and Krom clauses of propositional logic. Furthermore, we show that similar
results hold for the 'max-closed' constraints first identified in [Jeavons and Cooper
1995], the 'connected row-convex' constraints first identified in [Deville et al. 1997]
(see also [Jeavons et al. 1998]), the ORD-Horn constraints over temporal intervals
described in [Nebel and Burckert 1995], the disjunctive linear constraints over the
real numbers described in [Jonsson and Backstrom 1998; Koubarakis 1996], the
'extended Horn clauses' described in [Chandru and Hooker 1991], and the tractable
set constraints described in [Drakengren 1997; Drakengren and Jonsson 1998;
Drakengren and Jonsson 1997]. In all of these cases our results lead to simplifications
of earlier proofs, and in many cases we are able to generalise the earlier results to
obtain larger families of tractable constraint classes. We also describe some new
tractable classes of constraints that can be derived from the same results.
The paper is organised as follows. In Section 2 we give the basic definitions
for the constraint satisfaction problem, and define the notion of a tractable set of
constraints. In Section 3 we describe how sets of constraints can be combined to
form disjunctive constraints, and identify a number of different conditions that are
sufficient to ensure tractability of these disjunctive constraints. In Section 4 we give
examples to illustrate how these results can be used to establish the tractability of
a wide variety of tractable constraint classes.
2. THE CONSTRAINT SATISFACTION PROBLEM
The 'constraint satisfaction problem' was introduced by Montanari in 1974 [Montanari
1974] and has been widely studied.
Definition 1. An instance of a constraint satisfaction problem consists of:
|A finite set of variables, V;
|A set of values, D, (which may be finite or infinite);
|A finite set of constraints C = {c 1 , c 2 , ..., c q}.
Each constraint c i is a pair (S i , R i ), where S i ⊆ V is a set of variables, called
the constraint scope, and R i is a set of (total) functions from S i to D, called the
constraint relation.
The elements of a constraint relation indicate the allowed combinations of simultaneous
values for the variables in the constraint scope. The number of variables in
the scope of a constraint will be called the 'arity' of the constraint. In particular,
unary constraints specify the allowed values for a single variable, and binary constraints
specify the allowed combinations of values for a pair of variables. There is
a unique 'empty constraint', (∅, ∅), for which the scope and the constraint relation
are both empty.
Note that we are representing constraint relations as sets of functions rather than
the usual representation as sets of tuples. These two representations are clearly
equivalent, since by fixing an ordering for the variables in the scope of a constraint,
we can associate each function with a corresponding tuple of values. However, the
use of the functional representation simplifies some of the definitions below.
Example 1. When the set of values D is the set of real numbers R, then a relation
on some set of variables, say {u, v, w}, is a set of total functions from {u, v, w} to
R. If we fix any ordering on the variables, then the same relation can be
represented as a set of 3-tuples of real numbers.
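For instance, a small hypothetical relation (the specific values below are chosen purely for illustration and are not taken from the paper) can be written in both forms; the sketch uses Python dictionaries for the functional view:

    # A relation over the variables {u, v, w}: a set of total functions to R,
    # here represented as Python dicts mapping variable names to real values.
    functions = [
        {'u': 0.0, 'v': 1.0, 'w': 2.5},
        {'u': 1.0, 'v': 1.0, 'w': 3.0},
    ]

    # Fixing the ordering (u, v, w), the same relation becomes a set of 3-tuples.
    ordering = ('u', 'v', 'w')
    tuples = {tuple(f[x] for x in ordering) for f in functions}
    print(tuples)   # {(0.0, 1.0, 2.5), (1.0, 1.0, 3.0)}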
A solution to a constraint satisfaction problem instance is a function, f, from the
set of variables of that instance to the set of values, such that for each constraint
(S i , R i ), the restriction of f to S i , denoted f| S i , is an element of R i .
In order to simplify the presentation we shall make a number of simplifying assumptions
throughout the paper about the way in which constraint satisfaction
problem instances are specified. First, we shall assume that we have a fixed countable
universe of possible variable names, and that every variable
occurring in a problem instance is a member of this set. Furthermore, we shall
assume that every variable in a problem instance occurs in the scope of some con-
straint. This means that for any problem instance, the set of variables, V , does not
need to be specified explicitly, but is given by the union of the constraint scopes
of that instance. Finally, we shall assume that the set of values, D, for any instance
does not need to be specified explicitly, but will be understood from the
context. With these assumptions, a constraint satisfaction problem instance can be
specified simply by specifying the corresponding set of constraints. Hence, we shall
talk about 'solutions to a set of constraints'. The set of all solutions to the set of
constraints C will be denoted Sol(C).
In order to determine the computational complexity of a constraint satisfaction
problem we need to specify how the constraints are encoded in finite strings of
symbols. We shall assume in all cases that this representation is chosen so that the
complexity of determining whether a constraint allows a given assignment of values
to the variables in its scope is bounded by a polynomial function of the length of
the representation.
Example 2. Constraint relations may be finite or infinite. A constraint with a
finite constraint relation can be represented simply by giving an explicit list of all
the elements in that relation, but constraints with infinite relations clearly cannot.
In both cases it is possible to use a suitable specification language, such as logical
formulas, or linear equations. For example, when the set of possible values for the
variables is {true, false}, representing the Boolean values 'true' and 'false', then
the logical formula 'x 1 ∨ x 2 ∨ ¬x 3 ' can be used to specify the constraint with scope
{x 1 , x 2 , x 3 } and the relation containing exactly those assignments that satisfy the formula.
Similarly, when the set of possible values is the real numbers, R, then a linear equation
can be used to specify a constraint whose scope is the set of variables occurring in the equation and whose relation is the (infinite) set of its solutions.
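As a small illustration, such specifications can be turned into executable membership tests; the particular clause and equation below are illustrative choices of this sketch, not the (omitted) examples from the text:

    # A constraint is a pair (scope, test), where `test` decides membership in
    # the (possibly infinite) constraint relation.
    clause_constraint = (
        ('x1', 'x2', 'x3'),
        lambda a: a['x1'] or a['x2'] or (not a['x3'])          # x1 OR x2 OR (NOT x3)
    )

    linear_constraint = (
        ('x1', 'x2'),
        lambda a: abs(3 * a['x1'] + 2 * a['x2'] - 6.0) < 1e-9  # 3*x1 + 2*x2 = 6
    )

    # Checking an assignment against a constraint takes time polynomial in the
    # size of the constraint's representation, as assumed in the text.
    print(clause_constraint[1]({'x1': False, 'x2': False, 'x3': False}))  # True
    print(linear_constraint[1]({'x1': 2.0, 'x2': 0.0}))                   # True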
Deciding whether or not a given set of constraints has a solution is known to be
NP-hard in general [Montanari 1974; Mackworth 1977]. In this paper we shall consider
how restricting the allowed constraints affects the complexity of this decision
problem. We therefore make the following definition.
Definition 2. For any set of constraints, Γ, CSP(Γ) is defined to be the decision
problem with
Instance: A finite set of constraints C ⊆ Γ.
Question: Does C have a solution?
If there is some algorithm which solves every instance in CSP(Γ) in polynomial
time, then we shall say that CSP(Γ) is 'tractable', and refer to Γ as a tractable set
of constraints.
Example 3. For any set of possible values D, and any pair of variables, x and y,
the binary disequality constraint with scope {x, y} and set of values D is defined
as follows:
x ≠ y denotes the constraint ({x, y}, {f : {x, y} → D | f(x) ≠ f(y)}).
Since we are assuming that we have a fixed universe of possible variable names,
we can consider the set of all possible binary disequality constraints over D, for all
possible choices of a pair of variables. This set will be denoted ≠_D .
For any finite set D, the decision problem CSP(≠_D ) corresponds precisely to
the Graph |D|-Colourability problem [Garey and Johnson 1979]. This problem
is well-known to be tractable when |D| ≤ 2 and NP-complete when |D| ≥ 3.
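The correspondence with graph colouring can be made concrete; the triangle graph and the value set below are illustrative assumptions of this sketch rather than material from the paper:

    # Encode graph 3-colourability as CSP over disequality constraints:
    # one binary disequality constraint per edge.
    D = {0, 1, 2}
    edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]          # a triangle
    constraints = [((u, v), lambda a, u=u, v=v: a[u] != a[v]) for u, v in edges]

    colouring = {'a': 0, 'b': 1, 'c': 2}
    print(all(test(colouring) for _, test in constraints))  # True: a proper colouring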
3. TRACTABLE DISJUNCTIVE CONSTRAINTS
The remainder of the paper focuses on the complexity of constraint satisfaction
problems involving disjunctive constraints.
We first define how individual constraints can be combined disjunctively.
Definition 3. Let c 1 = (S 1 , R 1 ) and c 2 = (S 2 , R 2 ) be two constraints with a
common set of possible values D.
The disjunction of c 1 and c 2 , denoted c 1 ∨ c 2 , is defined as follows:
c 1 ∨ c 2 = (S 1 ∪ S 2 , {f : S 1 ∪ S 2 → D | f| S 1 ∈ R 1 or f| S 2 ∈ R 2 }).
The idea behind this definition is that an assignment satisfies the disjunction of
two constraints if it satisfies either one of them. Note that for any constraint c, the
disjunction of c and the empty constraint, (∅, ∅), gives c.
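Using the same (scope, test) encoding as in the earlier sketch (again an illustrative convenience rather than the paper's formalism), Definition 3 translates directly into code:

    def disjunction(c1, c2):
        """Return c1 OR c2: the scope is the union of the scopes, and an
        assignment is allowed if its restriction satisfies either constraint."""
        (scope1, test1), (scope2, test2) = c1, c2
        scope = tuple(dict.fromkeys(scope1 + scope2))   # union, order-preserving
        def test(assignment):
            return test1({v: assignment[v] for v in scope1}) or \
                   test2({v: assignment[v] for v in scope2})
        return (scope, test)

With this encoding, the empty constraint is ((), lambda a: False), and disjoining any constraint with it returns a constraint equivalent to the original, mirroring the remark above.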
Now we define how a set of disjunctive constraints can be obtained from two
arbitrary sets of constraints over the same set of possible values.
Definition 4. For any two sets of constraints Γ and Δ, with a common set of
possible values, define the set of constraints Γ ∨× Δ as follows:
Γ ∨× Δ = {c 1 ∨ c 2 | c 1 ∈ Γ, c 2 ∈ Δ}.
The set of constraints Γ ∨× Δ (read as 'Γ or-cross Δ') contains the disjunction of each
possible pair of constraints from Γ and Δ.
In many cases of interest Γ and Δ will both contain the empty constraint, (∅, ∅),
and in these cases Γ ∨× Δ ⊇ Γ ∪ Δ. In most cases of this kind Γ ∨× Δ will be much larger
than Γ ∪ Δ, and will therefore allow a much richer class of constraint satisfaction
problems to be expressed.
The next example shows that when tractable sets of constraints are combined
using the disjunction operation defined in Definition 4 the resulting set of disjunctive
constraints may or may not be tractable.
Example 4. Let Γ be the set containing all Boolean constraints which can be
specified by a formula of propositional logic consisting of a single literal (where a
literal is either a variable or a negated variable).
The set of constraints Γ is clearly tractable, as it is straightforward to verify in
linear time whether a collection of simultaneous literals has a solution.
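Concretely, a collection of simultaneous literals is satisfiable exactly when no variable occurs both positively and negatively; a minimal check (an illustration of this observation, not code from the paper) is:

    def literals_consistent(literals):
        # literals: iterable of (variable, is_positive) pairs
        pos = {v for v, s in literals if s}
        neg = {v for v, s in literals if not s}
        return not (pos & neg)      # consistent iff no variable is forced both ways

    print(literals_consistent([('x', True), ('y', False), ('x', True)]))   # True
    print(literals_consistent([('x', True), ('x', False)]))                # False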
Now consider the set of constraints Γ ∨× Γ. This set contains all Boolean constraints
specified by a disjunction of 2 literals. The problem CSP(Γ ∨× Γ) corresponds
to the 2-Satisfiability problem, which is well-known to be tractable [Garey and
Johnson 1979].
Finally, consider the set of constraints (Γ ∨× Γ) ∨× Γ. This set contains all
Boolean constraints specified by a disjunction of 3 literals. The problem CSP((Γ ∨× Γ) ∨× Γ)
corresponds to the 3-Satisfiability problem, which is well-known to be NP-complete
[Garey and Johnson 1979].
In many of the examples below we shall be concerned with constraints that are
specified by disjunctions of an arbitrary number of constraints from a given set. To
provide a uniform notation for such constraints we make the following definition.
Definition 5. For any set of constraints, Δ, define the set Δ* as follows:
Δ* = ∪_{i≥0} Δ^{∨i},
where Δ^{∨0} = {(∅, ∅)} and Δ^{∨(i+1)} = Δ^{∨i} ∨× Δ, for each i ≥ 0.
The final piece of machinery that we shall need to deal with disjunctive sets of
constraints is a uniform way to recover the separate components in the disjunction.
Definition 6. For any disjunctive set of constraints Γ ∨× Δ, we define two operations,
π 1 and π 2 , such that for any c ∈ Γ ∨× Δ, c = π 1 (c) ∨ π 2 (c), with π 1 (c) ∈ Γ and π 2 (c) ∈ Δ.
We shall assume that the constraints in Γ ∨× Δ are represented in such a way that π 1
and π 2 can be computed in linear time.
In the following sections we identify certain conditions on sets of constraints Γ
and Δ which are sufficient to ensure that Γ ∨× Δ is tractable.
3.1 The guaranteed satisfaction property
The first condition we identify is rather trivial, but it is included here for completeness,
and because it is sufficient to show how two of the six tractable classes
of Boolean constraints identified by Schaefer in [Schaefer 1978] can be constructed
from simpler classes.
Definition 7. A set of constraints, Γ, has the guaranteed satisfaction property if
every finite C ⊆ Γ has a solution.
Theorem 1. For any sets of constraints Γ and Δ, if Γ has the guaranteed satisfaction
property, then CSP(Γ ∨× Δ) also has the guaranteed satisfaction property, and
is therefore tractable.
Proof. Every constraint in Γ ∨× Δ is of the form c ∨ d for some c ∈ Γ and some d ∈
Δ. Hence if Γ has the guaranteed satisfaction property, then any problem instance
in CSP(Γ ∨× Δ) has a solution satisfying the first disjunct of each constraint.
Example 5. Recall the set of unary Boolean constraints, Γ, defined in Example 4,
which contains all constraints specified by a single literal.
Let Δ be the subset of Γ containing only the constraints specified by a single
negative literal.
It is clear that Δ has the guaranteed satisfaction property, since any problem
instance in CSP(Δ) has the solution which assigns the value false to all variables.
Hence, by Theorem 1, Δ ∨× Γ* has the guaranteed satisfaction property and CSP(Δ ∨× Γ*)
is tractable. This tractable set contains all constraints specified by Boolean clauses
containing at least one negative literal.
The constraint relations defined by conjunctions of clauses of this form are precisely
the elements of the first class of tractable Boolean relations identified by Schaefer
in [Schaefer 1978] (which he calls '0-valid' relations).
A symmetric argument shows that if Δ contains only the constraints specified
by a single positive literal, then CSP(Δ ∨× Γ*) is again tractable. This tractable set
contains all constraints specified by Boolean clauses containing at least one positive
literal. The constraint relations defined by conjunctions of clauses of this form are
precisely the elements of the second class of tractable Boolean relations identified
by Schaefer in [Schaefer 1978] (which he calls '1-valid' relations).
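The guaranteed-satisfaction argument used in this example is easy to check mechanically: the all-false assignment satisfies any clause that contains a negative literal. A small illustration (the clauses below are hypothetical, chosen only to demonstrate the check):

    # Clauses as lists of (variable, is_positive) literals; each clause below
    # contains at least one negative literal.
    clauses = [
        [('x1', True), ('x2', False)],                    # x1 OR NOT x2
        [('x1', False), ('x3', True), ('x4', False)],     # NOT x1 OR x3 OR NOT x4
    ]

    all_false = lambda var: False
    # A literal (v, sign) is satisfied by an assignment a exactly when a(v) == sign.
    satisfied = all(any(all_false(v) == sign for v, sign in clause) for clause in clauses)
    print(satisfied)   # True: the all-false assignment satisfies every such clause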
Example 6. Let Γ be the set of all constraints specified by a linear disequality
over the real numbers, that is, an expression of the form
a 1 x 1 + a 2 x 2 + ... + a n x n ≠ b,
where the a i and b are (real-valued) constants.
It is clear that Γ has the guaranteed satisfaction property, since any problem
instance in CSP(Γ) only rules out a finite number of hyperplanes from R n .
Hence, by Theorem 1, for any set of constraints, Δ, over the real numbers, Γ ∨× Δ
has the guaranteed satisfaction property and CSP(Γ ∨× Δ) is tractable.
For example, let Δ be the set containing all the constraints in Γ together with
all constraints specified by a single (weak) linear inequality, that is, an expression
of the form
a 1 x 1 + a 2 x 2 + ... + a n x n ≤ b,
where the a i and b are (real-valued) constants. In this case Γ ∨× Δ contains constraints
such as (a 1 x 1 + ... + a n x n ≠ b) ∨ (a' 1 x 1 + ... + a' n x n ≤ b').
This example should be compared with the similar, but much more significant,
tractable class defined in Example 13 below.
3.2 The independence property
In this section we identify a rather more subtle condition which can be used to
construct tractable disjunctive constraints. We first need the following definition:
Definition 8. For any sets of constraints Γ and Δ, define CSP≤k(Γ ∪ Δ) to be
the subproblem of CSP(Γ ∪ Δ) consisting of all instances containing at most k
constraints which are members of Δ.
Using this definition, we now define what it means for one set of constraints to be
'k-independent' with respect to another.
Definition 9. For any sets of constraints Γ and Δ, we say that Δ is k-independent
with respect to Γ if the following condition holds: any set of constraints C in
CSP(Γ ∪ Δ) has a solution provided every subset of C belonging to CSP≤k(Γ ∪ Δ)
has a solution.
The intuitive meaning of this definition is that the satisfiability of any set of constraints
chosen from the set Δ can be determined by considering those constraints
k at a time, even in the presence of arbitrary additional constraints from Γ. In the
examples below we shall demonstrate that several important constraint types have
the 1-independence property.
A more restricted notion of 1-independence has been widely studied in the literature
of constraint programming, where it has been called simply 'independence'
(see [Lassez and McAloon 1989; Lassez and McAloon 1992; Lassez and McAloon
1991], for example). The earlier property applies to an individual constraint class
containing positive constraints and negative constraints, and has been used in the
development of consistency checking algorithms and canonical forms [Lassez and
McAloon 1991; Lassez and McAloon 1992]. However, we will show below that the
more general notion of 1-independence of one class with respect to another, introduced
here, can be used to prove the tractability of a wide variety of disjunctive
constraint classes for which the earlier notion of independence does not hold.
Consider the algorithm shown in Figure 1 for a function Ind-Solvable which
determines whether or not a finite set of constraints C ⊆ Γ ∨× Δ has a solution.
Ind-Solvable(C: finite subset of Γ ∨× Δ)
S := ∅
repeat
X := {c ∈ C | π 1 (c) ∉ S and S ∪ {π 2 (c)} has no solution}
if X = ∅ then
return true
else
S := S ∪ {π 1 (c) | c ∈ X}
endif
until S has no solution
return false
Fig. 1. An algorithm for the function Ind-Solvable
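A direct Python transcription of Ind-Solvable, as reconstructed above, is sketched below. The helpers pi1, pi2 and has_solution are assumptions of this sketch: pi1/pi2 return the two disjuncts of a constraint, has_solution is any polynomial-time decision procedure for CSP≤1(Γ ∪ Δ), and constraints are assumed to be hashable.

    def ind_solvable(C, pi1, pi2, has_solution):
        """Decide satisfiability of a finite set C of disjunctive constraints,
        assuming the Delta-parts are 1-independent with respect to Gamma."""
        S = set()
        while True:
            # Constraints whose Gamma-disjunct is not yet forced and whose
            # Delta-disjunct is incompatible with the forced constraints S.
            X = {c for c in C if pi1(c) not in S and not has_solution(S | {pi2(c)})}
            if not X:
                return True           # every remaining Delta-disjunct is compatible with S
            S |= {pi1(c) for c in X}  # the Gamma-disjuncts of X must hold
            if not has_solution(S):
                return False          # even the forced Gamma-constraints are unsatisfiable

Each iteration either returns or adds at least one new constraint to S, so the loop runs at most |C| times, matching the analysis in Theorem 2.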
The next result shows that Ind-Solvable correctly determines whether or not
a finite set of constraints chosen from Γ ∨× Δ has a solution in all cases where Δ is
1-independent with respect to Γ.
Lemma 1. If C is a finite subset of Γ ∨× Δ, and Δ is 1-independent with respect to
Γ, then the function Ind-Solvable defined in Figure 1 correctly determines whether
or not C has a solution.
Proof. The algorithm shown in Figure 1 clearly terminates, because C is finite.
Assume that C is a finite subset of Γ ∨× Δ for some Γ and Δ such that Δ is
1-independent with respect to Γ. We first prove by induction that after every
assignment to S, all of the constraints in S must be satisfied in order to satisfy the
original set of constraints C.
This is vacuously true after the first assignment to S, because S is then equal to
the empty set.
At each subsequent assignment, S is augmented with the constraints obtained
by applying π 1 to the constraints in X. Now, the elements of X are constraints c
of C such that π 2 (c) is incompatible with S. Hence the only way such a c can be
satisfied together with S is to satisfy the other disjunct of c, that is, the constraint
given by π 1 (c). Hence, by the inductive hypothesis, each constraint added to S
must be satisfied in order to satisfy the original set of constraints C, and the result
follows, by induction.
This result establishes that when Ind-Solvable(C) returns false, C has no
solutions.
Conversely, when Ind-Solvable(C) returns true, then we know that X is empty.
This implies that for each constraint c in C either π 1 (c) belongs to S, or else π 2 (c)
is compatible with the constraints in S. Now, using the fact that Δ is 1-independent
with respect to Γ, we conclude that S ∪ {π 2 (c) | c ∈ C, π 1 (c) ∉ S} has a solution,
and hence C has a solution.
By analysing the complexity of the algorithm in Figure 1 we now establish the
following result:
Theorem 2. For any two sets of constraints Γ and Δ, if CSP≤1(Γ ∪ Δ) is
tractable, and Δ is 1-independent with respect to Γ, then CSP(Γ ∨× Δ) is tractable.
Proof. By Lemma 1, it is sufficient to show that the algorithm in Figure 1 runs
in polynomial time. We can bound the time complexity of this algorithm as follows.
First note that |S| increases on each iteration of the repeat loop, but |S| is
bounded by |C| since each constraint in S arises from an element of C. Hence there
can be at most |C| iterations of this loop.
Now let l(C) be the length of the string specifying C. During each iteration of
the loop the algorithm determines whether or not there is a solution to S ∪ {π 2 (c)} for
each c remaining in C. Since this set of constraints is a member of CSP≤1(Γ ∪ Δ),
which is assumed to be tractable, these calculations can each be carried out in
polynomial time in the size of their input. Note also that the length of this input is
less than or equal to l(C). Hence, the time complexity of each of these calculations
is bounded by p(l(C)), for some polynomial p.
At the end of each iteration of the loop the algorithm determines whether or not
there is a solution to S. Since S is also an element of CSP≤1(Γ ∪ Δ), and the
length of the specification of S is less than or equal to l(C), this calculation can
also be carried out in at most p(l(C)) time.
Hence, the total time required to complete the algorithm is O(|C|(|C|+1)p(l(C))),
which is polynomial in the size of the input.
Finally, we show that this result can be extended to arbitrary disjunctions of constraints
in Δ.
Lemma 2. For any sets of constraints Γ and Δ, if Δ is 1-independent with respect to Γ,
then Δ* is also 1-independent with respect to Γ.
Proof. Assume that Δ is 1-independent with respect to Γ and let C be an
arbitrary finite subset of Γ ∪ Δ*. We need to show that if every subset of C which
belongs to CSP≤1(Γ ∪ Δ*) has a solution, then so does C.
Let C′ be a maximal subset of C belonging to CSP≤1(Γ ∪ Δ*) and let s be a
solution to C′. Since C′ is maximal, it contains the set C_Γ, consisting of all the
constraints in C which are elements of Γ \ Δ*, so s is a solution to C_Γ. If C′ also
contains a constraint d ∈ Δ*, then there must be at least one constraint d′ in Δ
such that s is a solution to d′, by the definition of Δ*. Hence, we can replace d
with a (possibly) more restrictive constraint d′ ∈ Δ, without losing the solution s.
If we carry out this replacement for each possible choice of C′, then we have a set of constraints
in CSP(Γ ∪ Δ) such that each subset belonging to CSP≤1(Γ ∪ Δ) has a solution.
Now, by the fact that Δ is 1-independent with respect to Γ, it follows that this
modified set of constraints has a solution, and hence the original set of constraints
C has a solution, which gives the result.
Corollary 1. For any two sets of constraints Γ and Δ, if CSP≤1(Γ ∪ Δ) is
tractable, and Δ is 1-independent with respect to Γ, then CSP(Γ ∨ Δ*) is tractable.
Proof. If CSP≤1(Γ ∪ Δ) is tractable, then so is CSP≤1(Γ ∪ Δ*), because
each instance of CSP≤1(Γ ∪ Δ*) only contains at most one disjunctive
constraint belonging to Δ*, and each disjunct of this constraint can be considered
separately in polynomial time.
Furthermore, if Δ is 1-independent with respect to Γ, then by Lemma 2 Δ* is
also 1-independent with respect to Γ.
Hence, Theorem 2 can be applied to Γ and Δ*, giving the result.
Example 7. Recall the set of unary Boolean constraints defined in Example 4,
which contains all constraints specified by a single literal.
Let Γ be the subset containing only the constraints specified by a single
positive literal, together with the empty constraint, and let Δ be the subset containing only
the constraints specified by a single negative literal. Note that the set of constraints
Δ*, constructed according to Definition 5, is equal to the set of constraints specified
by arbitrary finite disjunctions of negative literals (including the empty disjunction).
Now it is easily shown that Δ is 1-independent with respect to Γ (since any
collection of positive and negative literals has a solution if and only if all subsets
containing at most one negative literal have a solution). Also, CSP≤1(Γ ∪ Δ) is
tractable, since each instance is specified by a conjunction of zero or more positive
literals together with at most one negative literal. Hence, by Corollary 1, we
conclude that CSP(Γ ∨ Δ*) is tractable. But Γ ∨ Δ* is the set of constraints specified by
a disjunction of literals containing at most one positive literal; examples of such
clauses include x₁, (¬x₁ ∨ ¬x₂ ∨ x₃), and (¬x₂ ∨ ¬x₄).
It is easy to see that CSP(Γ ∨ Δ*) corresponds exactly to the Horn-Clause Satisfiability
problem [Garey and Johnson 1979]. The constraint relations defined
by conjunctions of clauses of this form are precisely the elements of the third class
of tractable Boolean relations identified by Schaefer in [Schaefer 1978] (which he
calls 'weakly negative' relations).
By a symmetric argument, it follows that the set of constraints specified by a
disjunction of literals containing at most one negative literal is also a tractable set
of constraints. The constraint relations defined by conjunctions of clauses of this
form are precisely the elements of the fourth class of tractable Boolean relations
identified by Schaefer in [Schaefer 1978] (which he calls 'weakly positive' relations).
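Returning to the Horn case, the Ind-Solvable loop specialises to positive unit propagation: S collects the positive literals that are forced, and a clause's negative part is incompatible with S exactly when all of its negated variables are already in S. The following self-contained sketch is an illustration of that specialisation (not code from the paper); the clause representation, a pair (pos, negs), is an assumption made for the example.

def horn_sat(clauses):
    """clauses: list of (pos, negs) where pos is a variable name or None and
    negs is a set of variable names; the clause is pos OR NOT v1 OR ... OR NOT vk."""
    forced = set()                           # the set S of forced positive literals
    while True:
        X = [(pos, negs) for (pos, negs) in clauses
             if pos not in forced and negs <= forced]
        if not X:
            return True                      # setting exactly `forced` to true satisfies C
        if any(pos is None for (pos, negs) in X):
            return False                     # a purely negative clause has all its variables forced
        forced |= {pos for (pos, negs) in X}

# example: x, (y or not x), (not x or not y) is unsatisfiable
print(horn_sat([('x', set()), ('y', {'x'}), (None, {'x', 'y'})]))  # False
print(horn_sat([('x', set()), ('y', {'x'})]))                      # True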
We have shown that 1-independence can be used to establish tractability, and further
examples are given in Section 4 below. To conclude this section we show that,
if a set of constraints Δ is k-independent with respect to a set Γ for some value of k
larger than one, then this may not be sufficient to ensure tractability of Γ ∨ Δ.
Example 8. In this example we construct two sets of constraints, Γ and Δ, such
that CSP≤2(Γ ∪ Δ) is tractable, and Δ is 2-independent with respect to Γ, but
CSP(Γ ∨ Δ) is NP-complete.
Recall the set of unary Boolean constraints defined in Example 4, which
contains all constraints specified by a single literal.
Take Δ to be this set of unary constraints, and take Γ to be the set of constraints
specified by Boolean clauses of length two. With these definitions, CSP(Γ ∪ Δ) is equivalent to the standard 2-Satisfiability
problem, and hence CSP≤2(Γ ∪ Δ) is tractable. Furthermore, the 2-Satisfiability
problem has the property that 'path-consistency' guarantees global consistency [Jeavons
et al. 1998], which implies that any minimal insoluble subset of clauses in an
instance of 2-Satisfiability contains at most two single literals, and hence Δ is
2-independent with respect to Γ.
However, as discussed in Example 4, CSP(Γ ∨ Δ) corresponds to the 3-Satisfiability
problem, and is therefore NP-complete.
3.3 The Krom property
In this section we identify a final sufficient condition for constructing tractable
disjunctive constraints.
Definition 10. A set of constraints, Γ, has the Krom property if it is 2-independent
with respect to the empty set.
Note that Γ has the Krom property if and only if for every finite C ⊆ Γ having no
solution, there exists a pair of (not necessarily distinct) constraints c′, c″ ∈ C such
that {c′, c″} has no solution.
The name 'Krom property' is chosen to emphasise the close connection with
Krom clauses (i.e., Boolean clauses of length 2) [Denenberg and Lewis 1984],
which will be demonstrated later.
Consider the algorithm shown in Figure 2, for a function Krom-Solvable, which
determines whether or not a finite set of constraints C ⊆ Γ ∨ Γ has a solution.
The next result shows that Krom-Solvable correctly determines whether or not
a finite set of constraints chosen from Γ ∨ Γ has a solution in all cases when Γ has
the Krom property.
Lemma 3. If C is a finite subset of Γ ∨ Γ, and Γ has the Krom property, then
the function Krom-Solvable defined in Figure 2 correctly determines whether or
not C has a solution.
Krom-Solvable(C: finite subset of Γ ∨ Γ)
P := { π₁(c) | c ∈ C } ∪ { π₂(c) | c ∈ C }
Define a set of Boolean variables {q_c | c ∈ P}
A := { (¬q_c′ ∨ ¬q_c″) | c′, c″ ∈ P and {c′, c″} has no solution }
B := { (q_π₁(c) ∨ q_π₂(c)) | c ∈ C }
if A ∪ B is satisfiable
then return true
else return false
Fig. 2. An algorithm for a function Krom-Solvable
Proof. We show that the function Krom-Solvable defined in Figure 2 returns
true when applied to C if and only if C has a solution.
only-if: Assume that Krom-Solvable returns true. This implies that there
exists a satisfying truth assignment, θ, for A ∪ B. Define the set of constraints
C′ = { c ∈ P | θ(q_c) = true }.
We first show that C′ has a solution. Since Γ has the Krom property, C′ has no
solution only if there exist c′, c″ ∈ C′ such that {c′, c″} has no solution. However,
this cannot happen because θ satisfies the formulae in the set A.
Now, let the function f be a solution of C′. For each constraint c ∈ C we know
that at least one of π₁(c) and π₂(c) is a member of C′, because θ satisfies the
formulae in B. Since f is a solution of C′, it follows that f can be extended (if
necessary) to a solution of C (by assigning an arbitrary value to any variable not
constrained by C′).
if: Assume that C has a solution and let f be any such solution. Define the
truth assignment θ : {q_c | c ∈ P} → {true, false} by setting θ(q_c) = true if and only if
f satisfies the constraint c.
We show that θ is a satisfying truth assignment of A ∪ B by considering the elements
of A and B in turn.
(1) For each formula (¬q_c′ ∨ ¬q_c″) ∈ A, we know that {c′, c″} has no solutions.
Hence it cannot be the case that θ(q_c′) = θ(q_c″) = true, which means that
(¬q_c′ ∨ ¬q_c″) is satisfied by θ.
(2) For each formula (q_c′ ∨ q_c″) ∈ B we know that there is a constraint c ∈ C such
that π₁(c) = c′ and π₂(c) = c″. Since f is a solution to C, f satisfies either c′,
c″, or both. Hence, θ assigns true to at least one of q_c′, q_c″, which means that
(q_c′ ∨ q_c″) is satisfied by θ.
It follows that A ∪ B is satisfiable, so Krom-Solvable returns true.
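The Python sketch below (an illustration, not code from the paper) mirrors Krom-Solvable: it builds the Krom formula A ∪ B over the variables q_c and passes it to a small 2-SAT routine based on the implication graph and strongly connected components, in the spirit of the linear-time algorithm of Aspvall et al. cited in the next proof. The projections pi1, pi2 and the pairwise oracle pair_has_solution are assumed inputs.

from itertools import combinations_with_replacement

def two_sat(n_vars, clauses):
    """clauses: list of pairs of literals; a literal is (index, is_positive).
    Returns True iff satisfiable, using Kosaraju SCC on the implication graph."""
    N = 2 * n_vars
    node = lambda i, pos: 2 * i + (1 if pos else 0)
    g = [[] for _ in range(N)]; gr = [[] for _ in range(N)]
    for (i, pi), (j, pj) in clauses:
        # (a or b)  ==>  (not a -> b) and (not b -> a)
        g[node(i, not pi)].append(node(j, pj)); gr[node(j, pj)].append(node(i, not pi))
        g[node(j, not pj)].append(node(i, pi)); gr[node(i, pi)].append(node(j, not pj))
    order, seen = [], [False] * N
    for s in range(N):                       # first pass: record finish order
        if seen[s]:
            continue
        stack, seen[s] = [(s, 0)], True
        while stack:
            v, k = stack.pop()
            if k < len(g[v]):
                stack.append((v, k + 1))
                w = g[v][k]
                if not seen[w]:
                    seen[w] = True; stack.append((w, 0))
            else:
                order.append(v)
    comp, c = [-1] * N, 0
    for s in reversed(order):                # second pass: label components
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], c
        while stack:
            v = stack.pop()
            for w in gr[v]:
                if comp[w] == -1:
                    comp[w] = c; stack.append(w)
        c += 1
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n_vars))

def krom_solvable(C, pi1, pi2, pair_has_solution):
    P = list({d for c in C for d in (pi1(c), pi2(c))})
    idx = {d: i for i, d in enumerate(P)}
    A = [((idx[c1], False), (idx[c2], False))
         for c1, c2 in combinations_with_replacement(P, 2)
         if not pair_has_solution(c1, c2)]
    B = [((idx[pi1(c)], True), (idx[pi2(c)], True)) for c in C]
    return two_sat(len(P), A + B)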
By analysing the complexity of the algorithm in Figure 2 we now establish the
following result:
Theorem 3. For any set of constraints Γ, if CSP(Γ) is tractable, and Γ has
the Krom property, then CSP(Γ ∨ Γ) is tractable.
Proof. By Lemma 3, it is sufficient to show that the algorithm in Figure 2 runs
in polynomial time. We can bound the time complexity of this algorithm as follows.
Let C be a finite subset of Γ ∨ Γ and let l(C) be the length of the string specifying
C.
Since computing π₁ and π₂ takes linear time, the set P can be computed in
O(l(C)) time. It contains at most 2|C| elements.
To compute the set A, the algorithm must determine whether or not there is
a solution to {c′, c″} for every pair of constraints c′, c″ ∈ P. Since each such pair is an
instance of CSP(Γ), which is assumed to be tractable, these calculations can each
be carried out in polynomial time in the size of their input. Hence the time required
to compute the set A is O(|C|² p(l(C))), for some polynomial p. The set A contains
at most |C|(2|C| − 1) elements.
The set B can clearly be computed in time O(|C| l(C)) and contains |C| elements.
Finally, the algorithm must decide the satisfiability of a set of Krom clauses
containing at most |C|(2|C| − 1) + |C| elements. By using the linear time algorithm
for this problem given in [Aspvall et al. 1979], this step can be carried out in O(|C|²)
time.
Hence, the total time required by the algorithm is O(|C|² p(l(C))), which is
polynomial in the size of the input.
The proof of Theorem 3 uses the well-known result about the tractability of solving
Krom clauses. The next example shows that these two results are equivalent.
Example 9. Recall the set of unary Boolean constraints, Γ, defined in Example 4,
which contains all constraints specified by a single literal.
It is easily shown that Γ has the Krom property (since any collection of positive
and negative literals has a solution unless it contains two different literals involving
the same variable). Hence, by Theorem 3, CSP(Γ ∨ Γ) is tractable.
However, the set Γ ∨ Γ contains all constraints specified by Boolean clauses containing
at most two literals, that is, all Krom clauses.
The constraint relations defined by conjunctions of clauses of this form are precisely
the elements of the fifth class of tractable Boolean relations identified by
Schaefer in [Schaefer 1978] (which he calls 'bijunctive' relations).
To conclude this section we show that if a set of constraints Γ has a higher level
of k-independence with respect to the empty set, then this may not be sufficient to
ensure tractability of Γ ∨ Γ.
Example 10. In this example we construct a tractable set of relations Γ such that
Γ is 3-independent with respect to the empty set, but we show that CSP(Γ ∨ Γ) is
NP-complete.
For all possible variables x and y, we define the unary constraints zero(x), one(x)
and the binary constraint ≠_N(x, y) as follows:
(1) zero(x) denotes the constraint with scope ⟨x⟩ which is satisfied only when x is assigned the value 0.
(2) one(x) denotes the constraint with scope ⟨x⟩ which is satisfied only when x is assigned the value 1.
(3) ≠_N(x, y) denotes the constraint with scope ⟨x, y⟩ which is satisfied when x and y are assigned distinct natural numbers.
Define Γ to be the set containing all possible constraints of these three types.
(Note that Γ is well-defined since we are assuming that we have a fixed universe of
possible variable names.)
Let C be a finite subset of Γ. We will show that if C has no solution, then (at
least) one of the following constraint sets is a subset of C, for some variables x and
y:
S1 = {zero(x), one(x)},
S2 = {≠_N(x, x)},
S3 = {zero(x), zero(y), ≠_N(x, y)},
S4 = {one(x), one(y), ≠_N(x, y)}.
To establish this fact, assume that none of the above stated sets of constraints are
subsets of C. We will show that in this case it is always possible to construct a
solution.
Let the set of variables appearing in the constraints of C be {x₁, x₂, . . . , x_k}.
Define the function f by setting f(x_i) = 0 if zero(x_i) ∈ C, f(x_i) = 1 if one(x_i) ∈ C,
and f(x_i) = i + 1 otherwise; this is well-defined because S1 is not a subset of C.
For each constraint c ∈ C we can reason as follows:
|Since S1 is not a subset of C, if c is the constraint zero(x_i) or one(x_i), then c is satisfied by the definition of f.
|Since S2 is not a subset of C, if c is the constraint ≠_N(x_i, x_j), then x_i and x_j are distinct variables.
|Since neither S3 nor S4 is a subset of C, if c is the constraint ≠_N(x_i, x_j), then f(x_i) ≠ f(x_j): the two variables cannot both be forced to 0, cannot both be forced to 1, and otherwise receive distinct values by construction.
Hence, in all cases c is satisfied by f, so f is a solution to C.
Since none of the sets S1, S2, S3, S4 contains more than 3 elements, we have
established that Γ is 3-independent with respect to the empty set, and that Γ is
tractable.
To establish the NP-completeness of CSP(Γ ∨ Γ), we construct a polynomial time
reduction from the NP-complete problem 4-Colourability [Garey and Johnson
1979], as follows.
Let G = (V, E) be an arbitrary graph and construct an instance of CSP(Γ ∨ Γ)
as follows: for each v ∈ V, introduce two variables v′ and v″ together with the
constraints (zero(v′) ∨ one(v′)) and (zero(v″) ∨ one(v″)).
For each edge (v, w) ∈ E, introduce the constraint (≠_N(v′, w′) ∨ ≠_N(v″, w″)).
This transformation can obviously be carried out in polynomial time and the resulting
set of constraints is a subset of Γ ∨ Γ.
To see that the resulting instance has a solution if and only if the graph G is
4-colourable, identify colour 1 with the pair of values (0, 0) for (v′, v″), colour 2 with (0, 1),
and so on. For every pair of adjacent nodes v, w in G, the constraint imposed on
the corresponding variables v′, v″, w′, w″ ensures that v and w must be assigned
different colours.
4. APPLICATIONS
In this section we will use the results established above to demonstrate that many
known tractable sets of constraints can be obtained by combining simpler tractable
sets of constraints using the disjunction operation defined in Definition 4. We will
also describe some new tractable sets of constraints which have not previously been
identified.
Example 11. [Max-closed constraints]
The class of constraints known as 'max-closed' constraints was introduced in [Jeav-
ons and Cooper 1995] and shown to be tractable. This class of constraints has been
used in the analysis and development of a number of industrial scheduling tools [Le-
saint et al. 1998; Purvis and Jeavons 1999].
Max-closed constraints are defined in [Jeavons and Cooper 1995] for arbitrary
finite sets of values which are totally ordered. This class of constraints includes all
of the 'basic constraints' over the natural numbers in the constraint programming
language CHIP [van Hentenryck et al. 1992]. The following are examples of
max-closed constraints when the set of possible values is any finite set of natural numbers:
In this example we will show that the tractability of max-closed constraints is a
simple consequence of Corollary 1. Furthermore, by using Corollary 1 we are able
to generalise this result to obtain tractable constraints over infinite sets of values.
Max-closed constraints were originally defined in terms of an algebraic closure
property on the constraint relations [Jeavons and Cooper 1995]. However, it is
shown in [Jeavons and Cooper 1995] that they can also be characterised as those
constraints which can be specified by a conjunction of disjunctions of inequalities
of the following form:
(x₁ > a₁) ∨ (x₂ > a₂) ∨ · · · ∨ (x_{k−1} > a_{k−1}) ∨ (x_k < a_k).
In this expression the x_i are variables and the a_i are constants.
To apply the results of Section 3.2, let the set of possible values, D, be any totally
ordered set. Define Γ to be the set of all constraints specified by a single inequality
of the form x_i < a_i, for some a_i ∈ D, together with the empty constraint. Define Δ
to be the set of all constraints specified by a single inequality of the form x_i > a_i, for
some a_i ∈ D. Note that the set of constraints Δ*, constructed as in Definition 5, is
equal to the set of constraints specified by arbitrary finite disjunctions of inequalities
of the form x_i > a_i.
It is easily shown that Δ is 1-independent with respect to Γ. Also, CSP≤1(Γ ∪ Δ)
is tractable, since each instance consists of a conjunction of upper bounds for individual
variables together with at most one lower bound. Hence, by Corollary 1,
CSP(Γ ∨ Δ*) is tractable. By the result quoted above, this establishes that max-closed
constraints are tractable.
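For this Γ and Δ the oracle required by Ind-Solvable is elementary. The sketch below is an illustration only (over the integers, not the authors' code): it checks a conjunction of strict upper bounds together with at most one strict lower bound.

def bounds_consistent(upper, lower=None):
    """upper: dict mapping a variable to its tightest strict upper bound (x < upper[x]);
    lower: optional pair (variable, bound) meaning x > bound.
    Returns True iff the conjunction has an integer solution."""
    if lower is None:
        return True                   # upper bounds alone are always satisfiable
    x, lo = lower
    if x not in upper:
        return True                   # the bounded variable has no upper bound
    return lo + 1 < upper[x]          # need an integer strictly between lo and upper[x]

# x1 < 5 and x2 < 3 together with x1 > 3 is satisfiable (x1 = 4), but with x1 > 4 it is not.
print(bounds_consistent({'x1': 5, 'x2': 3}, ('x1', 3)))   # True
print(bounds_consistent({'x1': 5, 'x2': 3}, ('x1', 4)))   # False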
Unlike the arguments used previously to establish that max-closed constraints
are tractable [Jeavons and Cooper 1995; Jeavons et al. 1995], the argument above
can still be applied when the set of values D is infinite.
Example 12. [Connected row-convex constraints]
The class of binary constraints known as 'connected row-convex' constraints was
introduced in [Deville et al. 1997] and shown to be tractable. This class properly
includes the 'monotone' relations, identified and shown to be tractable by Montanari
in [Montanari 1974].
In this example we will show that the tractability of connected row-convex constraints
is a simple consequence of Theorem 3. Furthermore, by using Theorem 3
we are able to generalise this result to obtain tractable constraints over infinite sets
of values.
Let the set of possible values D be the ordered set {d₁, d₂, . . . , d_m}.
The definition of connected row-convex constraints given in [Deville
et al. 1997] uses a standard matrix representation for binary relations: the binary
relation R over D is represented by the m × m 0-1 matrix M, by setting M_{ij} = 1
if the relation contains the pair ⟨d_i, d_j⟩, and M_{ij} = 0 otherwise.
A relation is said to be connected row-convex if the following property holds:
the pattern of 1's in the matrix representation (after removing rows and columns
containing only 0's) is connected along each row, along each column, and forms a
connected 2-dimensional region (where some of the connections may be diagonal).
Simple examples of connected row-convex relations include the equality relation on D and the order relation x ≤ y on D.
An alternative characterisation of this class of constraints, in terms of an algebraic
closure property was given in [Jeavons et al. 1998].
Here we obtain another alternative characterisation by noting that the corresponding
matrices have a very restricted structure. If we eliminate all rows and
columns consisting entirely of zeros, and then consider any remaining zero in the
matrix, all of the ones in the same row as the chosen zero must lie on one side of it
(because of the connectedness condition on the row). Similarly, all of the ones in
the same column must lie on one side of the chosen zero. Hence there is a complete
path of zeros from the chosen zero to the edge of the matrix along both the row
and column in one direction. But this means there must be a complete rectangular
sub-matrix of zeros extending from the chosen zero to one corner of the matrix
(because of the connectedness condition).
This implies that the whole matrix can be obtained as the intersection (conjunc-
tion) of 0-1 matrices that contain all ones except for a submatrix of zeros in one
corner (simply take one such matrix, obtained as above, for each zero in the matrix
to be constructed).
There are four different forms of such matrices, depending on which corner sub-
matrix is zero, and they correspond to constraints expressed by disjunctive expressions
of the four following forms:
(x ≤ d_i) ∨ (y ≤ d_j), (x ≤ d_i) ∨ (y ≥ d_j), (x ≥ d_i) ∨ (y ≤ d_j), (x ≥ d_i) ∨ (y ≥ d_j).
In these expressions x and y are variables and d_i, d_j are constants.
Finally, we note that a row or column consisting entirely of zeros corresponds to
a constraint of the form (x ≤ d₁) ∨ (x ≥ d₂), for an appropriate choice of d₁ and d₂.
Hence, any connected row-convex constraint is equivalent to a conjunction of
expressions of these forms.
To apply the results of Section 3.3, define Γ to be the set of all unary constraints
specified by a single inequality of the form x_i ≤ d_i or x_i ≥ d_i, for some d_i ∈ D.
It is easily shown that Γ has the Krom property and CSP(Γ) is tractable, since
each instance consists of a conjunction of upper and lower bounds for individual
variables. Hence, by Theorem 3, CSP(Γ ∨ Γ) is tractable. By the alternative characterisation
described above, this establishes that connected row-convex constraints are
tractable.
Unlike the arguments used previously to establish that connected row-convex
constraints are tractable [Deville et al. 1997; Jeavons et al. 1998], the argument
above can still be applied when the set of values D is infinite.
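The characterisation above can be tested mechanically. The sketch below is an illustration (with the matrix taken as a list of 0/1 rows, an assumed representation): after discarding all-zero rows and columns, the ones in every row and column must be contiguous and form one region when diagonal adjacency is allowed.

def is_connected_row_convex(matrix):
    # drop rows and columns that contain only zeros
    rows = [i for i, r in enumerate(matrix) if any(r)]
    cols = [j for j in range(len(matrix[0])) if any(r[j] for r in matrix)]
    m = [[matrix[i][j] for j in cols] for i in rows]
    if not m:
        return True

    def contiguous(line):
        ones = [k for k, v in enumerate(line) if v]
        return not ones or ones[-1] - ones[0] + 1 == len(ones)

    if not all(contiguous(r) for r in m):
        return False
    if not all(contiguous([r[j] for r in m]) for j in range(len(m[0]))):
        return False

    # the ones must form a single region, diagonal connections allowed
    cells = {(i, j) for i, r in enumerate(m) for j, v in enumerate(r) if v}
    start = next(iter(cells))
    seen, stack = {start}, [start]
    while stack:
        i, j = stack.pop()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                nb = (i + di, j + dj)
                if nb in cells and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
    return seen == cells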
Example 13. [Linear Horn constraints]
The class of constraints over the real numbers known as 'linear Horn' constraints
was introduced in [Jonsson and Backstrom 1998; Koubarakis 1996] and shown to
be tractable.
A linear Horn constraint is specified by a disjunction of weak linear inequalities
and linear disequalities where the number of inequalities does not exceed one. The
following are examples of linear Horn constraints:
Linear Horn constraints form an important class of linear constraints with explicit
connections to temporal reasoning [Jonsson and Backstrom 1998]. In particular, the
class of linear Horn constraints properly includes the point algebra of [Vilain et al.
1989], the (quantitative) temporal constraints of [Koubarakis 1992; Koubarakis
1995] and the ORD-Horn constraints of [Nebel and Burckert 1995]. All these
classes of temporal constraints can therefore be shown to be tractable using the
framework developed here.
Let the set of possible values be the real numbers (or the rationals). Define Γ
to be the set of all constraints specified by a single weak linear inequality (e.g.,
3x₁ + 2x₂ − x₃ ≤ 4), together with the empty constraint. Define Δ to be the set of
all constraints specified by a single linear disequality (e.g., x₁ − x₂ ≠ 2).
Note that Δ* is the set of constraints specified by a disjunction of disequalities,
and the problem CSP(Γ ∪ Δ*) corresponds to deciding whether a convex polyhedron,
possibly minus the union of a finite number of hyperplanes, is the empty set.
It was shown in [Lassez and McAloon 1989] that the set Γ ∪ Δ is independent (using
their more restrictive notion of independence referred to in Section 3.2, above),
and hence that this problem is tractable.
However, the set of constraints specified by linear Horn constraints corresponds
to the much larger set Γ ∨ Δ*, and this set is not independent in the sense defined
in [Lassez and McAloon 1989] (see [Koubarakis 1996]). In order to establish that
this larger set of constraints is tractable we shall use the more general notion of
1-independence introduced in this paper.
Consider any set of constraints C in CSP(Γ ∪ Δ), let C′ be the subset of
C which is specified by weak linear inequalities, and let C″ be the subset
of C which is specified by linear disequalities. By considering the geometrical
interpretation of the constraints as half spaces and excluded hyperplanes in Rⁿ, it
is clear that C is consistent if and only if C′ is consistent, and, for each c ∈ C″, the
set C′ ∪ {c} is consistent (see [Koubarakis 1996]). Hence, Δ is 1-independent with
respect to Γ.
Now Lemma 2 and Lemma 1 imply that the function Ind-Solvable defined in
Figure 1 can be used to determine whether an instance of CSP(Γ ∨ Δ*) has a solution. (In fact, the
algorithm in Figure 1 can be seen as a generalisation of the algorithm Consistency
which was developed specifically for this problem in [Koubarakis 1996].)
Finally, to establish tractability, we note that whether a set of inequalities, C′,
is consistent or not can be decided in polynomial time, using Khachian's linear
programming algorithm [Khachian 1979]. Furthermore, for any single disequality
constraint, c, we can detect in polynomial time whether C′ ∪ {c} is consistent by
simply running Khachian's algorithm to determine whether C′ implies the negation
of c. Hence, CSP≤1(Γ ∪ Δ) is tractable, so we can apply Corollary 1 and conclude
that CSP(Γ ∨ Δ*) is tractable.
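A sketch of this subroutine, using SciPy's linear programming routine in place of Khachian's algorithm (an implementation choice for illustration, not the paper's): a system A x ≤ b of weak inequalities is consistent iff the LP is feasible, and it remains consistent with a disequality a·x ≠ c iff a·x is not forced to equal c on the polyhedron, which can be read off from minimising and maximising a·x.

import numpy as np
from scipy.optimize import linprog

def inequalities_consistent(A, b):
    """Is {x : A x <= b} nonempty?"""
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.status != 2                      # status 2 means infeasible

def consistent_with_disequality(A, b, a, c):
    """Is {x : A x <= b, a.x != c} nonempty?  (Assumes A x <= b is consistent.)"""
    bounds = [(None, None)] * A.shape[1]
    lo = linprog(c=a, A_ub=A, b_ub=b, bounds=bounds)      # minimise a.x
    hi = linprog(c=-a, A_ub=A, b_ub=b, bounds=bounds)     # maximise a.x
    if lo.status != 0 or hi.status != 0:
        return True                             # unbounded: a.x is certainly not fixed at c
    # a.x is forced to equal c only if both optima equal c (up to tolerance)
    return not (abs(lo.fun - c) < 1e-9 and abs(-hi.fun - c) < 1e-9)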
Example 14. [Extended Horn clauses]
It was shown by Chandru and Hooker in [Chandru and Hooker 1991] that the
class of Horn clauses may be generalised to a much larger class of tractable sets of
clauses, which they refer to as 'extended Horn clauses'.
To establish that any extended Horn set of clauses can be solved in polynomial
time, Chandru and Hooker give a very indirect argument based on a result from the
theory of linear programming [Chandru and Hooker 1991]. In this example we shall
establish the tractability of a slightly more general class of constraint sets using a
much more direct argument, based on Corollary 1.
In order to define this new class of tractable constraint sets over Boolean variables,
we first need to describe how a set of Boolean constraints can be associated with a
tree structure.
Let T be a rooted, undirected, tree in which the edges are labelled with propositional
literals in such a way that each variable name occurs at most once. Note
that T may have an infinite number of edges. If we select an edge of T, then it can
be oriented in two different ways: either towards the root, or else away from the
root. An edge of T , together with a particular selected orientation, will be called
Building Tractable Disjunctive Constraints 19
an arc of T .
For each possible arc, a, of T we define an associated literal, in the following
way. If a is oriented away from the root, then we define the associated literal to be
the label of a in T; otherwise, if a is oriented towards the root, then we define the
associated literal to be the negation of the label of a in T.
Fig. 3. A labelled tree
For example, let T be the tree shown in Figure 3, with root node n₀, and consider
the arc from node n₀ to n₁, denoted [n₀, n₁]. The literal associated with [n₀, n₁],
according to the definition just given, is x₁. Now consider an arc oriented towards
the root, say the arc from node n₅ to its parent; the literal associated with it is the
negation of the label of that edge.
Given any set of arcs, we define an associated clause consisting of the disjunction
of the associated literals.
Some sets of arcs have the special property that they form a path in T.
For any rooted, undirected, tree T, we define Γ_T to be the set of all Boolean
constraints specified by a clause associated with some path in T.
For example, when T is the tree shown in Figure 3, Γ_T does not include the clause x₁ ∨ ¬x₂.
We first note that for any tree T, including infinite trees, the set of constraints
Γ_T is tractable. This is because any instance of CSP(Γ_T) can be solved by the
following polynomial-time algorithm (adapted from [Chandru and Hooker 1991]):
(1) If the instance contains any clauses associated with paths of length one (i.e.,
unit clauses), then assign values to the corresponding variables to satisfy these
clauses. If any variable receives contradictory assignments then report that the
instance is insoluble and stop.
(2) If any variables have been assigned values, then remove all clauses which
are satised by these assignments, contract the corresponding edges in T (i.e.,
remove these edges and identify each pair of end points), and return to Step 1.
(3) Report that the problem is soluble. (Since all remaining clauses correspond
to paths of length two or more in T , there is guaranteed to be a solution which
can be obtained by assigning the values true and false alternately along each
branch of T , starting with true.)
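A compact way to realise this procedure is ordinary unit propagation, with the edge contractions of Step 2 reflected by deleting falsified literals from the remaining clauses. The sketch below is an adaptation for illustration, not the authors' code; it accepts exactly when propagation finishes without an empty clause, which is sufficient for instances of CSP(Γ_T) by the argument above (but not for arbitrary clause sets).

def gamma_t_solvable(clauses):
    """clauses: list of sets of literals; a literal is (variable, polarity).
    Decides satisfiability by unit propagation; complete for clause sets
    arising from paths in a labelled tree."""
    clauses = [set(c) for c in clauses]
    assignment = {}
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            return True                    # step 3: all remaining clauses have length >= 2
        for var, val in units:
            if assignment.get(var, val) != val:
                return False               # step 1: contradictory unit assignments
            assignment[var] = val
        new_clauses = []
        for c in clauses:
            if any(assignment.get(v) == p for v, p in c):
                continue                   # step 2: clause already satisfied, drop it
            c = {(v, p) for v, p in c if v not in assignment}
            if not c:
                return False               # every literal of the clause was falsified
            new_clauses.append(c)
        clauses = new_clauses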
However, if we allow disjunctions of constraints in Γ_T, then we no longer have
tractability. In fact, we will now show that CSP(Γ_T ∨ Γ_T)
can be NP-complete.
Fig. 4. A labelled star
This intractability result holds even when the trees are restricted to be stars (that
is, a tree where every edge is incident to the root). Define S_n to be a star with n
edges labelled x₁, . . . , x_n. For example, the star S₅ is shown in Figure 4. Now
consider the infinite star, S_∞, with edges labelled x₁, x₂, . . . . The set of constraints
Γ_{S_∞} consists of all constraints specified by clauses of the form x_i, ¬x_i, or x_i ∨ ¬x_j,
for all choices of i and j. Hence the class of constraints Γ_{S_∞} ∨ Γ_{S_∞} consists of
all constraints specified by disjunctions of these clauses, or, in other words, by
clauses of the form x_i ∨ ¬x_j ∨ x_k ∨ ¬x_l (and shorter clauses of these kinds), for all
possible choices of i, j, k, l. This set of clauses does not
fall into any of the six tractable classes identified by Schaefer in [Schaefer 1978], and
hence the corresponding satisfiability problem, CSP(Γ_{S_∞} ∨ Γ_{S_∞}), is NP-complete.
In order to obtain tractable sets of disjunctive constraints, we now identify a
restricted subset Δ_T of Γ_T which is 1-independent with respect to Γ_T.
For any rooted, undirected, tree T, we define Δ_T to be the set of all Boolean
constraints specified by a clause associated with some path in T which ends at the
root node of T.
For example, when T is the tree shown in Figure 3, Δ_T does not include the clause x₄ ∨ ¬x₃, or
the clause ¬x₅ ∨ x₁. As a second example, when T is a star labelled with positive
literals, such as the one shown in Figure 4, then Δ_T consists of all constraints
specified by a single negative literal.
Since Δ_T ⊆ Γ_T, we can use the same algorithm as before to show that the
problem CSP≤1(Γ_T ∪ Δ_T) is tractable. To show that Δ_T is 1-independent with
respect to Γ_T we note that any minimal incompatible subset of clauses can involve
at most one clause from Δ_T. In view of these facts, we can apply Corollary 1 to
conclude that CSP(Γ_T ∨ Δ_T*) is tractable.
Notice that in the special case when T is a star labelled with positive literals the
set of constraints Γ_T ∨ Δ_T* corresponds precisely to the set of Horn clauses over the
variables labelling T.
In the more general case when T is an arbitrary tree labelled with positive literals,
the set of constraints Γ_T ∨ Δ_T* includes the extended Horn set of clauses associated
with T, as defined by Chandru and Hooker [Chandru and Hooker 1991]. Furthermore,
in the most general case when T is an arbitrary tree labelled with arbitrary
literals, the set of constraints Γ_T ∨ Δ_T* includes the hidden extended Horn clauses
associated with T, as defined by Chandru and Hooker [Chandru and Hooker 1991].
Hence our results are sufficient to show that extended Horn clauses and hidden
extended Horn clauses are tractable.
However, we point out that the extended Horn set associated with a tree T, as
defined in [Chandru and Hooker 1991], is a little more restricted than the set of
constraints Γ_T ∨ Δ_T*, and hence our result represents a generalisation of the result
in [Chandru and Hooker 1991]. This is because the literals in any clause from
an extended Horn set are required to form an 'extended star-chain' pattern in
the corresponding tree [Chandru and Hooker 1991]. Such a pattern consists of
an arbitrary number of edge disjoint paths terminating at the root node, together
with at most one other path. The literals of the clauses representing constraints in
Γ_T ∨ Δ_T* form similar patterns, but there is no requirement for the paths into the
root to be disjoint.
For example, when T is the tree shown in Figure 3, then Γ_T ∨ Δ_T* includes the
clause ¬x₁ ∨ ¬x₂ ∨ ¬x₃ ∨ x₆ (formed from the disjunction of ¬x₁ ∨ ¬x₂ and ¬x₁ ∨ ¬x₃
and x₆). However, this clause is not in the (hidden) extended Horn set associated
with T.
Example 15. [Extended Krom clauses]
The ideas described in Example 14, together with Theorem 3, can be used to
identify a new class of tractable Boolean constraint sets which we will call 'extended
Krom' sets.
As in Example 14, we associate propositional clauses with sets of arcs in a labelled
tree. Let T be a rooted, undirected tree, whose edges are labelled with propositional
literals in such a way that each variable name appears at most once. Note that T
may have an infinite number of edges. We define Λ_T to be the set of all Boolean
constraints specified by a clause associated with some path in T which either starts
or ends at the root node of T.
For example, when T is the tree shown in Figure 3, Λ_T does not include the clause x₄ ∨ ¬x₃.
As a second example, when T is a star labelled with positive literals,
such as the one shown in Figure 4, then Λ_T consists of all constraints specified by
a single positive or negative literal.
The algorithm described in Example 14 can be used to show that CSP(Λ_T) is
tractable. Furthermore, it is easy to show that any set of clauses chosen from Λ_T
is satisfiable, unless it contains the unit clauses x_i and ¬x_i for some i. Hence Λ_T
has the Krom property. In view of these facts, we can apply Theorem 3 to conclude
that CSP(Λ_T ∨ Λ_T) is tractable.
Notice that in the special case when T is a star the set of constraints Λ_T ∨ Λ_T
corresponds precisely to the set of Krom clauses over the variables labelling T.
In the more general case when T is an arbitrary tree, the set of constraints
Λ_T ∨ Λ_T includes a wider variety of clauses.
Example 16. [Tractable set constraints]
In [Drakengren and Jonsson 1998; Drakengren and Jonsson 1997], Drakengren
and Jonsson identify a number of tractable classes of set constraints that are specified
by expressions involving set-valued variables and the relation symbols ⊆, disj
(denoting disjointness) and ≠.
In this example we will show that these tractability results can be obtained as a
simple consequence of Theorem 2. In this way we provide a shorter proof and show
that these results about set constraints conform to one of the general patterns for
tractable disjunctive constraints described in this paper.
We first give the relevant definitions from [Drakengren and Jonsson 1998; Drakengren
and Jonsson 1997]. An atomic set constraint or atomic set relation is an
expression of the form x_i ⊆ x_j, x_i disj x_j, or x_i ≠ x_j. A disjunctive set constraint
or disjunctive set relation (DSR) is a disjunction of atomic constraints. A DSR is
called Horn if it consists of zero or more disjuncts of the form x_i ≠ x_j or x_i disj x_j
and at most one disjunct of the form x_i ⊆ x_j. A DSR is called 2-Horn if it consists
of zero or more disjuncts of the form x_i ≠ x_j and at most one disjunct of the form
x_i ⊆ x_j or x_i disj x_j. The following are examples of Horn DSRs:
(x₁ disj x₂) ∨ (x₃ disj x₄) ∨ (x₅ ⊆ x₆), (x₁ ≠ x₂) ∨ (x₃ ≠ x₄) ∨ (x₅ disj x₆), (x₁ ≠ x₂) ∨ (x₃ ⊆ x₄).
The latter two examples in this list are also 2-Horn.
An S∅-interpretation is a function that maps all set variables in a problem instance
to (possibly empty) sets. An S-interpretation is a function that maps all
set variables to nonempty sets. An atomic constraint x₁ R x₂ is satisfied by
an S∅-interpretation (S-interpretation), I, if and only if I(x₁) R I(x₂) holds. A DSR d is
satisfiable if there exists an S∅-interpretation (S-interpretation)
which satisfies some disjunct d_i of d. A set C of DSRs is satisfiable if
there exists an S∅-interpretation (S-interpretation)
which satisfies all members of C. Such a satisfying S∅-interpretation (S-interpretation)
is called an S∅-model (S-model) of C.
The following decision problems are studied in [Drakengren and Jonsson 1998;
Drakengren and Jonsson 1997]:
|HornDSRSat∅:
Instance: A finite set C of Horn DSRs.
Question: Does there exist an S∅-model of C?
|HornDSRSat:
Instance: A finite set C of Horn DSRs.
Question: Does there exist an S-model of C?
|2HornDSRSat∅:
Instance: A finite set C of 2-Horn DSRs.
Question: Does there exist an S∅-model of C?
|2HornDSRSat:
Instance: A finite set C of 2-Horn DSRs.
Question: Does there exist an S-model of C?
The problem HornDSRSat∅ is NP-complete [Drakengren and Jonsson 1998].
To show that the problem HornDSRSat can be solved in polynomial time we
define Γ to be the set of all constraints specified by expressions of the form x ⊆ y
(where x and y are set-valued variables which must be assigned non-empty sets),
together with the empty constraint. We define Δ to be the set of all constraints
specified by expressions of the form x ≠ y or x disj y (where x and y are set-valued variables
which must be assigned non-empty sets), together with the empty constraint. Then
Δ* is the set of all constraints specified by arbitrary disjunctions of expressions of
the form x ≠ y or x disj y.
In [Drakengren and Jonsson 1997] it is shown that the problem of deciding
whether a set of atomic set constraints has an S-model can be solved in polynomial
time. The algorithm presented in [Drakengren and Jonsson 1997] represents set
constraints by a labeled directed graph and essentially consists of two tests. First,
if there is a triple of set variables x, y and z for which the constraints z ⊆ x, z ⊆ y
and x disj y are implied by the given constraint set, the input set is rejected (because
variable z is forced to be equal to ∅). Secondly, if there is a pair of set variables x
and y for which the constraints x ⊆ y, y ⊆ x and x ≠ y are implied by the given
constraint set, then the input is rejected (because these constraints cannot all be
satisfied). If a constraint set passes both of these tests then it is accepted.
The existence of this simple algorithm implies that CSP(Γ ∪ Δ) and hence also
CSP≤1(Γ ∪ Δ) can be solved in polynomial time. It also implies that Δ is 1-independent
with respect to Γ. Hence, by Corollary 1 we can immediately conclude
that CSP(Γ ∨ Δ*), which corresponds to the decision problem HornDSRSat, can
be solved in polynomial time. In fact the algorithm defined in Figure 1 can be
seen as a generalisation of the iterative version of Algorithm Horn-Sat presented
in [Drakengren and Jonsson 1997].
The problem 2HornDSRSat is a subproblem of HornDSRSat so it can also
be solved in polynomial time.
To show that the decision problem 2HornDSRSat∅ can be solved in polynomial
time, define Γ to be the set of all constraints specified by an expression which is
either of the form x ⊆ y or else of the form x disj y (where x and y are set-valued
variables which can be assigned arbitrary sets), together with the empty
constraint. Define Δ to be the set of all constraints specified by an expression of
the form x ≠ y (where x and y are set-valued variables which can be assigned
arbitrary sets), together with the empty constraint. In this case Δ* is the set of
constraints specified by a disjunction of expressions of the form x ≠ y.
In [Drakengren and Jonsson 1998] it is shown that the problem of deciding
whether a set of atomic set constraints has an S∅-model can be solved in polynomial
time. As in the above case, set constraints are represented by a labeled
directed graph. The algorithm proceeds in two steps. First, for any triple of set
variables x, y and z for which the constraints z ⊆ x, z ⊆ y and x disj y are implied
by the given constraint set, variable z is forced to be equal to ∅. Then the
constraints are examined to find out whether a contradictory triple of constraints
x ⊆ y, y ⊆ x and x ≠ y is implied by the given constraint set. If this is the case,
then the set is rejected. Otherwise, it is accepted.
The existence of this simple algorithm implies that CSP(Γ ∪ Δ) and hence also
CSP≤1(Γ ∪ Δ) can be solved in polynomial time. It also implies that Δ is 1-independent
with respect to Γ. Hence, by Corollary 1 we can immediately conclude
that CSP(Γ ∨ Δ*), which now corresponds to the decision problem 2HornDSRSat∅,
can be solved in polynomial time. As in the above case, the algorithm defined in
Figure 1 can be seen as a generalisation of the iterative version of Algorithm
2-Horn-Sat presented in [Drakengren and Jonsson 1998].
Having identified the key properties underlying all of these tractable classes, we
are now in a position to identify new tractable classes simply by searching for
appropriate sets of tractable constraints which have these properties.
Example 17. [Disjunctive congruences]
Constraints in the form of congruence relations over the integers are used in a
variety of applications, including the representation of large integers in computer
algebra systems [von zur Gathen and Gerhard 1999], and the representation of
periodic events in temporal databases [Kabanza et al. 1995; Bertino et al. 1998].
One of the fundamental results of elementary number theory is the Chinese Remainder
Theorem [Jackson 1975], which states that a collection of simultaneous
linear congruences
x ≡ a₁ (mod m₁)
x ≡ a₂ (mod m₂)
. . .
x ≡ a_n (mod m_n)
is solvable if and only if the greatest common divisor of m_i and m_j divides a_i − a_j,
for all distinct i and j. When this condition holds between a pair of congruences,
we shall say that they are compatible. (Note that compatibility can be decided in
polynomial time.)
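In code, both the compatibility test and the construction of a common solution for the congruences on a single variable are immediate. The sketch below is a generic illustration using Python integer arithmetic (it is not tied to the cited works); the stepping construction is chosen for clarity rather than efficiency.

from math import gcd

def compatible(c1, c2):
    """Each congruence is a pair (a, m) meaning x = a (mod m)."""
    (a1, m1), (a2, m2) = c1, c2
    return (a1 - a2) % gcd(m1, m2) == 0

def solve_congruences(congruences):
    """Return some x satisfying every (a, m), or None if a pair is incompatible."""
    x, m = 0, 1
    for a, mod in congruences:
        g = gcd(m, mod)
        if (a - x) % g != 0:
            return None                 # incompatible pair detected
        lcm = m // g * mod
        while x % mod != a % mod:       # step by the current modulus until both hold
            x += m
        x, m = x % lcm, lcm
    return x

print(solve_congruences([(2, 3), (3, 5), (2, 7)]))   # 23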
Using this result, together with the results established earlier in this paper, we
will construct a number of new tractable classes of constraints.
Let the set of possible values be the set of integers. Define Γ to be the set of
all unary constraints which are specified by a linear congruence of the form x ≡ a
(mod m), for some natural number a, and some modulus m, together with the
empty constraint. For each natural number b, define Γ_b to be the subset of Γ
containing all unary constraints which are specified by a congruence of the form
x ≡ b (mod m), for some m.
A problem instance in CSP(Γ) is specified by a collection of simultaneous congruences.
For example, a typical set of constraints in CSP(Γ), involving the variables
x₁ and x₂, would be: x₁ ≡ 2 (mod 3), x₁ ≡ 1 (mod 5), x₂ ≡ 4 (mod 7).
The Chinese Remainder Theorem implies that any set of constraints in CSP(Γ)
has a solution, unless it contains a pair of constraints which are incompatible.
In view of this fact, the results obtained in this paper can be used to construct
tractable disjunctive constraints in the following three ways:
|For any natural number b, it is clear from the definition of compatibility above
that every pair of constraints in Γ_b is compatible. Hence, Γ_b has the guaranteed
satisfaction property, and by Theorem 1 we conclude that CSP(Γ_b ∨ Γ*) is
tractable. This means that there is an efficient way to solve any collection of
simultaneous disjunctions of congruences which all have the property that at
least one disjunct comes from Γ_b. For example, when b = 0 we might have the
following collection: (x₁ ≡ 0 (mod 4)) ∨ (x₂ ≡ 3 (mod 7)), (x₂ ≡ 0 (mod 6)) ∨ (x₁ ≡ 1 (mod 2)).
|For any natural number b, CSP≤1(Γ ∪ Γ_b) is tractable, because we can determine
whether or not any given instance has a solution in polynomial time by
examining each pair of constraints to see whether they are compatible. Furthermore,
no pair of constraints which are both in Γ_b can be incompatible, so Γ_b
is 1-independent with respect to Γ. Hence, by Corollary 1, we conclude that
CSP(Γ ∨ Γ_b*) is also tractable, for any natural number b. This means that there
is an efficient way to solve any collection of simultaneous disjunctions of congruences
which all have the property that at most one disjunct comes from Γ and
the remainder (if any) from Γ_b. For example, when b = 0 we might have the
following collection of congruences: (x₁ ≡ 2 (mod 3)) ∨ (x₁ ≡ 0 (mod 4)) ∨ (x₂ ≡ 0 (mod 6)), (x₂ ≡ 1 (mod 5)) ∨ (x₁ ≡ 0 (mod 2)).
|From the observations already made it is clear that the set Γ has the Krom property.
Hence, by Theorem 3 we conclude that CSP(Γ ∨ Γ) is tractable. This means
that there is an efficient way to solve any collection of simultaneous disjunctions
of congruences which all contain at most two disjuncts. For example, we might
have the following collection: (x₁ ≡ 2 (mod 6)) ∨ (x₂ ≡ 3 (mod 4)), (x₁ ≡ 1 (mod 4)) ∨ (x₁ ≡ 5 (mod 9)).
Note that the new tractable disjunctive constraint sets constructed in these three
distinct ways are all incomparable with each other.
5. CONCLUSION
In this paper we have established three sufficient conditions for tractability of disjunctive
constraints. We have shown that these conditions account for a wide variety
of known tractable constraint classes, over both nite and innite sets of values,
and that they aid the search for new tractable constraint classes. The examples we
have given of new tractable classes obtained in this way are as follows:
|A generalisation of max-closed constraints to innite (ordered) domains (Exam-
ple 11);
|A generalisation of connected row-convex constraints to innite (ordered) domains
(Example 12);
|A (slight) generalisation of extended Horn Clauses (Example 14);
|The new class of extended Krom clauses (Example 15);
|Three new classes of tractable disjunctive congruences (Example 17).
These results provide the first examples of a constructive approach to obtaining
tractable constraints, based on combining known tractable classes.
This new approach to obtaining tractable classes leads to results of great gener-
ality, as we have shown in this paper. It raises the possibility that on any given
domain there are only a small number of 'basic' tractable constraint types, and all
other tractable constraint classes can be built up from these using a small number
of standard construction techniques.
--R
A linear time algorithm for testing the truth of certain quanti
An access control model supporting periodicity constraints and temporal reasoning.
Tractable disjunctive constraints.
Characterising tractable constraints.
Tree clustering for constraint networks.
The complexity of the satis
Constraint satisfaction over connected row convex constraints.
Algorithms and Complexity for Temporal and Spatial Formalisms.
Qualitative reasoning about sets applied to spatial reasoning.
Reasoning about set constraints applied to tractable inference in intuitionistic logic.
Computers and Intractability: A Guide to the Theory of NP-Completeness
Decomposing constraint satisfaction problems using database techniques.
Number Theory.
A unifying framework for tractable con- straints
Closure properties of constraints.
Tractable constraints on ordered domains.
Journal of Computer and System Sciences
A polynomial time algorithm for linear programming.
Fast parallel constraint satisfaction.
Dense time and temporal constraints with 6
From local to global consistency in temporal constraint networks.
A canonical form for generalized linear constraints.
In TAPSOFT
A constraint sequent calculus.
A canonical form for generalized linear constraints.
Journal of Symbolic Computation
Engineering dynamic scheduler for work manager.
Consistency in networks of relations.
Networks of constraints: Fundamental properties and applications to picture processing.
Reasoning about temporal relations: a maximal tractable subclass of Allen's interval algebra.
Constraint tractability theory and its application to the product development process for a constraint-based scheduler
The complexity of satis
On the minimality and decomposability of row- convex constraint networks
A generic arc-consistency algorithm and its specializations
Constraint propagation algorithms for temporal reasoning: A revised report.
Modern Computer Algebra.
--TR
A sufficient condition for backtrack-bounded search
Tree clustering for constraint networks (research note)
Constraint propagation algorithms for temporal reasoning: a revised report
Extended Horn sets in propositional logic
A canonical form for generalized linear constraints
A generic arc-consistency algorithm and its specializations
A constraint sequent calculus
Fast parallel constraint satisfaction
Decomposing constraint satisfaction problems using database techniques
Characterising tractable constraints
Reasoning about temporal relations
On the minimality and global consistency of row-convex constraint networks
Handling infinite temporal data
Tractable constraints on ordered domains
Closure properties of constraints
Constraints, consistency and closure
A unifying approach to temporal constraint reasoning
An access control model supporting periodicity constraints and temporal reasoning
Modern computer algebra
Computers and Intractability
Engineering Dynamic Scheduler for Work Manager
Independence of Negative Constraints
A Unifying Framework for Tractable Constraints
From Local to Global Consistency in Temporal Constraint Networks
The complexity of satisfiability problems
--CTR
D. A. Cohen, Tractable Decision for a Constraint Language Implies Tractable Search, Constraints, v.9 n.3, p.219-229, July 2004
Mathias Broxvall, A method for metric temporal reasoning, Eighteenth national conference on Artificial intelligence, p.513-518, July 28-August 01, 2002, Edmonton, Alberta, Canada
David Cohen , Peter Jeavons , Richard Gault, New tractable constraint classes from old, Exploring artificial intelligence in the new millennium, Morgan Kaufmann Publishers Inc., San Francisco, CA,
David Cohen , Peter Jeavons , Richard Gault, New Tractable Classes From Old, Constraints, v.8 n.3, p.263-282, July
Mathias Broxvall , Peter Jonsson , Jochen Renz, Disjunctions, independence, refinements, Artificial Intelligence, v.140 n.1-2, p.153-173, September 2002
Claudio Bettini , X. Sean Wang , Sushil Jajodia, Solving multi-granularity temporal constraint networks, Artificial Intelligence, v.140 n.1-2, p.107-152, September 2002
Andrei Krokhin , Peter Jeavons , Peter Jonsson, Reasoning about temporal relations: The tractable subalgebras of Allen's interval algebra, Journal of the ACM (JACM), v.50 n.5, p.591-640, September
Mathias Broxvall , Peter Jonsson, Point algebras for temporal reasoning: algorithms and complexity, Artificial Intelligence, v.149 n.2, p.179-220, October | constraint satisfaction problem;relations;complexity;NP-completeness;disjunctive constraints;independence |
356483 | The CREW PRAM Complexity of Modular Inversion. | One of the long-standing open questions in the theory of parallel computation is the parallel complexity of the integer gcd and related problems, such as modular inversion. We present a lower bound $\Omega (\log n)$ for the parallel time on a concurrent-read exclusive-write parallel random access machine (CREW PRAM) computing the inverse modulo certain n-bit integers, including all such primes. For infinitely many moduli, our lower bound matches asymptotically the known upper bound. We obtain a similar lower bound for computing a specified bit in a large power of an integer. Our main tools are certain estimates for exponential sums in finite fields. | Introduction
In this paper we address the problem of parallel computation of the inverse of
integers modulo an integer M. That is, given positive integers M ≥ 3 and x < M, with gcd(x, M) = 1,
we want to compute its modular inverse inv_M(x) ∈ N defined by the conditions
x · inv_M(x) ≡ 1 (mod M), 1 ≤ inv_M(x) < M.    (1.1)
Since inv_M(x) ≡ x^{φ(M)−1} (mod M), where φ is the Euler function, inversion can be considered as a special
case of the more general question of modular exponentiation. Both these problems can also be considered
over finite fields and other algebraic domains.
For inversion, exponentiation and gcd, several parallel algorithms are in the literature [1, 2, 3, 9, 10, 11,
12, 13, 14, 15, 18, 20, 21, 23, 28, 30]. The question of obtaining a general parallel algorithm running in
poly-logarithmic time (log n)^{O(1)} for n-bit integers M is wide open [11, 12].
Some lower bounds on the depth of arithmetic circuits are known [11, 15]. On the other hand, some
examples indicate that for this kind of problem the Boolean model of computation may be more powerful
than the arithmetic model; see discussions of these phenomena in [9, 11, 15].
In this paper we show that the method of [5, 26] can be adapted to derive non-trivial lower bounds on
Boolean concurrent-read exclusive-write parallel random access machines (CREW PRAMs). It is based
on estimates of exponential sums.
Our bounds are derived from lower bounds for the sensitivity σ(f) (or critical complexity) of a Boolean
function f : {0, 1}^n → {0, 1}. It is defined as the largest integer m ≤ n such
that there is a binary vector x ∈ {0, 1}^n with f(x) ≠ f(x^(i)) for at least m values of i, where x^(i) denotes
the vector obtained from x by flipping its ith coordinate. In other words, σ(f) is the maximum, over all
input vectors x, of the number of points y on the unit Hamming sphere around x with f(y) ≠ f(x); see
e.g., [31].
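For small n the sensitivity can be computed directly from this definition; the brute-force Python sketch below is an illustration only, and reproduces the value n for the n-bit OR.

from itertools import product

def sensitivity(f, n):
    """Maximum, over all x in {0,1}^n, of the number of single-bit flips that change f."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))
        best = max(best, flips)
    return best

print(sensitivity(lambda x: int(any(x)), 4))   # 4, attained at the all-zero input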
Since [4], the sensitivity has been used as an effective tool for obtaining lower bounds of the CREW PRAM
complexity, i.e., the time complexity on a parallel random access machine with an unlimited number of
all-powerful processors, where each machine can read from and write to one memory cell at each step, but
where no write conflicts are allowed: each memory cell may be written into by only one processor, at each
time step.
Fachbereich Mathematik-Informatik, Universitat Paderborn, 33095 Paderborn, Germany (gathen@uni-paderborn.de)
School of MPCE, Macquarie University, Sydney, NSW 2109, Australia (igor@mpce.mq.edu.au)
By [22], 0.5 log₂(σ(f)/3) is a lower bound on the parallel time for computing f on such machines, see
also [6, 7, 8, 31]. This yields immediately the lower bound Ω(log n) for the OR and the AND of n input
bits. It should be contrasted with the common CRCW PRAM, where write conflicts are allowed, provided
every processor writes the same result, and where all Boolean functions can be computed in constant time
(with a large number of processors).
The contents of the paper are as follows. In Section 2, we prove some auxiliary results on exponential sums.
We apply these in Section 3 to obtain a lower bound on the sensitivity of the least bit of the inverse modulo
a prime. In Section 4, we use the same approach to obtain a lower bound on the sensitivity of the least
bit of the inverse modulo an odd squarefree integer M . The bound is somewhat weaker, and the proof
becomes more involved due to zero-divisors in the residue ring modulo M , but for some such moduli we are
able to match the known upper and the new lower bounds. Namely, we obtain the lower bound Ω(log n) on
the CREW PRAM complexity of inversion modulo an n-bit odd squarefree M with not 'too many' prime
divisors, and we exhibit infinite sequences of M for which this bound matches the upper bound O(log n)
from [11] on the depth of P-uniform Boolean circuits for inversion modulo a 'smooth' M with only `small'
prime divisors; see (4.6) and (4.7). For example, the bounds coincide for moduli M = p₁ ⋯ p_k, where p₁, . . . , p_k
are any ⌊s/ log s⌋ prime numbers between s³ and 2s³.
We apply our method in Section 5 to the following problem posed by Allan Borodin (see Open Question 7.2
of [11]): given n-bit positive integers m, x, e, compute the mth bit of x^e.
Generally speaking, a parallel lower bound Ω(log n) for a problem with n inputs is not a big surprise. Our
interest in these bounds comes from their following features:
. some of these questions have been around for over a decade;
. no similar lower bounds are known for the gcd;
. on the common CRCW PRAM, the problems can be solved in constant time;
. for some types of inputs, our bounds are asymptotically optimal;
. the powerful tools we use from the theory of finite fields might prove helpful for other problems
in this area.
2. Exponential sums. The main tools for our bounds are estimates of exponential sums. For positive
integers M and z, we write e_M(z) = exp(2πiz/M).
The following identity follows from the formula for a geometric sum.
Lemma 2.1. For any integer a, ∑_{x=0}^{M−1} e_M(ax) equals M if a ≡ 0 (mod M), and 0 otherwise.
Lemma 2.2. For positive integers M and H, we have
∑_{a=0}^{M−1} |∑_{x=0}^{H−1} e_M(ax)|² = M((r + 1)(q + 1)² + (M − r − 1)q²),
where q = ⌊(H − 1)/M⌋ and r is the remainder of H − 1 modulo M.
Proof. We note that |∑_{x=0}^{H−1} e_M(ax)|² = ∑_{x,y=0}^{H−1} e_M(a(x − y)).
Thus
∑_{a=0}^{M−1} |∑_{x=0}^{H−1} e_M(ax)|² = ∑_{x,y=0}^{H−1} ∑_{a=0}^{M−1} e_M(a(x − y)).
From Lemma 2.1 we see that the last sum is equal to MW, where W is the number of (x, y) with
0 ≤ x, y ≤ H − 1 and x ≡ y (mod M). It is easy to see that W = (r + 1)(q + 1)² + (M − r − 1)q²,
and the result follows.
Taking into account that (r + 1)(q + 1)² + (M − r − 1)q² ≤ (q + 1)H ≤ H(H + M)/M, we derive from Lemma 2.2 that the bound
∑_{a=0}^{M−1} |∑_{x=0}^{H−1} e_M(ax)|² ≤ H(H + M)    (2.1)
holds for any H and M.
Also, it is easy to see that for H ≤ M the identity of Lemma 2.2 takes the form
∑_{a=0}^{M−1} |∑_{x=0}^{H−1} e_M(ax)|² = MH.    (2.2)
Finally, for H ≤ M we have
∑_{a=1}^{M−1} |∑_{x=0}^{H−1} e_M(ax)|² = MH − H².    (2.3)
Indeed, this sum is smaller by the term corresponding to a = 0, which equals H².
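These identities are easy to confirm numerically for small parameters; the snippet below is a verification aid only (not part of the proofs) and checks the H ≤ M identity (2.2) and the variant (2.3).

from cmath import exp, pi

def e(M, z):
    return exp(2j * pi * z / M)

def inner(M, H, a):
    return sum(e(M, a * x) for x in range(H))

M, H = 13, 9                                          # any H <= M will do
total = sum(abs(inner(M, H, a)) ** 2 for a in range(M))
print(round(total, 6), M * H)                         # both 117
without_zero = sum(abs(inner(M, H, a)) ** 2 for a in range(1, M))
print(round(without_zero, 6), M * H - H ** 2)         # both 36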
In the sequel, we consider several sums over values of rational functions in residue rings, which may not
be defined for all values. We use the symbol ∑* to express that the summation is extended over those
arguments for which the rational function is well-defined, so that its denominator is relatively prime to the
modulus. We give an explicit definition only in the example of the following statement, which is known as
the Weil bound; see [19, 25, 32].
Lemma 2.3. Let f, g ∈ Z[X] be two polynomials of degrees n, m, respectively, and p a prime number such
that the rational function f/g is defined and not constant modulo p. Then
|∑*_{x=0}^{p−1} e_p(f(x)/g(x))| ≤ (n + m − 1)p^{1/2}.
Let #(k) denote the number of distinct prime divisors of an integer k. The following statement is a
combination of the Chinese Remainder Theorem and the Weil bound.
Lemma 2.4. Let M # N be squarefree with M # 2, d a divisor of M , and f, g # Z[X] of degrees n, m,
respectively, such that the rational function f/g is defined and not constant modulo each prime divisor
of M . Then
X
Proof. In the following, p stands for a prime divisor of M . We define M p # N by the conditions
Then, one easily verifies the identity
We use the estimate of Lemma 2.3 for those p for which p |
/ d and p > max{n, m}, and estimate trivially
by p the sum for each other p. Then
X
# Y p |
(n +m- 1)p 1/2
Y p|d
Since #(M/d) #(M), we obtain the desired estimate.
Lemma 2.5. Let M # 2 be a squarefree integer, f, g # Z[X] of degrees n, m, respectively, such that f/g
is defined and neither constant nor a linear function modulo each prime divisor p of M . Then for any
N,H, d # N with H # M and d|M , we have
X
Proof. From Lemma 2.1 we obtain
X
X
X
X
X
From Lemma 2.4 we see that for each a < M the sum over u can be estimated as
X
d. Applying the estimate (2.2), we obtain the result.
The following result is the particular case Theorem 1 of [29].
Lemma 2.6. There exists a constant c such that for all polynomials
with gcd(a t , . , a 1 ,
X
For a 0 , . , a k-1 # Z, not all zero, we define (a 0 , . , a k-1 ) to be the largest exponent e for which 2 e
divides a 0 , . , a k-1 .
Lemma 2.7. Let a 0 , . , a k-1 # Z not be all zero, and
a
Proof. We extend to Q by (a/b) = (a)-(b) and to nonzero matrices in Q kk by taking the minimum
value at all nonzero columns. Then (U v) # (U)+(v) for a matrix U and a vector v such that Uv #= 0.
. The determinant of this Vandermonde matrix has value
We consider an entry of the adjoint adC k of C k . Each of the summands contributing to the determinant
expansion of that entry is divisible by
so that
(In fact, we have equality, since det C k-1 has the right hand side as its -value and is one entry of adC k .)
Therefore
Now from the inequality
(b) the result follows.
We also need an estimate on the number of terms in the sum of Lemma 2.5. For a polynomial g # Z[X]
and M,H # Z, we denote by T g (M,H) the number of x # Z for which 0 # x < H and gcd (g(x), M) = 1.
The following result is, probably, not new and can be improved via more sophisticated sieve methods.
Lemma 2.8. Let M > 1 be squarefree and g # Z[x] of degree m such that gcd(g(x),
Z. Then for all integers H # M , we have
Proof. We denote by #(M, H) the number of x # {0, . , H - 1} such that
and set is squarefree, the inclusion-exclusion principle yields
#(d)=k
#(d, H).
For any divisor d of M we have
d
#(p).
Therefore,
#(d)=k
#(d)
#(d)
By assumption, g takes a nonzero value modulo every prime divisor p of M . Thus #(p) # min{m, p - 1},
and the claim follows.
Throughout this paper, log z means the logarithm of z in base 2, ln z means the natural logarithm, and
Lnln z means ln ln z when this quantity is at least 1, and 1 otherwise.
Lemma 2.9. For positive integers m and M , with M > 1 squarefree, we have
Y
Proof. We split the logarithm of the product as follows
p#2m
p>2m
,
and prove a lower bound on each summand. For the first one, we use that
for x > 1
by [24], (3.15). Thus, for m > 1
p#2m
p#2m
It is easy to verify that for m = 1 the sum on the left hand side does not exceed 3m as well.
For the second summand, we use that (1
This implies that
p>2m
p>2mp
From [24], (3.20), we know that
s be the sth prime number, so that p s # s 2 for s # 2.
Thus for s # 2 we have
2.
The inequality between the first and last term is also valid for s = 1. Now (2.4), (2.5), and (2.6) imply
the claim.
3. PRAM complexity of the least bit of the inverse modulo a prime number. In this
section, we prove a lower bound on the sensitivity of the Boolean function representing the least bit of
the inverse modulo p, for an n-bit prime p. For x # N with gcd(x, we recall the definition of
in (1.1). Furthermore, for x 0 , . , x n-2 # {0, 1}, we let
We consider Boolean functions f with n - 1 inputs which satisfy the congruence
for all x_0, ..., x_{n-2} ∈ {0, 1} with (x_0, ..., x_{n-2}) ≠ (0, ..., 0); no condition is imposed on the value
of f(0, ..., 0).
Finally we recall the sensitivity # from the introduction.
Theorem 3.1. Let p be a sufficiently large n-bit prime. Suppose that a Boolean function f(x_0, ..., x_{n-2})
satisfies the congruence (3.2). Then
#(f) ≥ n/6 − 0.5 log n − 1.
Proof. We let k be an integer parameter to be determined later, with 2 # k # n - 3, and show that
large enough. For this, we prove that there is some integer z with 1 # z # 2 n-k-1 and
provided that p is large enough. We note that all these 2 k z and 2 k z are indeed invertible modulo p.
We set e k. Then it is su#cient to show that there exist
integers z, w 0 , . , w k with
Next we set k. Then it is su#cient
to find integers x, y, u 0 , . ,
Indeed from each solution of the system (3.4) we obtain a solution of the system (3.3) by putting
On the other hand, the system (3.4) contains more variables
and is somewhat easier to study. A typical application of character sum estimates to systems of equations
proceeds as follows. One expresses the number of solutions as a sum over a # Z p , using Lemma 2.1, then
isolates the term corresponding to a = 0, and (hopefully) finds that the remaining sum is less than the
isolated term. Usually, the challenge is to verify the last part. In the task at hand, Lemma 2.1 expresses
the number of solutions of (3.4) as
a
a
a
where the first summand corresponds to a R to the remaining sum, and we used
(2.2). For other the sum over x, y satisfies the conditions of Lemma 2.5, with
a
where
a i
Therefore f/g is neither constant nor linear modulo p. Thus,
Y
X
We have left out the factors |e p (-a i # i )|, which equal 1, transformed the summation index 2a i into a i ,
and used the identity (2.2).
It is su#cient to show that H 2 K 2(k+1) is larger than |R|, or that
Since K # (p - 6)/4, it is su#cient that
(3.
We now set log n)/6#, so that 6(k 2. Now (1
real z > 0, and
2.
Furthermore, p 1/2 < 2 n/2 and 32n/3 < n 3/2 , and (3.6) follows from
log n
Hence the inequality (3.5) holds, and we obtain #(f) # k # n/6 - 0.5 log n - 1.
From [22] we know that the CREW PRAM complexity of any Boolean function f is at least 0.5 log(#(f)/3),
and we have the following consequence.
Corollary 3.2. Any CREW PRAM computing the least bit of the inverse modulo a sufficiently large
n-bit prime needs at least 0.5 log n − 3 steps.
4. PRAM complexity of inversion modulo an odd squarefree integer. In this section, we
prove a lower bound on the PRAM complexity of finding the least bit of the inverse modulo an odd
squarefree integer.
To avoid complications with gcd computations, we make the following (generous) definition. Let M be an
odd squarefree n-bit integer, and f a Boolean function with n inputs. Then f computes the least bit of the
inverse modulo M if and only if
for all x ∈ {0, 1}^{n-1} with gcd(num(x), M) = 1, where num(x) is the nonnegative integer with binary
representation x, similar to (3.1). Thus no condition is imposed for integers x ≥ 2^n or integers that have a
nontrivial common factor with M.
Theorem 4.1. Let M > 2 be an odd squarefree integer with #(M) distinct prime divisors, and f the
Boolean function representing the least bit of the inverse modulo M , as above. Then
4Lnln #(M) +O(1) .
Proof. We let k an integer parameter to be determined later. We want to show that
there is some integer z with 1 # z # 2 n-k-1 for which
As in the proof of Theorem 3.1 we see that in this case #(f) # k.
We put e k. It is su#cient to show that there exist integers
such that
Next, we set As in the proof of
Theorem 3.1 we see that it is su#cient to find integers x, y, u 0 , . , satisfying the following
conditions for
Lemma 2.1 expresses the number of solutions as
a
a
a
a
where S d is the subsum over those 0 # a 0 , . , a k < M for which
It is su#cient to show that
d<M
|S d |.
First we note that SM consists of only one summand corresponding to a
to be added equal 1, we only have to estimate the number of terms for which the argument of
defined. For each y with 0 # y < H, we apply Lemma 2.8 to the polynomial
of degree k + 1. We set using Lemmas 2.8 and 2.9, we deduce that
The other |S d | are bounded from above by
X
a
a
Now let
a i
d
Then
a i
d
and f/g is neither constant nor linear modulo any prime divisor . Thus we can apply
Lemma 2.5 and find that
X
a
the hypothesis of the lemma is satisfied because M is squarefree. If d < M , then a
M/d, with at least one b i #= 0. Then
e M/d
e M/d
1#b0<M/d
X
X
Since M/d is odd, we may replace the summation index 2b i by b i . From the inequalities (2.3) and (2.1)
we find
1#b0<M/d
X
X
Combining these inequalities, we obtain
therefore
d<M
d -3/2
where
Using (4.1) and (4.2) it is now su#cient to prove that
for some
To do so we suppose that
and will show that k satisfies the opposite inequality. Obviously, we may assume that
We also recall that K # (M - 6)/4 and
# M2 -k-3 . Now if
immediately obtain (4.3). Otherwise, we derive from (4.4) that
Comparing this inequality with the inequality (4.3) we obtain the desired statement.
Our bound takes the form
for an odd squarefree n-bit M with #(M) # ln M/LnlnM for some constant # < 0.5. We recall that
almost all odd squarefree
numbers M .
We denote by i PRAM (M) and i BC (M) the CREW PRAM complexity and the Boolean circuit complexity,
respectively, of inversion modulo M . We know from [11, 21] that
for any n-bit integer M . The smoothness #(M) of an integer M is defined as its largest prime divisor, and
M is b-smooth if and only if #(M) # b. Then
Since we are mainly interested in lower bounds in this paper, we do not discuss the issue of uniformity.
Corollary 4.2.
for any odd squarefree n-bit integer M with #(M) # 0.49 ln M/LnlnM .
Theorem 4.3. There is an infinite sequence of moduli M such that the CREW PRAM complexity and
the Boolean circuit complexity of computing the least bit of the inverse modulo M are both #(log n), where
n is the bit length of M .
Proof. We construct infinitely many odd squarefree integers M with #(M) # 0.34 ln M/LnlnM , thus
satisfying the lower bound (4.8), and with smoothness thus satisfying the upper
bound O(ln ln O(log n) of [11] on the depth of Boolean circuits for inversion modulo such M .
For each integer s > 1 we select #s/ ln s# primes between s 3 and 2s 3 , and let M be the product of these
primes. Then, M # s 3s/
large enough.
5. Complexity of one bit of an integer power. For nonnegative integers u and m, we let Btm (u)
be the mth lower bit of u, i.e., Btm
with each u i # {0, 1}. If u
In this section, we obtain a lower bound on the CREW PRAM complexity of computing Btm (x e ). For
small m, this function is simple, for example Bt 0 can be computed in one step. However, we
show that for larger m this is not the case, and the PRAM complexity is #(log n) for n-bit data.
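Since Bt_m is used throughout this section, a two-line Python rendering may help; it also checks the remark that the lowest bit of a power is trivial to compute, because powering preserves parity. The function name btm is mine.

```python
# Bt_m(u) is the m-th lower bit of u (m = 0 is the least significant bit).
def btm(u, m):
    return (u >> m) & 1

# The low bit of a power only depends on the low bit of the base
# (odd numbers stay odd, even numbers stay even), which is why Bt_0 of
# a power is computable in a single step, as remarked above.
for x in range(16):
    for e in range(1, 5):
        assert btm(x ** e, 0) == btm(x, 0)
```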
Exponential sums modulo M are easiest to use when M is a prime, as in Section 3. In Section 4 we had
the more difficult case of a squarefree M, and now we have the extreme case of a prime power M = 2^m.
Theorem 5.1. Let m and n be positive integers with f be the Boolean function
with 2n inputs and
Proof. We set
, and consider so that #(f) #(g). Furthermore, k is an integer
parameter with e # k # 2 to be determined later.
To prove that #(g) # k, it is su#cient to show that there exists an integer x with
The first equality holds for any such x because e 2
# m, and thus the conditions are equivalent to the
existence of integers x, u 0 , . , u k-1 such that
which is implied by the existence of x, u 0 . , u k-1 , v 0 , . , v k-1 with
We set
Lemma 2.1 expresses the number of solutions of (5.1) as
a
a
a
is the subsum over all integers 0 # a 0 , . , a k-1 < 2 m with
It is su#cient to show that
0#<m
Sm contains only one summand, for a
Using the function from Section 2, we have for # < m that
(a 0 ,.,a k-1 )=#
X
a
Now let a 0 , . , a k-1 < 2 m . We set
a
0#j#e
so that
a
We put
X
where c is the constant from Lemma 2.6. This bound also holds for # m, because the sum contains
terms with absolute value 1. Using the (crude) estimate
e
e,
and noting that
a
from Lemma 2.7 we derive that for tuples with (a 0 , . , a k-1
provided that k # 2. Substituting this bound in (5.5), we obtain
where
(a
We set
2.
and as in the proof of Theorem 4.1, from Lemma 2.2 we find
Next, we obtain
0#<m
0#<m
ck 2 n+2mk-k-e-m/e-k 2 /2e+3
0#<m
< ck 2 n+2mk-k-e-m/e-k 2 /2e+4 .
We set
-m 1/3
It easy to verify that the inequality (5.2)
holds for this choice of k, provided that m is large enough.
Corollary 5.2. Let . The CREW PRAM complexity of finding the mth bit of an n-bit
power of an n-bit integer is at least 0.25 log m- o(log m). In particular, for
6. Conclusion and open problems. Inversion in arbitrary residue rings can be considered along
these lines. There are two main obstacles for obtaining similar results. Instead of the powerful Weil estimate
of Lemma 2.3, only essentially weaker (and unimprovable) estimates are available [17, 27, 29]. Also, we
need a good explicit estimate, while the bounds of [17, 27] contain non-specified constants depending on
the degree of the rational function in the exponential sum. The paper [29] deals with polynomials rather
than with rational functions, and its generalization has not been worked out yet.
Open Question 6.1. Extend Theorem 4.1 to arbitrary moduli M .
Moduli of the form M = p^k, with p a prime number, are of special interest because Hensel's lifting
allows one to design efficient parallel algorithms for them [2, 11, 15]. Theorem 5.1 and its proof demonstrate
how to deal with such moduli and what kind of result should be expected.
Each Boolean function f(X 1 , . , X n ) can be uniquely represented as a multilinear polynomial of degree
of the form
0#k#d
1#i1<.<ik #r
We define its weight as the number of nonzero coefficients in this representation. Both the weight and the
degree can be considered as measures of complexity of f . In [5, 26], the same method was applied to obtain
good lower bounds on these characteristics of the Boolean function f deciding whether x is a quadratic
residue modulo p. However, for the Boolean functions of this paper, the same approach produces rather
poor results.
Open Question 6.2. Obtain lower bounds on the weight and the degree of the Boolean function f of
Theorem 4.1.
It is well known that the modular inversion problem is closely related to the GCD-problem.
Open Question 6.3. Obtain a lower bound on the PRAM complexity of computing integers u, v such
that relatively prime integers M # N > 1.
In the previous question we assume that gcd(N, M) = 1 is guaranteed. Otherwise one can easily obtain
the lower bound #(f) = Ω(n) on the sensitivity of the Boolean function f which, on input of two n-bit
integers M and N , returns 1 if they are relatively prime, and 0 otherwise. Indeed, if is an n bit
integer, then the function returns 0 for That is, the PRAM
complexity of this Boolean function is at least 0.5 log n +O(1).
Acknowledgment
. This paper was essentially written during a sabbatical visit by the second author
to the University of Paderborn, and he gratefully acknowledges its hospitality and excellent working
conditions.
--R
'Parallel Implementation of Schonhage's Integer GCD Algorithm'
Joachim von zur Gathen and Daniel Panario
Joachim von zur Gathen
Joachim von zur Gathen
Joachim von zur Gathen
Joachim Von Zur Gathen
Joachim von zur Gathen and Gadiel Seroussi
Introduction to number theory
Michal M
Theory and computation
Number theoretic methods in cryptography: Complexity lower bounds
The complexity of Boolean functions
--TR | parallel computation;CREW PRAM complexity;exponential sums;modular inversion |
356488 | On Interpolation and Automatization for Frege Systems. | The interpolation method has been one of the main tools for proving lower bounds for propositional proof systems. Loosely speaking, if one can prove that a particular proof system has the feasible interpolation property, then a generic reduction can (usually) be applied to prove lower bounds for the proof system, sometimes assuming a (usually modest) complexity-theoretic assumption. In this paper, we show that this method cannot be used to obtain lower bounds for Frege systems, or even for TC0-Frege systems. More specifically, we show that unless factoring (of Blum integers) is feasible, neither Frege nor TC0-Frege has the feasible interpolation property. In order to carry out our argument, we show how to carry out proofs of many elementary axioms/theorems of arithmetic in polynomial-sized TC0-Frege.As a corollary, we obtain that TC0-Frege, as well as any proof system that polynomially simulates it, is not automatizable (under the assumption that factoring of Blum integers is hard). We also show under the same hardness assumption that the k-provability problem for Frege systems is hard. | Introduction
One of the most important questions in propositional proof complexity is to show that there is a
family of propositional tautologies requiring superpolynomial size proofs in a Frege or Extended
Frege proof system. The problem is still open, and it is thus a very important question to
understand which techniques can be applied to prove lower bounds for these systems, as well
as for weaker systems. In recent years, the interpolation method has been one of the most
promising approaches for proving lower bounds for propositional proof systems and for bounded
arithmetic. Here we show that this method is not likely to work for Frege systems and some
weaker systems. The basic idea behind the interpolation method is as follows.
Department of LSI, Universidad Politécnica de Cataluña, Barcelona, Spain, bonet@lsi.upc.es. Research
partly supported by EU HCM network console, by ESPRIT LTR Project no. 20244 (ALCOM-IT), CICYT
TIC98-0410-C02-01 and TIC98-0410-C02-01
y Department of Computer Science, University of Arizona, toni@cs.arizona.edu. Research supported by NSF
Grant CCR-9457782, US-Israel BSF Grant 95-00238, and Grant INT-9600919/ME-103 from NSF and MŠMT
(Czech Republic)
z Department of Applied Math, Weizmann Institute, ranraz@wisdom.weizmann.ac.il. Research supported by
US-Israel BSF Grant 95-00238
We begin with an unsatisfiable statement of the form F(x, y, z) = A_0(x, z) ∧ A_1(y, z), where
z denotes a vector of shared variables, and x and y are vectors of private variables for formulas
A_0 and A_1, respectively. Since F is unsatisfiable, it follows that for any truth assignment α
to z, either A_0(x, α) is unsatisfiable or A_1(y, α) is unsatisfiable. An interpolation function
associated with F is a boolean function that takes such an assignment α as input, and outputs
0 only if A_0 is unsatisfiable, and 1 only if A_1 is unsatisfiable. (Note that both A_0 and A_1 can
be unsatisfiable, in which case either answer will suffice.)
How hard is it to compute an interpolation function for a given unsatisfiable statement F
as above ? It has been shown, among other things, that interpolation functions are not always
computable in polynomial time, under a standard complexity-theoretic assumption. Nevertheless, it is
possible that such a procedure exists for some special cases. In particular, a very interesting
and fruitful question is whether one can find (or whether there exists) a polynomial size circuit
for an interpolation function, in the case where F has a short refutation in some proof system
S . We say that a proof system S admits feasible interpolation if whenever S has a polynomial
size refutation of a formula F (as above), an interpolation function associated with F has
a polynomial size circuit. Kraj'i-cek [K2] was the first to make the connection between proof
systems having feasible interpolation and circuit complexity.
There is also a monotone version of the interpolation idea. Namely, for conjunctive normal
form formulas A_0(x, z) and A_1(y, z), the statement F is called monotone if the variables of z occur only
positively in A_1 and only negatively in A_0. In this case, an associated interpolant function
is monotone, and we are thus interested in finding a polynomial size monotone circuit for an
interpolant function. We say that a proof system S admits monotone feasible interpolation
if whenever S has a polynomial size refutation of a monotone F , a monotone interpolation
function associated with F has a monotone polynomial size circuit.
Beautiful connections exist between circuit complexity, and proof systems having feasible
interpolation, in both (monotone and non-monotone) cases:
In the monotone case, superpolynomial lower bounds can be proven for a (sufficiently
strong) proof system that admits feasible interpolation. This was presented by the sequence of
papers [IPU, BPR, K1], and was first used in [BPR] to prove lower bounds for propositional
proof systems. (The idea is also implicit in [Razb2]).
In short, the statement F that is used is the Clique interpolation formula, A_0(g, x) ∧ A_1(g, y),
where A_0 states that g is a graph containing a clique of size k (where the clique is described
by the x variables), and A_1 states that g is a graph that can be colored with k − 1 colors
(where the coloring is described by the y variables). By the pigeonhole principle, this formula
is unsatisfiable. However, an associated monotone interpolation function would take as input
a graph g, and distinguish graphs containing cliques of size k from those that can be
colored with k − 1 colors; by the known lower bounds for monotone circuits, such a circuit is of exponential
size. Thus, exponential lower bounds follow for any propositional proof system S that admits
feasible monotone interpolation.
Similar ideas also work in the case where S admits feasible interpolation (but not necessarily
monotone feasible interpolation). The first such result, by [Razb2], gives explicit superpolynomial
lower bounds for (sufficiently strong) proof systems S admitting feasible interpolation,
under a cryptographic assumption. In particular, it was shown that a (non-monotone)
interpolation function, associated with a certain statement expressing P ≠ NP, is computable
by polynomial size circuits only if there do not exist pseudorandom number generators.
Therefore, lower bounds follow for any (sufficiently strong) propositional proof system that
admits feasible interpolation (conditional on the cryptographic assumption that there exist
pseudorandom number generators). It is also possible to prove nonexplicit superpolynomial
lower bounds for a (sufficiently strong) proof system under the assumption that NP is not
computable by polynomial sized circuits.
Many researchers have used these ideas to prove lower bounds for propositional proof
systems. In particular, in the last five years, lower bounds have been shown for all of
the following systems using the interpolation method: Resolution [BPR], Cutting Planes
[IPU, BPR, Pud, CH], generalizations of Cutting Planes [BPR, K1, K3], relativized bounded
arithmetic [Razb2], Hilbert's Nullstellensatz [PS], the polynomial calculus [PS], and the Lovasz-
Schriver proof system [Pud3].
1.1 Automatizability and k-provability
As explained in the previous paragraphs, the existence of feasible interpolation for a particular
proof system S gives rise to lower bounds for S . Feasible interpolation, moreover, is a very
important paradigm for proof complexity (in general) for several other reasons. In this section,
we wish to explain how the lack of feasible interpolation for a particular proof system S implies
that S is not automatizable.
We say that a proof system S is automatizable if there exists a deterministic procedure
D that takes as input a formula f and returns an S -refutation of f (if one exists) in time
polynomial in the size of the shortest S -refutation of f . Automatizability is a crucial concept
for automated theorem proving: in proof complexity we are mostly interested in the length of
the shortest proof, whereas in theorem proving it is also essential to be able to find the proof.
While there are seemingly powerful systems for the propositional calculus (such as Extended
Resolution or even ZFC), they are scarce in theorem proving because it seems difficult to search
efficiently for a short proof in such systems. In other words, there seems to be a tradeoff
between proof simplicity and automatizability - the simpler the proof system, the easier it is to
find the proof.
In this section, we formalize this tradeoff in a certain sense. In particular, we show that if
S has no feasible interpolation then S is not automatizable. This was first observed by Russell
Impagliazzo. The idea is to show that if S is automatizable (using a deterministic procedure
D ), then S has feasible interpolation.
Theorem 1 If a proof system S does not have feasible interpolation, then S is not automatiz-
able.
Proof Suppose that S is automatizable, and suppose D is the deterministic procedure to find
proofs; moreover, D is guaranteed to run in time n^c, where n is the size of the shortest
proof of the input formula. Let A_0(x, z) ∧ A_1(y, z) be the interpolant statement, and let α
be an assignment to z. We want to output the value of an interpolant function for A_0(x, α) ∧ A_1(y, α).
First, we run D on A_0(x, z) ∧ A_1(y, z) to obtain a refutation of size s. Next, we simulate D
on A_0(x, α) for T(s) steps, and return 0 if and only if D produces a refutation of A_0(x, α)
within that time. Here T(s) is chosen to be the maximum time for D to produce a refutation
for a formula that has a refutation of size s; thus T(s) = s^c in this case. This works because
in the case where A_1(y, α) is satisfiable with satisfying assignment γ, we can plug γ into the
refutation of A_0(x, α) ∧ A_1(y, α) to obtain a refutation of A_0(x, α) of size at most s. Therefore S has
feasible interpolation. ut
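The reduction in the proof above is easy to render as toy code. In the following sketch, "proof search" is replaced by brute-force satisfiability testing and the size of the refutation by the number of assignments examined; the clause representation and all names are mine, not the paper's, so this is only a caricature of the argument, not an implementation of any concrete proof system.

```python
from itertools import product

def search(clauses, fixed=None, budget=None):
    """Stand-in for the procedure D: brute-force search over the free variables.
    Returns (answer, steps_used); answer is None if the step budget ran out."""
    fixed = dict(fixed or {})
    free = sorted({v for cl in clauses for v, _ in cl if v not in fixed})
    steps = 0
    for values in product([False, True], repeat=len(free)):
        steps += 1
        if budget is not None and steps > budget:
            return None, steps
        assign = dict(zip(free, values), **fixed)
        if all(any(assign[v] == pol for v, pol in cl) for cl in clauses):
            return True, steps                 # satisfying assignment found
    return False, steps                        # "refutation" found

def interpolant(A0, A1, alpha, c=1):
    """Decide which of A0(x, alpha), A1(y, alpha) is unsatisfiable,
    using only the search procedure, as in the proof of Theorem 1."""
    _, s = search(A0 + A1)                     # calibrate the budget from F
    answer, _ = search(A0, fixed=alpha, budget=s ** c)
    return 0 if answer is False else 1

# A0(x, z) forces z to be true; A1(y, z) forces z to be false; F is unsatisfiable.
A0 = [[("x1", True), ("z", True)], [("x1", False), ("z", True)]]
A1 = [[("y1", True), ("z", False)], [("y1", False), ("z", False)]]
assert interpolant(A0, A1, {"z": False}) == 0   # A0(x, alpha) is unsatisfiable
assert interpolant(A0, A1, {"z": True}) == 1    # A1(y, alpha) is unsatisfiable
```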
Thus, feasible interpolation is a simple measure that formalizes the complexity/search trade-
off: the existence of feasible interpolation implies superpolynomial lower bounds (sometimes
modulo complexity assumptions), whereas the nonexistence of feasible interpolation implies
that the proof system cannot be automatized.
A concept that is very closely related to automatizability is k-provability. The k-symbol
provability problem for a particular Frege system S is as follows. The problem is to determine,
given a propositional formula f and a number k , whether or not there is a k-symbol S proof
of f . The k -line provability problem for S is to determine whether or not there is a k -line
S proof of f. The k-line provability problem is undecidable for first-order logic [B1]; and
the first complexity result for the k-provability problem for propositional logic was provided
by Buss [B2] who proved the rather surprising fact that the k-symbol propositional provability
problem is NP -complete for a particular Frege system. More recently, [ABMP] show that the
k-symbol and k -line provability problems cannot be approximated to within linear factors for
a variety of propositional proof systems, including Resolution and all Frege systems, unless
The methods in our paper show that both the k-symbol and k -line provability problems
cannot be solved in polynomial time for any TC 0 -Frege system, Frege system, or Extended Frege
system, assuming hardness of factoring (of Blum integers). More precisely, using the same idea
as above, we can show that if there is a polynomial time algorithm A solving the k-provability
problem for S, then S has feasible interpolation: Suppose that F = A_0(x, z) ∧ A_1(y, z) is the
unsatisfiable statement. We first run A to verify that
there is a proof of F of some size s ≤ n^c, for some fixed value of c. Now let α be an assignment
to z. As above, we run A to determine if there is an O(s)-symbol (or O(s)-line) refutation
of A_0(x, α) and return 0 if and only if A accepts. In fact, this proof can be extended easily
to show that both the k-symbol and k-line provability problems cannot be approximated to
within polynomial factors for the same proof systems (TC^0-Frege, Frege, Extended Frege) under
the same hardness assumption.
1.2 Interpolation and one way functions
How can one prove that a certain propositional proof system S does not admit feasible
interpolation? One idea, due to Krajíček and Pudlák [KP], is to use one way permutations in
the following way. Let h be a one way permutation and let A_0(x, z), A_1(y, z) be the following
formulas.
The formula A_0(x, z) states that h(x) = z and that the i-th bit of x is 0.
The formula A_1(y, z) states that h(y) = z and that the i-th bit of y is 1.
Since h is one to one, A_0(x, z) ∧ A_1(y, z) is unsatisfiable. Assume that A_0, A_1 can be
formulated in the proof system S, and that in S there exists a polynomial size refutation for
A_0(x, z) ∧ A_1(y, z). Then, if S admits feasible interpolation it follows that given an assignment
α to z there exists a polynomial size circuit that decides whether A_0(x, α) is unsatisfiable or
A_1(y, α) is unsatisfiable. Obviously, such a circuit breaks the i-th bit of the input for h. Since
such formulas can be constructed for any i, all bits of the input for h can be broken. Hence, assuming
that the input for h is secure, and that in the proof system S there exists a polynomial size
refutation for A_0 ∧ A_1, it follows that S does not admit feasible interpolation.
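The following toy Python fragment illustrates the logic of this construction. The permutation h below is a trivial affine map (emphatically not one way); the point is only that the values of an interpolant for the pairs A_0, A_1 recover the preimage of z bit by bit, which is exactly the work that a feasible interpolant extracted from short refutations would make cheap.

```python
# Toy illustration of the one-way-permutation interpolation pair.
N = 8

def h(x):                        # some fixed permutation of {0, ..., 2^N - 1}
    return (5 * x + 3) % (2 ** N)

def A0(x, z, i):                 # "h(x) = z and bit i of x is 0"
    return h(x) == z and (x >> i) & 1 == 0

def A1(y, z, i):                 # "h(y) = z and bit i of y is 1"
    return h(y) == z and (y >> i) & 1 == 1

def interpolant(z, i):
    """Outputs 0 only if A0(., z, i) is unsatisfiable and 1 only if A1(., z, i)
    is unsatisfiable.  Here we cheat and invert h by brute force."""
    x = next(x for x in range(2 ** N) if h(x) == z)
    bit = (x >> i) & 1
    return 0 if bit == 1 else 1  # bit = 1 kills A0, bit = 0 kills A1

# Reading off all interpolant values recovers the preimage bit by bit.
z = h(0b10110011)
recovered = sum((1 - interpolant(z, i)) << i for i in range(N))
assert recovered == 0b10110011
```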
A major step towards the understanding of feasible interpolation was made by Krajíček
and Pudlák [KP]. They considered formulas A_0, A_1 based on the RSA cryptographic scheme,
and showed that unless RSA is not secure, Extended Frege systems do not have feasible
interpolation. It has been open, however, whether or not the same negative results hold for
Frege systems, and for weaker systems such as bounded depth threshold logic or bounded depth
Frege.
1.3 Our results
In this paper, we prove that Frege systems, as well as constant-depth threshold logic (referred
to below as TC^0-Frege), do not admit feasible interpolation, unless factoring of Blum integers
is computable by polynomial size circuits. (Recall that Blum integers are integers P of the
form P = p_1 · p_2, where p_1, p_2 are both primes such that p_1 ≡ p_2 ≡ 3 mod 4.) Thus,
our result significantly extends [KP] to weaker proof systems. In addition, our cryptographic
assumption is weaker.
To prove our result, we use a variation of the ideas of [KP]. In a conversation with Moni
Naor, he observed that the cryptographic primitive needed here is not a one way permutation
as in [KP], but the more general structure of bit commitment. Our formulas A 0 ; A 1 are based
on the Diffie-Hellman secret key exchange scheme [DH]. For simplicity, we state the formulas
only for the least significant bit. (Our argument works for any bit).
Informally, our propositional statement DH will be the conjunction A_0(a, b) ∧ A_1(c, d).
The common variables are two integers X, Y, and P and g. P represents a number (not
necessarily a prime) of length n, and g an element of the group Z_P^*. The private variables for
A_0 are integers a, b, and the private variables for A_1 are integers c, d.
Informally, A_0(a, b) will say that g^a mod P = X, g^b mod P = Y, and that
g^{ab} mod P is even. Similarly, A_1(c, d) will say that g^c mod P = X, g^d mod P = Y,
and g^{cd} mod P is odd. The statement A_0 ∧ A_1 is unsatisfiable since (informally) if A_0, A_1 are
both true we have
g^{ab} mod P = X^b mod P = g^{cb} mod P = Y^c mod P = g^{cd} mod P,
which cannot be both even and odd.
We will show that the above informal proof can be made formal with a (polynomial size)
TC^0-Frege proof. On the other hand, an interpolant function computes one bit of the
secret key exchanged by the Diffie-Hellman procedure. Thus, if TC^0-Frege admits feasible
interpolation, then all bits of the secret key exchanged by the Diffie-Hellman procedure can be
broken using polynomial size circuits, and hence the Diffie-Hellman cryptographic scheme is not
secure. Note that it was proved that for P = p_1 · p_2, where p_1, p_2 are both primes such that
p_1 ≡ p_2 ≡ 3 mod 4 (i.e., P a Blum integer), breaking the Diffie-Hellman cryptographic
scheme is harder than factoring P! [BBR] (see also [Sh, Mc]).
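A tiny numeric illustration (with toy parameters that are in no way cryptographically meaningful) may clarify why the pair A_0, A_1 is an interpolation statement: the shared data (P, g, X, Y) already determines the key g^{ab} mod P, so exactly one of the two sides has a witness, and which one it is equals the least significant bit of the key.

```python
# Toy instance of the Diffie-Hellman interpolant pair.
P, g = 101, 2          # toy modulus and base
a, b = 37, 53          # the two secrets
X, Y = pow(g, a, P), pow(g, b, P)
K = pow(g, a * b, P)   # the exchanged key

assert K == pow(X, b, P) == pow(Y, a, P)

def A0(a_, b_):        # private witnesses for "the key is even"
    return pow(g, a_, P) == X and pow(g, b_, P) == Y and pow(g, a_ * b_, P) % 2 == 0

def A1(c_, d_):        # private witnesses for "the key is odd"
    return pow(g, c_, P) == X and pow(g, d_, P) == Y and pow(g, c_ * d_, P) % 2 == 1

# Exactly one side has a witness, and which side it is equals the low bit of K;
# this is the bit a feasible interpolant for the DH formula would have to output.
assert any(A0(a_, b_) for a_ in range(P) for b_ in range(P)) == (K % 2 == 0)
assert any(A1(c_, d_) for c_ in range(P) for d_ in range(P)) == (K % 2 == 1)
```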
It will require quite a bit of work to formalize the above statement and argument with a
short proof. Notice that we want the size of the propositional formula expressing the
Diffie-Hellman statement to be polynomially bounded in the number of binary variables. And
additionally, we want the size of the proof of the statement to also be polynomially
bounded. A key idea in order to define the statement and prove it efficiently, is to introduce
additional common variables to our propositional Diffie-Hellman statement. The bulk of the
argument then involves showing how (with the aid of the auxiliary variables) one can formalize
the above proof by showing that basic arithmetic facts, including the Chinese Remainder
Theorem, can be stated and proven efficiently within TC^0-Frege.
1.4 Section description
The paper is organized as follows. In Section 2, we define our TC^0-Frege system. In Section
3, we define the TC^0-formulas used for the proof. In Section 4, we define precisely the
interpolation formulas which are based on the Diffie-Hellman cryptographic scheme. In Section
5 we show how to prove our main theorem, provided we have some technical lemmas that will
be proved fully in section 7. In Section 6 there is a discussion and some open problems. Finally
in section 7 we prove all the technical lemmas required for the main theorem.
The unusual organization of the paper is due to the many very technical lemmas required
to show the result, that are essential to the correctness of the argument, but not every reader
might want to go through. Sections 1-6 give an exposition of the result, relying on the complete
proofs in the technical part.
2 TC^0-Frege systems
For clarity, we will work with a specific bounded-depth threshold logic system, that we call
TC^0-Frege. Any reasonable definition of such a system should also suffice. Our system
is a sequent-calculus logical system where formulas are built up using the connectives ∧, ∨, ¬,
Th_k, Φ_0, Φ_1. (Th_k(x) is true if and only if the number of 1's in x is at least k, and
Φ_i(x) is true if and only if the number of 1's in x is i mod 2.)
Our system is essentially the one introduced in [MP]. (Which is, in turn, an extension of
the system PTK introduced by Buss and Clote [BC, Section 10].)
Intuitively, a family of formulas has TC^0-Frege proofs if each
formula has a proof of size polynomial in the size of the formula, and such that every line in the
proof is a TC^0-formula. TC^0-formulas
are built up using the connectives ∧, ∨, Th_k, Φ_1, Φ_0, ¬. All
connectives are assumed to have unbounded fan-in. Th_k(A_1, ..., A_n) is interpreted to be true if
and only if the number of true A_i's is at least k; Φ_j(A_1, ..., A_n) is interpreted to be true if and
only if the number of true A_i's is equal to j mod 2.
The formula ∧(A_1, ..., A_n) denotes the logical AND of the multi-set consisting of A_1, ..., A_n,
and similarly for ∨, Φ_j and Th_k. Thus commutativity of the connectives is implicit. Our proof
system operates on sequents, which are sets of formulas of the form A_1, ..., A_k → B_1, ..., B_l.
The intended meaning is that the conjunction of the A_i's implies the disjunction of the B_j's.
A proof of a sequent S in our logic system is a sequence of sequents, S_1, ..., S_q, such that each
sequent S_i is either an initial sequent, or follows from previous sequents by one of the rules of
inference, and the final sequent, S_q, is S. The size of the proof is the total number of symbols in it,
and its depth
is the maximum depth of a formula occurring in it.
The initial sequents are of the form: (1) A → A, where A is any formula; (2)
. The
rules of inference are as follows. Note that the logical rules are defined for n ≥ 1 and k ≥ 1.
First we have simple structural rules such as weakening (formulas can always be added to the
left or to the right), contraction (two copies of the same formula can be replaced by one), and
permutation (formulas in a sequent can be reordered). The remaining rules are the cut rule,
and logical rules which allow us to introduce each connective on both the left side and the right
side. The cut rule allows the derivation of Γ → Δ from the sequents Γ → Δ, A and A, Γ → Δ.
The logical rules are as follows.
1. , we can derive
2. (Negation-right) From
3. (And-left) From A 1 ;
4. (And-right) From
5. (Or-left) From A
6.
7. (Mod-left) From A 1 ; \Phi 1\Gammai derive
8. (Mod-right) From A derive
9. (Threshold-left) From Th k
derive
derive
A TC^0 proof is a bounded-depth proof in our system of polynomial size. More formally we
have the following definitions.
Definition. Let F = {(Γ_n → Δ_n) : n ∈ N} be a family of sequents. Then {R_n : n ∈ N} is a
family of TC^0 proofs for F if there exist constants c and d such that the following conditions
hold: (1) each R_n is a valid proof of (Γ_n → Δ_n) in our system; (2) for all n, the depth of
R_n is at most d; and (3) for all n, the size of R_n is at most |(Γ_n → Δ_n)|^c.
We note that we have defined a specific proof system for clarity; our result still holds for
any reasonable definition of a TC^0 proof (it can be shown that our system polynomially
simulates any TC^0-Frege-style system). The difference between a polynomial size proof in our system
and a polynomial size TC 0 proof is similar to the difference between NC 1 and TC 0 .
3 The TC^0-formulas
In this section we will describe some of the TC^0-formulas needed to formulate and to refute the
Diffie-Hellman formula. For simplicity of the description, let us assume that we have a fixed
number N which is an upper bound for the length of all numbers used in the refutation of
the Diffie-Hellman formula. The number N will be used to define some of the formulas below.
After seeing the statement and the refutation of the Diffie-Hellman formula, it will be clear that
it is enough to take N to be a small polynomial in the length of the number P used for the
Diffie-Hellman formula.
3.1 Addition and subtraction
We will use the usual carry-save AC^0-formulas to add two n-bit numbers. Let x = x_n, ..., x_1 and y = y_n, ..., y_1
be two numbers. Then x + y will denote the following AC^0-formula: There will
be n + 1 output bits z_{n+1}, ..., z_1. The bit z_i will equal the mod 2 sum of C_i, x_i and y_i, where
C_i is the carry bit. Intuitively, C_i is 1 if there is some bit position less than i that generates
a carry that is propagated by all later bit positions until bit i. Formally, C_i is computed
by OR(R_{i-1}, R_{i-2} ∧ P_{i-1}, ..., R_1 ∧ P_2 ∧ ... ∧ P_{i-1}). (R_k is 1 if the
k-th bit position generates a carry, and P_k is 1 if the k-th bit
position propagates but does not generate a carry.)
As for subtraction, let us show how to compute z = x − y. Think of x, y as N-bit numbers.
ȳ is the complement (modulo 2) of
the N bits of y, and x̄ is the complement of the N bits of x. Denote s = (x + ȳ) + 1;
note that s is equal to 2^N + x − y, and similarly t = (x̄ + y) + 1 is equal to 2^N + y − x. If
we know that x − y ≥ 0, then we know that the (N + 1)-st bit of s is 1
and that the first N bits of s equal x − y. Thus, for any i, we can compute z_i by (s_{N+1} ∧ s_i) ∨ (¬s_{N+1} ∧ t_i).
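A short Python model of these formulas may help; it mirrors the generate/propagate description of the carries and the complement-based subtraction, and checks them against ordinary integer arithmetic. The bit-selection details of the subtraction formula are partly my reading of the garbled text, so this is an illustration of the intended semantics rather than the exact formula.

```python
# A bit-level model of the addition formula: z_i = x_i XOR y_i XOR C_i, where
# the carry C_i into position i is an OR over "some earlier position generates
# a carry and every position in between propagates it".
def bits(x, n):
    return [(x >> k) & 1 for k in range(n)]            # bit 1 has index 0 here

def add(xbits, ybits):
    n = len(xbits)
    R = [xbits[k] & ybits[k] for k in range(n)]         # position k generates
    P = [xbits[k] ^ ybits[k] for k in range(n)]         # position k propagates
    C = [0] * (n + 1)
    for i in range(1, n + 1):
        C[i] = int(any(R[k] and all(P[j] for j in range(k + 1, i))
                       for k in range(i)))
    return [(xbits[i] if i < n else 0) ^ (ybits[i] if i < n else 0) ^ C[i]
            for i in range(n + 1)]                      # n + 1 output bits

def sub(x, y, N):
    # x - y (for x >= y) via x + complement(y) + 1, dropping the 2^N bit.
    ybar = [1 - b for b in bits(y, N)]
    s = add(add(bits(x, N), ybar), bits(1, N + 1))
    return sum(b << i for i, b in enumerate(s[:N]))

for x in range(0, 200, 7):
    for y in range(0, x + 1, 5):
        total = add(bits(x, 8), bits(y, 8))
        assert sum(b << i for i, b in enumerate(total)) == x + y
        assert sub(x, y, 8) == x - y
```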
3.2 Iterated addition
We will now describe the formula SUM, which takes as inputs m numbers, each n bits
long, and outputs their sum x_1 + ... + x_m. We assume that m ≤ N. The
main idea is to reduce the addition of m numbers to the addition of two numbers. Let x_i
be x_{i,n}, ..., x_{i,1} (in binary representation). Let l = ⌈log_2 N⌉. Let
r = n/(2l),
and assume (for
simplicity) that r is an integer.
Divide each x i into r blocks where each block has 2l bits, and let S i;k be the number in
the k-th block of x_i. That is,
x_i = SUM_{k=1}^{r} [2^{2l(k-1)} S_{i,k}].
Now, each S_{i,k} has 2l bits. Let L_{i,k} be the low-order half of S_{i,k} and let H_{i,k} be the high order
half. That is, S_{i,k} = 2^l H_{i,k} + L_{i,k}.
Denote
L = SUM_{i=1}^{m} SUM_{k=1}^{r} [2^{2l(k-1)} L_{i,k}],
H = SUM_{i=1}^{m} SUM_{k=1}^{r} [2^{2l(k-1)+l} H_{i,k}].
Then,
x_1 + ... + x_m = SUM_{i,k} [2^{2l(k-1)} S_{i,k}]
= SUM_{i,k} [2^{2l(k-1)} (2^l H_{i,k} + L_{i,k})]
= H + L.
Hence, we just have to show how to compute the numbers H, L. Let us show how to compute
L; the computation of H is similar.
Denote
L_k = SUM_{i=1}^{m} [L_{i,k}].
Since each L_{i,k} is of length l, each L_k is of length at most l + log N, which is at most 2l.
Hence, the bits of L are just the bits of the L_k's combined. That is, L = SUM_{k=1}^{r} [2^{2l(k-1)} L_k].
As for the computation of the L k 's, note that since each L k is a poly-size sum of logarithmic
length numbers, it can be computed using poly-size threshold gates.
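The following sketch models the reduction just described: splitting each summand into 2l-bit blocks and separating low and high halves yields two numbers H and L whose sum is the iterated sum. The helper names and the particular parameters are mine.

```python
# Model of the SUM reduction: every block of L (and of H) is a sum of at most
# m pieces of l bits each, so it fits in 2l bits whenever 2^l >= m, and
# x_1 + ... + x_m = H + L.
def iterated_sum_HL(xs, n, l):
    assert n % (2 * l) == 0 and len(xs) <= 2 ** l
    r = n // (2 * l)
    L = H = 0
    for k in range(r):                                # block index, low to high
        lo_sum = hi_sum = 0
        for x in xs:
            block = (x >> (2 * l * k)) & ((1 << (2 * l)) - 1)
            lo_sum += block & ((1 << l) - 1)          # L_{i,k}
            hi_sum += block >> l                      # H_{i,k}
        L += lo_sum << (2 * l * k)
        H += hi_sum << (2 * l * k + l)
    return H, L

xs = [0xDEADBEEF, 0x12345678, 0xCAFEBABE, 0x0F0F0F0F]
H, L = iterated_sum_HL(xs, n=32, l=4)
assert H + L == sum(xs)
```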
3.3 Modular arithmetic
Next, we describe our TC^0-formulas that compute the quotient and remainder of a number
z modulo p, where z is of length n. The inputs for the remainder and the
quotient formulas are as follows:
1. the number z;
2. numbers p_i for all 1 ≤ i ≤ n;
3. numbers k_i and r_i for all 1 ≤ i ≤ n.
The intended values for the variables k_i and r_i are such that 2^{i-1} = k_i · p + r_i with 0 ≤ r_i < p
(for all 1 ≤ i ≤ n). The intended values for the variables p_i are i · p.
Suppose that z = k · p + r with 0 ≤ r < p, and assume that the input variables k_i, r_i and p_i
take the right values. Then our formula [z]_p will output r, and our formula div_p(z) will
output k. The formulas are computed as follows.
Suppose that the k_i, r_i and p_i variables satisfy 2^{i-1} = k_i · p + r_i
(with 0 ≤ r_i < p) and p_i = i · p. Let R = SUM_i[r_i z_i], and let
l be such that l · p ≤ R < (l + 1) · p; such an l can be found by comparing R with the variables p_i.
Then [z]_p equals R − l · p, and can therefore be computed by a subtraction, while
div_p(z) is computed by SUM_i[k_i z_i] + l.
Notice that if the k_i, r_i and p_i's do not take these intended values,
then the formulas are not required to compute the correct values of the quotient or remainder,
and can give junk.
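Here is a runnable model of the remainder and quotient formulas, with the tables r_i, k_i playing the role of the extra input variables. Bit positions are indexed from 0, which is an assumption about the (garbled) indexing convention in the text.

```python
# Model of the remainder/quotient formulas: with tables satisfying
# 2^i = k_i*p + r_i (0 <= r_i < p), the remainder of z is obtained from the
# iterated sum R = SUM_i [r_i * z_i] by locating l with l*p <= R < (l+1)*p,
# and the quotient adds l to SUM_i [k_i * z_i].
def mod_tables(p, n):
    ks = [(1 << i) // p for i in range(n)]
    rs = [(1 << i) % p for i in range(n)]
    return ks, rs

def remainder_and_quotient(z, p, n):
    ks, rs = mod_tables(p, n)
    zbits = [(z >> i) & 1 for i in range(n)]
    R = sum(rs[i] for i in range(n) if zbits[i])    # R < n*p
    K = sum(ks[i] for i in range(n) if zbits[i])
    l = max(j for j in range(n + 1) if j * p <= R)  # l*p <= R < (l+1)*p
    return R - l * p, K + l                         # ([z]_p, div_p(z))

for z in (0, 1, 97, 1023, 65535, 123456789):
    p, n = 19, 27
    r, q = remainder_and_quotient(z, p, n)
    assert (r, q) == (z % p, z // p)
```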
3.4 Product and iterated product
We will write x · y to denote the formula SUM_{i,j}[2^{i+j-2} x_i y_j] computing the product of two
n-bit numbers x and y. By 2^{i+j-2} x_i y_j we mean 2^{i+j-2} if both x_i and y_j are true, and 0
otherwise.
Lastly, we will describe our TC^0-formula computing the iterated product of m numbers.
This formula is basically the original formula of [BCH], and articulated as a TC^0-formula in
[M].
The iterated product, PROD[z_1, ..., z_m], gives the product of z_1, ..., z_m, where each z_i is of
length n, and we assume that m, n are both bounded by N. The basic idea is to compute the
product modulo small primes using iterated addition, and then to use the constructive Chinese
Remainder Theorem to construct the actual product from the product modulo small primes.
Let Q be the product of the first t odd primes q_1, ..., q_t, where t is the first integer that gives a
number Q of length larger than N^2. Since q_1, ..., q_t are all larger than 2, t is at most N^2, and
by the well known bounds for the distribution of prime numbers the length of each q_j is at most
O(log N). For each q_j, let g_j be a fixed generator for Z^*_{q_j}. Also, for each q_j, let u_j ≤ Q be a
fixed number with the property that u_j mod q_j = 1 and u_j mod q_i = 0 for every i ≠ j (such a
number exists by the Chinese Remainder Theorem). PROD[z_1, ..., z_m] is computed as follows.
1. First we compute r_{ij} = [z_i]_{q_j}, for all i, j. This is calculated using the modular arithmetic
described earlier.
2. For each 1 ≤ j ≤ t:
a. Compute a_{ij} such that (g_j)^{a_{ij}} ≡ r_{ij} mod q_j. This is done by a table lookup.
b. Calculate c_j = [SUM_i[a_{ij}]]_{q_j - 1}.
c. Compute r_j such that (g_j)^{c_j} ≡ r_j mod q_j. This is another table lookup.
3. Finally compute [SUM_j[u_j · r_j]]_Q.
We will hardwire the values q_j, u_j and Q. Thus, this computation is obtained by
doing a table lookup to compute u_j · r_j, followed by an iterated sum, followed by a mod Q
calculation.
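The construction is easy to mimic with ordinary integers, as in the following sketch: products modulo small odd primes are computed through discrete-log tables (so that multiplication becomes an iterated sum of exponents) and then recombined with the coefficients u_j. The handling of residues equal to 0 and the particular choice of primes are mine, and the property assumed of the u_j is the one reconstructed above.

```python
# A runnable model (not the paper's exact formula) of the PROD construction.
def generator(q):
    # smallest generator of the multiplicative group modulo an odd prime q
    for g in range(2, q):
        if len({pow(g, e, q) for e in range(1, q)}) == q - 1:
            return g
    raise ValueError("no generator found")

def crt_coefficients(primes):
    Q = 1
    for q in primes:
        Q *= q
    us = []
    for q in primes:
        M = Q // q
        us.append(M * pow(M, -1, q) % Q)   # = 1 mod q, = 0 mod the other primes
    return Q, us

def iterated_product(zs, primes):
    Q, us = crt_coefficients(primes)
    total = 0
    for q, u in zip(primes, us):
        g = generator(q)
        dlog = {pow(g, e, q): e for e in range(q - 1)}   # hardwired table
        residues = [z % q for z in zs]                   # step 1
        if 0 in residues:
            r = 0                                        # product vanishes mod q
        else:
            exps = [dlog[res] for res in residues]       # step 2a (table lookup)
            c = sum(exps) % (q - 1)                      # step 2b (iterated sum)
            r = pow(g, c, q)                             # step 2c (table lookup)
        total += u * r                                   # step 3
    return total % Q                                     # valid while product < Q

zs = [17, 23, 5, 9]
primes = [3, 5, 7, 11, 13, 17, 19, 23]
expected = 1
for z in zs:
    expected *= z
assert iterated_product(zs, primes) == expected
```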
3.5 Equality, and inequality
Often we will write x = y, where x and y are both vectors of variables or formulas; this abbreviates
the conjunction of the bitwise equivalences between the bits of x and of y.
We apply the same conventions when writing ≠, <, ≤.
4 The Diffie-Hellman Formula
We are now ready to formally define our propositional statement DH . DH will be the
conjunction of A 0 and A 1 . The common variables for the formulas will be:
a) P and g representing n-bit integers, and for every i ≤ 2n, we will also add common
variables for g^{2^i}
mod P.
b) X, Y, and for every i ≤ 2n, we will also add common variables for X^{2^i}
mod P, and for
Y^{2^i} mod P.
c) We also add variables for the multiples of P and for the quotients and remainders of the
powers of two modulo P (the p_i, k_i and r_i of section 3.3). These variables
are needed to define arithmetic modulo P (see section 3.3).
For the following: the common
variable
mod
mod
. The
The formula A_0(a, b) will be the conjunction of the following:
1.
(Which means g a mod
2. For every j - n ,
mod P:
(Which means (g 2 j
modP .) Note that from this it is easy to prove for
3. Similar formulas for g b mod
mod P .
4. PROD i;j
is even. (Which means g ab mod P is even.)
5. For every i ≤ N, formulas expressing that the variables p_i, k_i and r_i of item (c) take their intended values (these
formulas are added to guarantee that the modulo P arithmetic is computed correctly).
Similarly, the formula A_1(c, d) will be the conjunction of the above formulas, but
with a replaced by c, b replaced by d, and the fourth item stating that g^{cd} mod P is odd.
Note that the definition of the iterated product (PROD) requires the primes q_1, ..., q_t (as
well as their product Q, and the numbers u_1, ..., u_t); these are fixed for the length n. So we
are going to hardwire the numbers q_1, ..., q_t, u_1, ..., u_t and Q, as well as the correct values for the r_i's
and k_i's needed for the modulo q_j arithmetic, for each one of these numbers.
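To see why the statement stays polynomial-size, it may help to see how the auxiliary common variables would be instantiated; the following sketch (with toy parameters) builds the repeated-squaring tables and the mod-P tables and checks the consistency conditions corresponding to the intended meanings of items 1, 2 and 5 above. The variable names are mine.

```python
# How the auxiliary common variables of the DH statement get their intended
# values (toy parameters): tables of repeated squares make every power
# g^a mod P expressible as a product of at most 2n precomputed values.
P, g = 101, 2
n = P.bit_length()

g_sq = [pow(g, 1 << i, P) for i in range(2 * n)]       # g^(2^i) mod P
def pow_from_table(table, exponent):
    acc = 1
    for i in range(2 * n):
        if (exponent >> i) & 1:
            acc = acc * table[i] % P
    return acc

a = 37
X = pow(g, a, P)
k_tab = [(1 << i) // P for i in range(2 * n)]          # 2^i = k_i*P + r_i
r_tab = [(1 << i) % P for i in range(2 * n)]

assert pow_from_table(g_sq, a) == X                    # meaning of item 1
assert all(g_sq[i + 1] == g_sq[i] * g_sq[i] % P        # meaning of item 2
           for i in range(2 * n - 1))
assert all((1 << i) == k_tab[i] * P + r_tab[i]         # meaning of item 5
           for i in range(2 * n))
```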
5 A TC^0-Frege refutation for DH
We want to describe a TC^0-Frege refutation for DH. As mentioned above, the proof proceeds
as follows.
1. Using A_0, show that g^{ab} mod P = X^b mod P.
2. Using A_1, show that X^b mod P = g^{cb} mod P.
3. Show that g^{cb} mod P = g^{bc} mod P.
4. Using A_0, show that g^{bc} mod P = Y^c mod P.
5. Using A_1, show that Y^c mod P = g^{dc} mod P.
6. Show that g^{dc} mod P = g^{cd} mod P.
We can conclude from the above steps that A_0 and A_1 imply that g^{ab} mod P = g^{cd} mod P,
but now we can reach a contradiction since A_0 states that g^{ab} mod P is even, while A_1 states
that g^{cd} mod P is odd.
We formulate g ab mod P as
PROD i;j
and X b mod P as
Thus, step 1 is formulated as
PROD i;j
and so on.
Steps 1,2,4,5 are all virtually identical. Steps 3 and 6 follow easily because our formulas
defining g^{ab} make symmetry obvious. Thus the key step is to show step 1; that is, to show how
to prove g^{ab} mod P = X^b mod P. As mentioned above, this is formulated as follows.
PROD i;j
We will build up to the proof that g ab mod P equals X b mod P by proving many lemmas
concerning our basic TC^0-formulas. The final lemma that we need is the following:
Lemma 4 For every z 1;1 ; :::; z m;m 0 and p, there are TC 0 -Frege proofs of
The proof of the lemma is given in section 7.
Using Lemma 4 for the first equality, and point 2 from section 4 for the second equality, we
can now obtain:
PROD i;j
which proves step 1.
The main goal of Section 7 is hence to show that the statement
has a short TC^0-Frege proof. This is not trivial because our TC^0-formulas are quite
complicated (and in particular the formulas for iterated product and modular arithmetic).
In order to prove the statement, we will need to carry out a lot of the basic arithmetic in TC^0-Frege.
Before we go on to the technical part, we will try to give some intuition on how the
proof of the main lemma is built.
We organized the proof as a sequence of lemmas that show how many basic facts of
arithmetic can be formulated and proved in TC 0 -Frege (using our formulas). The proofs
of these lemmas require careful analysis of the exact formula used for each operation. The proof
of some of these lemmas is straightforward (using well known ideas), while the
proof of other lemmas requires some new tricks.
In short, the main lemmas that are used for the proof of the final statement
(Lemma 4) are the following:
1. (Lemma 38). For every x; y and p , there are TC 0 -Frege proofs of
2. (Lemma 41). For every z_1, ..., z_m, there are TC^0-Frege proofs of
3. (Lemma 47). For every z_1, z_2, there are TC^0-Frege proofs of PROD[z_1, z_2] = z_1 · z_2.
sum, less-than, and modular arithmetic. Among these lemmas will be Lemma 38.
The proof of Lemma 41 is cumbersome, but it is basically straightforward, given some basic
facts about modular arithmetic. Recall that to do the iterated product we have to first compute
the product modulo small primes and then combine all these products to get the right answer
using iterated sum. Therefore, many basic facts of the modulo arithmetic need to be proven in
advance, as well as some basic facts of the iterated sum.
Once this is done we need to obtain the same fact modulo p (Lemma 48). At this point it is
easier to go through the regular product, where the basic facts of modular arithmetic are easier
to prove. Therefore it is important to show that TC^0-Frege can prove PROD[z_1, z_2] = z_1 · z_2
(Lemma 47). In our application z_1 and z_2 will themselves be iterated products.
To show this fact we use the Chinese Remainder Theorem. We first prove the equality
modulo small primes. This is relatively easy, since the sizes of these primes are sufficiently small
(O(log N)), and we can basically check all possible combinations. Once this is done, we apply
the Chinese Remainder theorem to obtain the equality modulo the product of the primes, and
since this product is big enough, we obtain the desired result.
Our proof of the Chinese Remainder Theorem is different from the standard
textbook one. The main fact that we need to show is that if for every j, [R]_{q_j} = [R']_{q_j}, then
there are TC^0-Frege proofs of [R]_Q = [R']_Q. The usual proofs use some
basic facts of division of primes, that would be hard to implement here. Instead we prove by
induction on
This method allows us to work
with numbers smaller than the q i 's and again since these numbers are sufficiently small, we can
verify all possibilities.
6 Discussion and open problems
We have shown that TC 0 -Frege does not have feasible interpolation, assuming that factoring
of Blum integers is not efficiently computable. This implies (under the same assumptions)
that TC^0-Frege, as well as any system that can polynomially simulate TC^0-Frege, is not
automatizable. It is interesting to note that our proof and even the definition of the Diffie-Hellman
formula itself is nonuniform, essentially due to the nonuniform nature of the iterated
product formulas that we use. It would be interesting to know to what extent our result holds
in the uniform TC 0 proof setting.
A recent paper [BDGMP] extends our results to prove that bounded-depth Frege doesn't
have feasible interpolation assuming factoring Blum integers is sufficiently hard (actually
their assumptions are stronger than ours). As a consequence bounded-depth Frege is not
automatizable under somewhat weaker hardness assumptions.
An important question that is still open is whether resolution, or some restricted forms
of it, is automatizable. A positive answer to this question would have important applied
consequences.
7 Formal proof of main lemma
The goal of this section is to prove Lemma 4. As mentioned earlier, we will build up to the
proof of this lemma, by showing that basic facts concerning arithmetic, multiplication, iterated
multiplication and modulus computations can be efficiently carried out in our proof system.
Before we begin the formal presentation, we would like to note that we will be giving a precise
description of a sequence of lemmas that are sufficient in order to carry out a full, formal proof of
Lemma 4. However, since there are many lemmas and many of them have obvious proofs, we
will describe at a meta-level what is required in order to formalize the argument in TC^0-Frege,
rather than give an excessively formal TC^0-Frege proof of each lemma.
In what follows, x , y and z will be numbers. Each one of them will denote a vector of
n variables or formulas (representing the number), where n - N and x i (respectively y i , z i )
denotes the i th variable of x (representing the i th bit of the number x). When we need to talk
about more than three numbers, we will write z_1, ..., z_m to represent a sequence of m n-bit numbers
(where m, n ≤ N), and now z_{i,j} is the j-th variable of z_i (representing the j-th bit in
the i th number).
Recall that whenever we say below "there are TC 0 -Frege proofs" we actually mean to say
"there are polynomial size TC 0 -Frege proofs". Some trivial properties like
are not stated here.
7.1 Some basic properties of addition, subtraction and multiplication
Lemma 5 For every x, y, there are TC^0-Frege proofs of x + y = y + x.
Proof of Lemma 5 Immediate from the fact that the addition formula was defined in a
symmetric way. ut
Lemma 6 For every x, y, z, there are TC^0-Frege proofs of x + (y + z) = (x + y) + z.
Proof of Lemma 6 By the definition of the addition formula, the i-th bit of ((x + y) + z) is
equal to Φ_1(Φ_1(x_i, y_i, C_i(x, y)), z_i, C_i(x + y, z)), where C_i(x, y) is the carry bit going into the
i-th position, when we add x and y, and C_i(x + y, z) is similarly defined to be the carry bit
going into the i-th position when we add x + y and z; the bits of (x + (y + z)) are expressed analogously.
Using basic properties of Φ_1 and the above definitions, there is a simple
proof that if Φ_1(C_i(x, y), C_i(x + y, z)) = Φ_1(C_i(y, z), C_i(x, y + z)) for all i, then the two sums agree bitwise.
Thus it is left to show that for all i,
Φ_1(C_i(x, y), C_i(x + y, z)) = Φ_1(C_i(y, z), C_i(x, y + z)).
We will show how to prove the stronger equality
C_i(x, y) + C_i(x + y, z) = C_i(y, z) + C_i(x, y + z).
(It can be verified that this is the strongest equality possible for the 4 quantities
C_i(x, y), C_i(x + y, z), C_i(y, z), C_i(x, y + z). That is, all 6 assignments for these quantities
that satisfy the above equality are actually
possible.)
We will prove this by induction on i. For i = 1, the carry bits going into the first
position are zero, so the above identity holds trivially. To prove the above equality for i,
we assume that it holds for i − 1. We will prove the equality by considering many cases,
where a particular case will assume a fixed value to each of the following seven quantities:
x_{i-1}, y_{i-1}, z_{i-1}, C_{i-1}(x, y), C_{i-1}(x + y, z), C_{i-1}(y, z), C_{i-1}(x, y + z), subject to the condition
that C_{i-1}(x, y) + C_{i-1}(x + y, z) = C_{i-1}(y, z) + C_{i-1}(x, y + z). It is easy to check that the
number of cases is 48 since there are 2 choices for x_{i-1}, 2 choices for y_{i-1}, 2 choices for z_{i-1},
and 6 choices in total for C_{i-1}(x, y), C_{i-1}(x + y, z), C_{i-1}(y, z), C_{i-1}(x, y + z).
Each case will proceed in the same way. We will first show how to compute
C_i(x, y), C_i(x + y, z), C_i(y, z) and C_i(x, y + z) using the above seven values. Then we
simply verify that in all 48 cases where the inductive hypothesis holds, the equality is true.
First, we will show that C_i(x, y) = (x_{i-1} ∧ y_{i-1}) ∨ (Φ_1(x_{i-1}, y_{i-1}) ∧ C_{i-1}(x, y)).
This requires a proof along the following lines. If x_{i-1} = y_{i-1} = 1, then the left-hand side of the
above statement is true, since position i − 1 generates a carry, and also the right-hand side of
the statement is also true. Similarly, if x_{i-1} = y_{i-1} = 0, both sides of the above statement
are false (since position i − 1 neither generates nor propagates a carry). The last case is when x_{i-1} = 1, y_{i-1} = 0 (or
vice-versa). In this case, position i − 1 propagates a carry, so the i-th carry bit is 1 if and only if
there exists a j < i − 1 such that the j-th position generates a carry, and all positions between
j and i − 1 propagate carries - but this is exactly the definition of C_{i-1}(x, y). Thus, we have
in this last case that both sides of the statement are true if and only if C_{i-1}(x, y) = 1.
Using the above fact, and also that the (i − 1)-st bit of x + y equals Φ_1(x_{i-1}, y_{i-1}, C_{i-1}(x, y)), we see that C_i(x + y, z) is 1 if and
only if z_{i-1} and this bit both equal 1, or exactly one of them equals 1 and C_{i-1}(x + y, z) = 1. Similar
arguments show that C_i(y, z) and C_i(x, y + z) can also be computed as simple
formulas of the seven pieces of information. ut
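The "stronger equality" used above is stated here in the form reconstructed from the garbled text, namely C_i(x, y) + C_i(x + y, z) = C_i(y, z) + C_i(x, y + z); the following brute-force check confirms that this identity does hold for all small inputs (bit positions are indexed from 0 in the code).

```python
# Exhaustive check of the carry identity used in the proof of Lemma 6.
def carry_into(u, v, i):
    """Carry into bit position i when adding u and v (positions from 0)."""
    mask = (1 << i) - 1
    return ((u & mask) + (v & mask)) >> i          # 0 or 1

W = 5                                              # check all 5-bit x, y, z
for x in range(1 << W):
    for y in range(1 << W):
        for z in range(1 << W):
            for i in range(W + 2):
                lhs = carry_into(x, y, i) + carry_into(x + y, z, i)
                rhs = carry_into(y, z, i) + carry_into(x, y + z, i)
                assert lhs == rhs
```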
Lemma 7 For every x, y, there are TC^0-Frege proofs of (x + y) − y = x.
Proof of Lemma 7 (x + y) − y is computed by taking the first N bits of ((x + y) + ȳ) + 1. Note
that by the definition of the addition formula it follows easily that all bits of (y + ȳ) are 1, and
hence that ((y + ȳ) + 1) = 2^N and ((x + y) + ȳ) + 1 = x + 2^N,
and hence the first N bits of this number are the same as the first N bits of x . ut
Lemma 8 For every x, y with x ≥ y, there are TC^0-Frege proofs of (x − y) + y = x.
Proof of Lemma 8 x − y is computed by taking the first N bits of (x + ȳ) + 1. By the
definition of the addition formula, and since x ≥ y, it can be proved that the (N + 1)-st bit of
(x + ȳ) + 1 is 1, and hence that its first N bits represent x − y.
Therefore, as in Lemma 7, (x − y) + y agrees with x up to a multiple of 2^N.
In particular, the first N bits of (x − y) + y are the same as those of x.
Lemma 9 For every x; z , there are TC 0 -Frege proofs of x
Proof of Lemma 9 Follows immediately from Lemma 7 and Lemma 6 as follows:
For every z , there are TC 0 -Frege proofs of
Proof of Lemma 10 We need to show that for every j ,
This is shown by a rather tedious but straightforward proof following the definition of the
formula SUM for iterated addition. Namely, we show first that
and similarly that
Secondly, we show that [H using the definition of + . This second step is not
difficult because all carry bits are zero. ut
Lemma 11 For every z_1, ..., z_m, and every fixed permutation α, there are TC^0-Frege proofs of SUM[z_1, ..., z_m] = SUM[z_{α(1)}, ..., z_{α(m)}].
(That is, the iterated sum is symmetric.)
Proof of Lemma 11 Immediate from the fact that the formula SUM was defined in a
symmetric way. ut
Lemma 12 For every z , there are TC 0 -Frege proofs of
Proof of Lemma 12 By definition of the iterated addition formula SUM , it is straightforward
to prove that
and similarly that
Then it is also straightforward to show, using the definition of the formula for + , that
for every j . (Again, all carry bits are zero.) ut
Lemma 13 For every z 1 ; :::; z m , there are TC 0 -Frege proofs of
Proof of Lemma 13 Recall that SUM [z computed by adding two numbers H;L .
Recall that L is computed by first computing the numbers L
is the
low-order half of the k th block of z i . The first equality follows from Lemma 13, and Similarly,
computed by H 0 +L 0 , where L 0 is computed by first computing the numbers
In both L k ; L 0
k the sum is computed using poly-size threshold gates, e.g. by using the unary
representation of each L i;k . It is therefore straightforward to prove for each k , L
(e.g. by trying all the possibilities for L 0
proving the formula separately for each
possibility).
Now consider the formula Since in this addition there is no carry flow from one
block to the next one, and since the bits of L; L in each block are just the bits of L k ; L 0
(respectively), we can conclude that in a similar way we can prove that
we are now able to conclude
ut
Lemma 14 For every z 1 ; :::; z m , there are TC 0 -Frege proofs of
Proof of Lemma 14 Can be proved easily from Lemma 13, and Lemma 6 as follows:
Lemma 15 For every z 1 ; :::; z m , and every 1 - k - m, there are TC 0 -Frege proofs of
Proof of Lemma 15 By Lemma 13, Lemma 14 and Lemma 11, we have
The proof now follows by repeating the same argument m \Gamma k times, where Lemma 12 is used
for the base case. ut
Lemma 16 For every x, y, there are TC^0-Frege proofs of
x · y = y · x.
Proof of Lemma 16 Immediate from the fact that the product formula was defined in a
symmetric way. ut
Lemma 17 For every x, y, where x is a power of 2, there are TC^0-Frege proofs of
x
Proof of Lemma 17 It is straightforward to prove that 2 i \Delta y where y is any sequence of bits,
consists of adding to the end of y , i 0's. The lemma easily follows. ut
Lemma 18 For every z_1, ..., z_m, and every x, where x is a power of 2, there are TC^0-Frege
proofs of
Proof of Lemma The proof of this lemma is like the proof of Lemma 21, but using
Lemma 17 instead of 20. ut
Lemma 19 For every x, y, where x and y are powers of 2, there are TC^0-Frege proofs of
x
Proof of Lemma 19 Same as the proof of Lemma 17. ut
The following three lemmas are generalizations of the previous three lemmas.
Lemma 20 For every x; y; z there are TC 0 -Frege proofs of
x
Proof of Lemma 20 By definition of the product formula,
x
Similarly,
x
By iterative application of Lemma 15 (using also Lemma 11),
Similarly (using also Lemma 13, and Lemma 14),
and in the same way (using the same lemmas)
Thus, we have to prove
We will prove this by proving that for every i ,
this is trivial. Otherwise, x using Lemma 19, Lemma 18, and Lemma
we have
In the same way, (using also Lemma 17),
ut
Lemma 21 For every z 1 ; :::; z m , and every x, there are TC 0 -Frege proofs of
Proof of Lemma 21 We will show that for every i ,
The lemma then follows by the combination of all these equalities. The case proven as
follows:
The first equality follows by applying Lemma 13, and the second equality by Lemma 12.
For the general step:
The first equality follows from Lemmas 13 and the second equality follows from Lemma 20; the
third equality follows from Lemma 13. ut
Lemma 22 For every x; z , there are TC 0 -Frege proofs of
x
Proof of Lemma 22 We will show that x \Delta (y \Delta z) is equal to SUM i;j;k [2 i+j+k\Gamma3 x . The
same will be true for and the lemma follows.
By the definition of the product,
y
Hence, by Lemma 10, and two applications of Lemma 21 (and using freely Lemma 16),
x
Since it can be easily verified that (2 , the above is equal to
and by an iterative application of Lemma 15 (using also Lemma 11) the above is equal to
7.2 Some basic properties of less-than
Lemma 23 For every x; y , there are also of
Proof of Lemma 23 Either there is a bit i such that i is the most significant bit where x
and y differ, or not. If all bits are equal, then But if there is i such that it is the most
significant bit where they differ, then if x
Lemma 24 For every x; y , there are
Proof of Lemma 24 By lemma 23, Suppose for a contradiction
that and we get x ? x (which is easily proved to be
false).
Lemma 25 For every x; y; z there are
Proof of Lemma 25 If then the proof of the first statement is obvious. Otherwise,
suppose that i is the most significant bit where x i 6= y i and that x Similarly,
suppose that j is the most significant bit where y j 6= z j and y
it is easy to show that i is the most significant bit where x i 6= z i , and x
thus x ? z . Similarly, if j ? i , then j is the most significant bit where x j 6= z j and x
reasoning also implies the second statement in the lemma. ut
Lemma 26 For every x; z , there are TC 0 -Frege proofs of x+z - x; and also z ?
Proof of Lemma 26 If z = 0 , then it is clear that x show
inductively for decreasing k that
Then when
Assuming that z ? 0 , let z i 0 be the most significant bit such that z i . The base case
of the induction will be to show that x Because z
and applying Lemma 12, it suffices to show that x There are two
cases. If x is equal to x n x
The other case is when x be the most significant bit position greater than i 0 such
that x One clearly exists because x
higher bits are equal, and thus x
For the inductive step, we assume that x want to show
that x Using the same argument as in the base case, one can
prove that (a) x . By the inductive
hypothesis, (b) x Applying Lemma 25 to (a) and (b), we obtain
as desired. ut
Lemma 27 For every x; y; z there are
Proof of Lemma 27 If
Lemma 26. Then by Lemma
Lemma 28 For every x; y; z there are
Proof of Lemma 28
The first equality follows from Lemma 8, and the second from Lemma 26, and the fact that
Lemma 29 For every x; y , there are TC 0 -Frege proofs of y
Proof of Lemma 29 x by definition. Also, since y ? 0 there is a bit
of y that is 1 , and suppose that it is y l . Then
x
ut
7.3 Some basic properties of modular arithmetic
Recall that the formulas for [z] p and div p (z) take as inputs not only the variables p and z , but
also variables k . The formulas give the right
output if n). So the following theorems
will all have the hypothesis that the values for the variables k i , r i and p i are correct, and that
there are short We will state this
hypothesis for the first lemma, and omit it afterwards for simplicity. For simplicity, we will also
use the notations k
The lemmas will be used with either is the number used for the DH
formula, or with fixed hardwired value (e.g.,
q j is one of the primes used for the iterated product formula, and Q is the product of all these
primes). If hardwired q then k can also be hardwired. Hence,
their values are correct and it is straightforward to check (i.e., to prove) that the non-variable
are all correct. If
are inputs for the DH formula itself, and the requirements 2
are part of the requirements in the DH formula.
Lemma 30 Let z and p be n-bit numbers. Then there are TC^0-Frege proofs of
Proof of Lemma 30
ut
Lemma 31 For every z and p, there are TC^0-Frege proofs of z = p · div_p(z) + [z]_p.
Also, the following uniqueness property has a TC^0-Frege proof: if z = p · y + s and s < p,
then y = div_p(z) and s = [z]_p.
Proof of Lemma 31 From the previous lemma, we can express z as SUM i [(r
Let l be the same as in the definition of the modulo formulas. Then
The first equality follows from the definitions of the formulas [z] p and div p (z) . The remaining
equalities follow from the following Lemmas: 20, 6, 8, 21, 13, 15, 14 and 30.
Let us now prove the uniqueness part. Suppose that z = p · y + s with s < p. If y = div_p(z),
then we are done. But if div_p(z) > y, then the claim below yields a contradiction. (And a similar
argument holds when div_p(z) < y.)
Proof of the claim Since v ? y , by Lemma 8, and Lemma
Lemma 9 we get that Therefore by Lemmas 29 and 26,
and by Lemma 25 we get p - x . ut
Lemma 33 For every z; k and p, there are TC 0 -Frege proofs of
Proof of Lemma 33 Let
(by Lemma 31).
Therefore,
By the uniqueness part of Lemma 31 applied to x , [z]
Lemma 34 For every x; y; z and p, there are -Frege proofs of
and also of
and
Proof of Lemma 34 By Lemma 31,
and by Lemma 33,
A similar argument shows that [x
Lemma 35 For every z 1 ; :::; z m and p, there are TC 0 -Frege proofs of
Proof of Lemma 35 The lemma follows easily from Lemma 13 and Lemma 34. ut
Lemma 36 For every x; y and p, there are TC 0 -Frege proofs of
Proof of Lemma 36
The first equality follows from Lemma 34; the next equality follows from the assumption that
and the third equality follows from Lemma 34. ut
Lemma 37 For every x; y; z and p, there are -Frege proofs of
Proof of Lemma 37 Assuming that [x it follows from the above Lemma 36
that [x . The left side of the equation is equal to:
The first equality follows from Lemma 34; the second equality follows from Lemmas 5, 6 and
7; the third equality follows from Lemma 33. Similarly, it can be shown that [y
and thus the lemma follows. ut
Lemma 38 For every x; y and p, there are TC 0 -Frege proofs of
Proof of Lemma 38
where the last equality follows from Lemma 33. ut
Lemma 39 Let A, B, and C be fixed numbers such that A = BC. Then for every z, there are
TC^0-Frege proofs of
This lemma will be used in situations where . Recall that the
numbers Q; are hardwired and also their corresponding k i , r i and the variables for the
's. Hence, we think of A; B; C as hardwired.
Proof of Lemma 39 Using Lemmas 31,33,22, we get
ut
7.4 Some basic properties of iterative product
Lemma 40 For every z 1 ; :::; z m , and every fixed permutation ff , there are TC 0 -Frege proofs of
(That is, the iterated product is symmetric.)
Proof of Lemma 40 This lemma is immediate from the symmetric definition of PROD . ut
Lemma 41 For every z 1 ; :::; z m , and every 1 - k - m, there are TC 0 -Frege proofs of
Proof of Lemma 41 Recall that we have hard-coded the numbers u j , such that u j mod q
and for all i . For all primes q j dividing Q , and for all m ,
we can verify the following statements: (Note
that these statements are variable-free and hence they can be easily proven by doing a formula
evaluation.)
Recall that for any k , the iterated product of the numbers z k ; :::; z m is calculated as follows:
where r [k;:::;m]
j is computed like r j as defined in Section 3.4, but using r ij only for i such that
In the same way,
where r [1;::;k\Gamma1;[k;::;m]]
j is calculated as before by the following steps:
1. For
, and also calculate r
2. For calculate a i;j such that (g a i;j
also a ;j such that (g a ;j
3. Calculate c 0
4. Calculate r [1;::;k\Gamma1;[k;::;m]]
j such that g c 0
by table-lookup.
Therefore, all we have to do is to show that
Hence, all we need to do to prove Lemma 41 is to show the following claim:
Claim 42 For every j, there are TC^0-Frege proofs of r^{[1,...,k-1,[k,...,m]]}_j = r^{[1,...,m]}_j.
The first step is to prove the following claim:
Claim 43 There are TC^0-Frege proofs of [PROD[z_k, ..., z_m]]_{q_j} = r^{[k,...,m]}_j.
Claim 43 is proven as follows.
[r [k;::;m]
The second equality follows by Lemma 39; the third equality follows by Lemma 35, and
Lemma 11. To prove the fourth equality, we need to use the fact that [u j \Delta r [k;:::;m]
and also for all i
These facts can be easily proved just by checking
all possibilities for r [k;:::;m]
(proving the statement for each possibility is easy, because these
statements are variable-free and hence they can be easily proven by doing a formula evaluation).
In order to prove the fourth equality formally, we can show that SUM i6=j [u i \Delta r [k;:::;m]
equals
zero by induction on the number of terms in the sum. ut
We can now turn to the proof of Claim 42. The quantity r [1;:::;m]
j is obtained by doing a
table lookup to find the value equal to g c j
\Gamma1) . Similarly,
the quantity r [1;:::;k\Gamma1;[k;:::;m]]
j is obtained by doing a table lookup to find the value equal to
Hence, it is enough to prove that c
. Using previous lemmas
Thus, it suffices to show that
Recall that a ;j is the value obtained by table-lookup such that (g a ;j
by Claim 43, we have that r
j , in turn, is the value obtained by
table-lookup to equal (g d
Now it is easy to verify that our table-lookup is one-to-one. That is, for every x;
. Using this property (with
7.5 The Chinese Remainder Theorem and other properties of iterative prod-
uct
The heart of our proof is a proof for the following lemma, which gives the hard
direction of the Chinese Remainder Theorem (a proof for the other direction is
simpler).
Lemma 44 Let R; S be two integers, such that for every j , [R] q j
: Then there are
-Frege proofs of
(where are the fixed primes used for the PROD formula (i.e., the first t primes), and
Q is their product.)
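Before the proof, a toy numerical instance (ours, not taken from the paper) may help to see what the lemma asserts: a vector of residues modulo the primes q_j determines at most one number below Q.

% Illustration only, with hypothetical small primes q_1 = 2, q_2 = 3, q_3 = 5 and Q = 30.
\[
  R \equiv 1 \pmod{2},\qquad R \equiv 2 \pmod{3},\qquad R \equiv 3 \pmod{5}
  \;\Longrightarrow\; R = 23 \text{ is the only solution with } 0 \le R < 30 .
\]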
Proof of Lemma 44 Without loss of generality, we can assume that 0 - R; S -
prove that R = S . Otherwise, define R use Lemma 39 to show
that for every j , [R 0
. Since 0 - R 0 conclude that
For every k , let Q k denote
Note that the numbers Q k can be hardwired, and that
one can easily prove the following statements. (These statements are variable-free and hence
they can be easily proven by doing a formula evaluation.)
For every i ,
The proof of the lemma is by induction on t (the number of q j 's). For
the lemma is trivial. Assume therefore by the induction hypothesis that
and hence
Denote,
[R] , and D
[S] . Then by Lemma 31,
and
and since we know that [R] q t
; we have
and by [R] Q
, and Lemma 37
Since R; S are both lower than Q , it follows that DR ; D S are both lower than q t . Hence,
by Claim 45, D_R = D_S. Therefore, we can conclude that
Claim 45 For every i, there are
then
Proof Since d 1 are only O(log n) possibilities for d 1 ; d 2 . Therefore, one can just
check all the possibilities for d 1 ; d 2 . Proving the statement for each possibility is easy, because
these statements are variable-free and hence they can be easily proven by doing a formula
evaluation.
Alternatively, one can define the function
, in the domain {0, ..., q_i}, and
prove that f(x) is onto the range {0, ..., q_i}. Then, by applying the propositional pigeonhole
principle, which is efficiently provable in TC^0-Frege, it follows that f is one to one. ut
We are now able to prove the following lemmas.
Lemma 46 For every z , there are TC 0 -Frege proofs of
Proof of Lemma 46 Recall that PROD[z] is calculated as follows:
where r j is computed by r
By Claim 43, for every i , PROD[z] q i
thus have for every i , PROD[z] q i
The proof of the lemma now follows by Lemma 44.
ut
Lemma 47 For every z 1 ; z 2 , there are TC 0 -Frege proofs of
Proof of Lemma us prove that for every i ,
The proof of the lemma then follows by Lemma 44. By two applications of Lemma 38 it is
enough to prove for every i ,
Recall that PROD[z 1 ; z 2 ] is calculated as follows:
where r [1;2]
j is computed like r j as defined in Section 3.4. By Claim 43, for every i ,
Recall that [z 1
r 2;i . Therefore, all we have to prove is that for every
r [1;2]
By the definitions: r
Also,
r [1;2]
Therefore, one can just check all the possibilities for a 1;i ; a 2;i .
ut
Using the previous lemmas, we are now able to prove the following:
Lemma 48 For every z every p, there are TC 0 -Frege proofs of
(as before, given that 2
Proof of Lemma 48
The lemmas used for each equality in turn are: Lemmas 41,47, 38,47,41, and 41. ut
We are now ready to prove Lemma 4:
For every z 1;1 ; :::; z m;m 0 and p , there are TC 0 -Frege proofs of
(given that 2
Proof of Lemma 4 By an iterative application of the previous lemma. ut
Acknowledgments
We are very grateful to Omer Reingold and Moni Naor for collaboration at early stages of this
work, and in particular for suggesting the use of the Diffie-Hellman cryptographic scheme. We
also would like to thank Uri Feige for conversations and for his insight about extending this
result to bounded-depth Frege. Part of this work was done at Dagstuhl, during the Complexity
of Boolean Functions workshop (1997).
--R
"The Monotone Circuit Complexity of Boolean Functions"
"Minimal propositional proof length is NP-hard to linearly approximate"
"Generalized Diffie-Hellman modulo a composite is not weaker than factoring"
" Log depth circuits for division and related problems,"
"Non-automatizability of bounded-depth Frege proofs"
"Lower bounds for Cutting Planes proofs with small coefficients,"
"The Undecidability of k-Provability"
"On Godel's theorems on lengths of proofs II: Lower bounds for recognizing k symbol provability,"
"Cutting planes, connectivity and threshold logic,"
"An Exponential Lower Bound for the Size of Monotone Real Circuits,"
"Constant depth reducibility,"
"New directions in cryptography,"
"Upper and lower bounds for tree-like Cutting Planes proofs,"
"Interpolation theorems, lower bounds for proof systems and independence results for bounded arithmetic"
"Discretely ordered modules as a first-order extension of the cutting planes proof system,"
"Threshold circuits of small majority depth,"
"A key distribution system equivalent to factoring,"
"A lower bound for the complexity of Craig's interpolants in sentential logic,"
"Lower bounds for resolution and cutting planes proofs and monotone computations,"
"Algebraic models of computation and interpolation for algebraic proof systems,"
"Lower Bounds for the Monotone Complexity of some Boolean Func- tions"
"Unprovability of lower bounds on the circuit size in certain fragments of bounded arithmetic,"
"Composite Diffie-Hellman public-key generating systems are hard to break,"
--TR
--CTR
Alexander Razborov, Propositional proof complexity, Journal of the ACM (JACM), v.50 n.1, p.80-82, January
Pavel Pudlk, On reducibility and symmetry of disjoint NP pairs, Theoretical Computer Science, v.295 n.1-3, p.323-339, 24 February
Olaf Beyersdorff, Classes of representable disjoint NP-pairs, Theoretical Computer Science, v.377 n.1-3, p.93-109, May, 2007
Albert Atserias, Conjunctive query evaluation by search-tree revisited, Theoretical Computer Science, v.371 n.3, p.155-168, March, 2007
Samuel R. Buss, Polynomial-size Frege and resolution proofs of st-connectivity and Hex tautologies, Theoretical Computer Science, v.357 n.1, p.35-52, 25 July 2006
Paolo Liberatore, Complexity results on DPLL and resolution, ACM Transactions on Computational Logic (TOCL), v.7 n.1, p.84-107, January 2006
Maria Luisa Bonet , Nicola Galesi, Optimality of size-width tradeoffs for resolution, Computational Complexity, v.10 n.4, p.261-276, May 2002
Albert Atserias , Mara Luisa Bonet, On the automatizability of resolution and related propositional proof systems, Information and Computation, v.189 n.2, p.182-201, March 15, 2004
Juan Luis Esteban , Nicola Galesi , Jochen Messner, On the complexity of resolution with bounded conjunctions, Theoretical Computer Science, v.321 n.2-3, p.347-370, August 2004 | diffie-hellman;threshold circuits;propositional proof systems;frege proof systems |
356491 | Constructing Planar Cuttings in Theory and Practice. | We present several variants of a new randomized incremental algorithm for computing a cutting in an arrangement of n lines in the plane. The algorithms produce cuttings whose expected size is O(r2), and the expected running time of the algorithms is O(nr). Both bounds are asymptotically optimal for nondegenerate arrangements. The algorithms are also simple to implement, and we present empirical results showing that they perform well in practice. We also present another efficient algorithm (with slightly worse time bound) that generates small cuttings whose size is guaranteed to be close to the best known upper bound of J. Matou{s}ek [Discrete Comput. Geom., 20 (1998), pp. 427--448]. | Introduction
A natural approach for solving various problems in computational geometry is the divide-and-
conquer paradigm. A typical application of this paradigm to problems involving a set L of n
lines in the plane, is to fix a parameter r > 0, and to partition the plane into regions R_1, ..., R_m
(those regions are usually vertical trapezoids, or triangles), such that the number of lines of L
that intersect the interior of R_i is at most n/r, for any 1 <= i <= m. This allows us to split the
problem at hand into subproblems, each involving the subset of lines intersecting a region R_i.
Such a partition is known as a (1/r)-cutting of the plane. See [Aga91] for a survey of algorithms
that use cuttings. For further work related to cuttings, see [AM95].
The first (though not optimal) construction of cuttings is due to Clarkson [Cla87]. Chazelle
and Friedman [CF90] showed the existence of (1/r)-cuttings of size O(r^2) (a bound that is
worst-case tight). They also showed that such cuttings, consisting of vertical trapezoids, can
be computed in O(nr) time. Although this construction is asymptotically optimal, it does not
seem to produce a practically small number of regions. Coming up with the smallest possible
number of regions (i.e., reducing the constant of proportionality) is important for the efficiency
of (recursive) data structures that use cuttings. Currently, the best lower bound on the number
of vertical trapezoids in a (1/r)-cutting in an arrangement of lines is 2.54(1 - o(1))r^2, and the
optimal cutting has at most 8r^2 + 6r + 4 trapezoids; see [Mat98]. Improving the upper and lower
bounds on the size of cuttings is still open, indicating that our understanding of cuttings is still
far from being satisfactory. In Section 3, we outline Matousek's construction for achieving the
upper bound and show a slightly improved construction (see below for details).
(A preliminary version of this paper appeared in the 14th ACM Symposium on Computational Geometry, 1998.
This work has been supported by a grant from the U.S.-Israeli Binational Science Foundation. This work is part
of the author's Ph.D. thesis, prepared at Tel-Aviv University under the supervision of Prof. Micha Sharir.
School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel; sariel@math.tau.ac.il;
http://www.math.tau.ac.il/~sariel/)
In spite of the theoretical importance of cuttings (in the plane and in higher dimensions),
we are not aware of any implementation of efficient algorithms for constructing cuttings. In this
paper we propose a new and simple randomized incremental algorithm for constructing cuttings,
and prove the expected worst-case tight performance bounds of the new algorithm, as stated in
the abstract. We also present empirical results on several algorithms/heuristics for computing
cuttings that we have implemented. They are mostly variants of our new algorithm, and they
all perform well in practice. An O(r 2 ) bound on the expected size of the cuttings for some of
those variants can be proved, while for the others no formal proof of performance is currently
available. We leave this as an open question for further research.
Matousek [Mat98] gave an alternative construction for cuttings, showing that there exists a
(1/r)-cutting with at most (roughly) 8r^2 vertical trapezoids. Unfortunately, this construction
relies on computing the whole arrangement, and its computation thus takes O(n^2) time. We
present a new randomized algorithm that is based on Matousek's construction; it generates a
(1/r)-cutting of size (1 + ε)8r^2, with expected running time slightly larger than O(nr) (see
Theorem 3.7 for the precise bound), where ε > 0 is a prescribed constant.
In Section 2, we present the new algorithm, and analyze its expected running time and the
expected number of trapezoids that it produces. Specifically, the expected running time is O(nr)
and the expected size of the output cutting is O(r^2). In Section 3 we present our variant of
Matousek's construction. In Section 4 we present our empirical results, comparing the new
algorithm with several other algorithms/heuristics for constructing cuttings. These algorithms
are mostly variants of our main algorithm, but they also include a variant of the older algorithm
of Chazelle and Friedman. The cuttings generated by the new algorithm and its variants are of
size, roughly, 14r^2. (The algorithms generate smaller cuttings when r is small; for small values of r
the constant is about 9.) In contrast, the Chazelle-Friedman algorithm generates cuttings
of size roughly 70r^2. Some variants of our algorithm are based on cuttings by convex polygons
with a small number of edges. These perform even better in practice, and we have a proof of
optimality only for one of the methods, PolyVertical, which can be interpreted as an extension
of CutRandomInc. We conclude in Section 5 by mentioning a few open problems.
Incremental Randomized Construction of Cuttings
Given a set "
S of n lines in the plane, let A( "
S) denote the arrangement of "
namely, the partition
of the plane into faces, edges, and vertices as induced by the lines of "
S. Let AVD
S) denote the
partition of the plane into vertical trapezoid, obtained by erecting two vertical segments up and
down from each vertex of A( "
S), and extending each of them until it either reaches a line of "
or all the way to infinity.
Computing the decomposed arrangement AVD
S) can be done as follows. Pick a random
permutation S =! s
S.
incrementally the decomposed arrangements AVD (S i ), for inserting the i-th line
s i of S into AVD (S To do so, we compute the zone Z i of s i in AVD (S i\Gamma1 ), which is the set
of all trapezoids in AVD (S i\Gamma1 ) that intersect s i . We split each trapezoid of Z i into at most 4
trapezoids, such that no trapezoid intersects s i in its interior, as in [SA95]. Finally, we perform a
pass over all the newly created trapezoids, merging vertical trapezoids that are adjacent, and have
identical top and bottom lines. The merging step guarantees that the resulting decomposition is
independently of the insertion order of elements in S i ; see [dBvKOS97].
However, if we decide to skip the merging step, the resulting structure, denoted as A j (S i ),
depends on the order in which the lines are inserted into the arrangement. In fact, A j (S i ) is
additional superfluous vertical walls. Each such vertical wall is a fragment of a
vertical wall that was created at an earlier stage and got split during a later insertion step.
Definition 2.1 Let S̄ be a set of n lines in the plane, and let c > 0 be a constant. A c-cutting
of S̄ is a partition of the plane into regions R_1, ..., R_m, such that, for each 1 <= i <= m, the
number of lines of S̄ that intersect the interior of R_i is at most cn.
A region C in the plane is c-active if the number of lines of S̄ that intersect the interior of
C is larger than cn.
A (1/r)-cutting is thus a partition of the plane into m regions such that none of them is
(1/r)-active. Chazelle and Friedman [CF90] showed that one can compute, in O(nr) time, a
(1/r)-cutting that consists of O(r^2) vertical trapezoids.
We propose a new algorithm for computing a cutting that works by incrementally computing
the arrangements A j (S i ), using a random insertion order of the lines. The new idea in the
algorithm, is that any "light" trapezoid (i.e., a trapezoid that is not (1=r)-active) constructed
by the algorithm is immediately added to the final cutting, and the algorithm does not maintain
the arrangement inside such a trapezoid from this point on. In this sense, one can think of the
algorithm as being greedy; that is, it adds a trapezoid to the cutting as soon as one is constructed,
until the whole plane is covered. The algorithm, called CutRandomInc, is depicted in Figure 1.
If CutRandomInc outputs C_k, for some k < n, then C_k has no (1/r)-active trapezoids, and it
is thus a (1/r)-cutting. Otherwise, if C_n is output, then again it has no (1/r)-active trapezoids,
because any such trapezoid must have been processed and split earlier (when one of the lines
crossing the trapezoid is inserted). Thus C_n is a (1/r)-cutting. This implies the correctness of
CutRandomInc.
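To make the description concrete, the following self-contained C++ sketch implements the loop of Figure 1 in the simplest possible way. It is not the implementation of Section 4: it uses doubles instead of exact arithmetic, recomputes conflict lists naively instead of maintaining a conflict graph, and restricts the cutting to the unit square (as in the experimental setup of Section 4.3); all type and function names are ours.

// Minimal sketch of CutRandomInc: lines are y = a*x + b, trapezoids are clipped to the unit square.
#include <cstdio>
#include <vector>
#include <random>
#include <algorithm>

struct Line { double a, b; };
// Vertical trapezoid: { (x,y) : xl <= x <= xr, bot(x) <= y <= top(x) }.
struct Trap { double xl, xr; Line top, bot; };

// Open x-interval on which line s lies strictly between bot and top of T; false if s misses the interior.
static bool interiorInterval(const Trap& T, const Line& s, double& a, double& b) {
    a = T.xl; b = T.xr;
    auto clipBelow = [&](const Line& u) {            // keep { x : s(x) < u(x) }
        double da = s.a - u.a, db = s.b - u.b;       // s(x) - u(x) = da*x + db
        if (da == 0) { if (db >= 0) b = a - 1; return; }
        double x0 = -db / da;
        if (da > 0) b = std::min(b, x0); else a = std::max(a, x0);
    };
    auto clipAbove = [&](const Line& u) {            // keep { x : s(x) > u(x) }
        double da = s.a - u.a, db = s.b - u.b;
        if (da == 0) { if (db <= 0) b = a - 1; return; }
        double x0 = -db / da;
        if (da > 0) a = std::max(a, x0); else b = std::min(b, x0);
    };
    clipBelow(T.top); clipAbove(T.bot);
    return a < b;
}

static bool conflicts(const Trap& T, const Line& s) { double a, b; return interiorInterval(T, s, a, b); }

// Split T by s into at most four trapezoids covering T, none crossed by s in its interior.
static std::vector<Trap> split(const Trap& T, const Line& s) {
    double a, b;
    if (!interiorInterval(T, s, a, b)) return {T};
    std::vector<Trap> out;
    if (a > T.xl) out.push_back({T.xl, a, T.top, T.bot});   // left slab
    out.push_back({a, b, T.top, s});                        // middle, above s
    out.push_back({a, b, s, T.bot});                        // middle, below s
    if (b < T.xr) out.push_back({b, T.xr, T.top, T.bot});   // right slab
    return out;
}

int main() {
    const int n = 400, r = 8;
    std::mt19937 rng(17);
    std::uniform_real_distribution<double> U(0.0, 1.0);
    std::vector<Line> S(n);                                  // random lines joining the left and right edges
    for (auto& l : S) { double y0 = U(rng), y1 = U(rng); l = { y1 - y0, y0 }; }
    std::shuffle(S.begin(), S.end(), rng);                   // random insertion order

    std::vector<Trap> done;                                  // inactive trapezoids: the final cutting
    std::vector<Trap> active;
    active.push_back({0.0, 1.0, {0.0, 1.0}, {0.0, 0.0}});    // C_0 = the unit square
    size_t i = 0;
    while (!active.empty() && i < S.size()) {
        const Line s = S[i++];
        std::vector<Trap> next;
        for (const Trap& T : active) {
            std::vector<Trap> pieces = conflicts(T, s) ? split(T, s) : std::vector<Trap>{T};
            for (const Trap& P : pieces) {
                long w = (long)std::count_if(S.begin(), S.end(),
                                             [&](const Line& l) { return conflicts(P, l); });
                (w > n / r ? next : done).push_back(P);      // keep only the (1/r)-active pieces
            }
        }
        active.swap(next);
    }
    std::printf("cutting size: %zu (%.2f r^2)\n", done.size(), done.size() / double(r * r));
}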
The covering C i of the plane maintained by CutRandomInc depends heavily on the order
in which the lines are inserted into the arrangement. To bound the expected running time of
CutRandomInc, and the expected size of the cutting that it computes, we adapt the analysis of
Agarwal et al. [AMS94] to our case.
The following elegant argument, due to Agarwal et al. [AEG98], shows that the expected
complexity of A j (S i ) remains quadratic (i.e. we use CutRandomInc to compute a 0-cutting.):
Lemma 2.2 Let "
S be a set of n lines, and let S be a random permutation of "
S. Then the
expected complexity of A j (S) is O(n 2 ), where A j (S) is the decomposed arrangement computed by
the incremental algorithm outlined at the beginning of the section, without performing merging.
Proof: Let V be the set of all intersection points of pairs of lines of "
(the vertices of A( "
S)).
For be an indicator variable, such that D(v; i) (resp. U(v; i)) is
1 if the vertical wall emanating from v still exists in A j (S) as we go downward (resp. upward)
Algorithm
Input: A set "
S of n lines, a positive integer r
Output: A (1=r)-cutting of "
S by vertical trapezoids
begin
Choose a random permutation S =! s
S.
while there are (1=r)-active trapezoids in C i do
Zone i / The set of (1=r)-active trapezoids in C i\Gamma1 that intersect s i .
Zone 0
is the operation of splitting a vertical trapezoid T
crossed by a line s into at most four vertical trapezoids, as in [dBvKOS97],
such that the new trapezoids cover T , and they do not intersect
s in their interior.
while
return C i
Figure
1: Algorithm for constructing a (1=r)-cutting of an arrangement of lines
from v after crossing i lines. Clearly, the complexity of A j (S) is proportional to
n + Σ_{v∈V} Σ_{i≥0} ( D(v, i) + U(v, i) ).
However, if D(v, i) = 1 then the two lines defining v must appear in S before the first i lines
that intersect the downward ray emanating from v. Thus, Pr[ D(v, i) = 1 ] <= 2/((i + 1)(i + 2)), and the
same inequality holds for U(v, i). Therefore, the expected complexity of A j (S) is
O( n + Σ_{v∈V} Σ_{i≥0} 1/((i + 1)(i + 2)) ) = O(n + |V|) = O(n^2).
The analysis is applied in the following abstract framework. Let "
S be a set of n objects (in
our case the objects are lines in IR 2 ). A selection of "
S is an ordered sequence of distinct elements
of "
S. Let oe( "
S) denote the set of all selections of "
S. For a permutation S of "
S, let S i denote the
subsequence consisting of the first i elements of S, for n. For each R 2
S), we define
a collection CT (R) of 'regions' (in our case, a region will be either a trapezoid or a segment),
each defined by a small subset of R. Let T
denote the set of all possible
regions.
We associate two subsets D(\Delta); K (\Delta) ' S with each region \Delta 2 T , where D(\Delta) is the
defining set of \Delta, in the sense that \Delta 2 CT (R) only if D(\Delta) ' R. The size of D(\Delta) is assumed
to be bounded by a fixed constant. The set K (\Delta) is the killing set of \Delta; namely, if K (\Delta) "R 6= ;,
then
denote the weight of \Delta. Let CT (R; denote the set of
all regions of CT (R) having weight at least k, where k is a positive integer. For a region \Delta in
the plane, we denote by CT (R; k; \Delta) the set of all regions of CT (R; k) that are contained in \Delta.
S be a set of objects such that, for any sequence R 2
S), the following axioms hold:
(A) For any \Delta 2 CT (R), we have D(\Delta) ' R, and K (\Delta)
(R), then for any subsequence R 0 of R, such that D(\Delta) ' R 0 , we have
(R 0 ).
The above is a natural extension of the settings of Agarwal et al. [AMS94], where the insertion
order of objects in the sample is important. For any natural number k, define
The following key lemma asserts that the expected number of heavy regions decreases exponentially
as a function of their weight.
Lemma 2.3 Given a set "
S of n objects, let R be a random sequence of r n distinct elements of
S, where each such sequence (of size r) is chosen with equal probability, and let t be a parameter,
Assuming that "
S satisfies Axioms (A) and (B), we have
(R 0 )j
where R 0 is a random subsequence of "
S, as above, of size r
Proof: The proof is a straightforward adaptation of the proof of Lemma 2.2 of [AMS94] to
our ordered sampling.
Intuitively, as the execution of CutRandomInc progresses, the number of trapezoids with heavy
weight becomes smaller. Unfortunately, Lemma 2.3 can not be applied directly to analyze the
distribution of weights of the active trapezoids in C i . Since, the axiom (B) does not hold for the
active trapezoids in C i . See Remark 2.10 below. To analyze CutRandomInc, we prove a weaker
version of Lemma 2.3, by relying on the fact that C i "lies" between two structures for which
Lemma 2.3 hold.
In the following, "
S denote a set of n lines in the plane. We denote by R a selection of "
S of
length r n.
Definition 2.4 A vertical segment which serves as a left or right side of a trapezoid, is called a
splitter. Let CT W (R) denote the set of splitters of the trapezoids of A j (R), A splitter in CT W (R)
is uniquely defined by 4 lines. Let
S) CT W (R) denote the set of all splitters that might
appear in A j (R). We denote by T the set of vertical trapezoids having a top and bottom line
from "
S.
Similarly, let CT V D (R) denote the set of trapezoids of AVD (R). A trapezoid in CT V D (R) is
defined by 4 lines.
Figure
2: A vertical trapezoid is of weight k, either because one of its vertical sides intersects at
least k=4 lines, or at least k=2 of the lines pass from the bottom to the top of the trapezoid.
Lemma 2.5 Let "
S be a set of n lines in the plane, and let R be a selection of "
S. Axioms (A)
and (B) holds for CT W (R) , and CT V D (R).
Proof: Let s be a splitter in CT W (R). The segment s is a part of a vertical ray emanating
from an intersection point p of two lines l 1 ; l 2 of R. Additionally, there are two additional lines
that define the bottom and top intersection points of s. Clearly, under general position
assumption, if s 2 CT W (R) then line that intersects s
can appear in R.
As for axiom (B), let R 0 be a subsequence of R that contains D(s). Let i be the minimal
index, such that l 1
. Clearly, at this point A j (R 0
contains a vertical wall that contains s,
since the vertex p is in A(R 0
either the downward or upward ray emanating from p must
include s, otherwise s can not appear in A j (R). Thus, after inserting l 3 ; l 4 into the arrangement
(R 0
i ), the splitter s will appear in the resulting arrangement. Implying that s 2 CT W (R 0 ).
As for the second part of the lemma, this is well known [dBvKOS97].
Lemma 2.6 Let \Delta be a trapezoid of weight k in AVD (R). Let l g be a set of
disjoint trapezoids contained inside \Delta, such that they have the same bottom and top lines as \Delta,
their splitters belong to CT W (R), and their weight is at least k 0 . Then, the number of trapezoids
of V is
Proof: If either vertical sides (i.e. splitters) of \Delta i intersects k 0 =4 lines, then we charge this
trapezoid to the relevant splitter of CT W (R; k 0 =4; \Delta). Otherwise, there are at least k
lines of S that intersects only the top and bottom parts of namely, those lines intersection
with \Delta lie inside \Delta i . See Figure 2. Thus, the number of such trapezoids is k=(k 0 =2).
Definition 2.7 Let k be a positive integer number, and let U be a set of disjoint trapezoids.
The set U is (k; R)-compliant if each trapezoid \Delta of U is of weight at least k, ffi is contained in
a single trapezoid of AVD (R), and \Delta is a union of trapezoids of A j (R).
Note, that the set of (1=r)-active trapezoids of C i (the covering of the plane computed after
the i-th iteration of CutRandomInc) are (n=r; S i )-compliant. Moreover, this remains true even if
CutRandomInc performs merging.
Lemma 2.8 Let R be a selection of "
S of r n distinct lines of "
S, where each such selection (of
size r) is chosen with equal probability, let U be a (n=r; R)-compliant set of trapezoids, and let t
be a parameter, 1 t r=6. We have
O
dn=re, and T
S).
Proof: Since U is (n=r; R)-compliant, we have by Lemma 2.6:
r ew(\Delta)(i+1)d n
r e
O
r
By Lemma 2.3, we have:
O
O
O
Theorem 2.9 The expected size of the (1=r)-cutting generated by CutRandomInc is O(r 2 ), and
the expected running time is O(nr), for any integer 1 r n.
Proof: Let CS(n; r) denote the maximum expected size of the (1=r)-cutting generated by
r), where the maximum is taken over all sets "
S of n lines in the plane.
Suppose we execute CutRandomInc until cr lines are inserted, where c is a constant to be specified
shortly. At this stage, the expected size of the covering cr computed by CutRandomInc
is O((cr) 2 ). Indeed, each trapezoid in C is a union of one or more trapezoids of A j (S cr ). Hence,
by Lemma 2.2, we have E[jCj] E[j A j (S cr Hence, if the algorithm terminates
before cr lines are inserted, the expected size of the covering is O(r 2 ).
For each CS(\Delta) to be the expected number of vertical trapezoids that are
contained in \Delta and belong to the final covering computed by the algorithm, if we resume the
execution of CutRandomInc until it terminates. However, CS(\Delta) CS(w(\Delta); dw(\Delta)r=ne), since
we can interpret the execution of CutRandomInc within \Delta, as executing CutRandomInc from fresh
on K (\Delta), in order to compute a
w(\Delta)r
-cutting inside \Delta. Indeed, if we set C 0 , in the algorithm,
to be a given trapezoid \Delta, then CutRandomInc will compute a cutting inside \Delta, and then only
the lines in K (\Delta) will be relevant to the behavior of the algorithm; see Figure 1. Moreover, the
analysis of the performance of CutRandomInc does not depend on the shape C 0 is initialized to.
Thus, we have
CS(n; r) O
\Delta2C;w(\Delta)?n=r
CS
w(\Delta)r
O
\Delta2C;
cr
cr
CS
c
Applying Lemma 2.2 and Lemma 2.8, we have that the expected number of vertical trapezoids
in C with weight tn=(cr) is O(2 \Gammat=4 (cr) 2 =t), where 1 t cr=6. Thus,
cr=6
t=c
O
CS
cr
c
O
t=c
CS
cr
c
If we choose c to be a sufficiently large constant, the solution to this recurrence is O(r 2 ), as is
easy to verify by induction.
We next analyze the expected running time of the algorithm. We implement CutRandomInc
using a conflict graph; namely, for each trapezoid \Delta of C i , we maintain a list of the elements
of K (\Delta), and similarly, for each line of "
S, we maintain a list of active trapezoids of C i that it
intersects (the "zone" of the line in C i ).
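One possible layout of this conflict graph is sketched below; it is a hypothetical illustration (the field names are ours), and it only shows the bookkeeping, not the splitting logic.

#include <vector>

struct ConflictGraph {
    // K[t]    = indices of the lines crossing the interior of active trapezoid t
    // zone[l] = indices of the active trapezoids crossed by line l
    std::vector<std::vector<int>> K, zone;

    explicit ConflictGraph(int numLines) : zone(numLines) {}

    // Register a new active trapezoid with its conflict list and update the zones.
    int addTrapezoid(const std::vector<int>& crossingLines) {
        int t = (int)K.size();
        K.push_back(crossingLines);
        for (int l : crossingLines) zone[l].push_back(t);
        return t;
    }
};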
Let W i denote the expected work in the i-th iteration of the algorithm. It is easy to verify
that say. For i ? 10, we analyze the expected value of W i , by applying
Lemma 2.2 and Lemma 2.8, to CutRandomInc after iterations were performed (i.e., our
random sample is of size i \Gamma 1). Then, the probability of a trapezoid to be processed
at the i-th iteration of the algorithm is w(\Delta)=(n
trapezoids are inactive). Moreover, if \Delta is being processed at the i-th stage, then the work
associated with \Delta (at this stage) is O(w(\Delta)). We thus have,
\Delta2C
r
w(\Delta)
\Delta2C
O
Hence,
ii
\Delta2C
O
O
by Lemma 2.2 and Lemma 2.8. Thus, the expected running time of the algorithm in the first cr
iterations, is O(cnr). Let T (n; r) be the maximum expected running time of CutRandomInc(
where the maximum is taken over all sets "
S of n lines in the plane. Consider a set "
S over which
T (n; r) is maximized. Arguing as above, we have
\Delta2C;w(\Delta)?n=r
\Delta2C;
cr w(\Delta)(t+1) n
cr
c
t=c
O
cr
c
cr=6
O (cnr) +O
t=c
cr
c
cr2 \Gammacr=24
using Lemma 2.8. For c sufficiently large, the solution to this recurrence is easily verified, using
induction, to be O(nr). This completes the proof of the theorem.
Remark 2.10 We can not apply Lemma 2.3 directly on the (1=r)-active regions of C i , because
the axiom (B) does not hold for this case. See Figure 3 for an example that shows that axiom
may be violated if we use merging. Even if we do not use merging in CutRandomInc, axiom
(B) is still violated. See Figure 4
The algorithm CutRandomIncworks also for planar arrangements of segments and x-monotone
curves (such that the number of intersection of each pair of curves is bounded by a constant).
l 3 l 2
l 1
l 4
l 6
l 5
a b
c
l 3 l 2
l 1
l 4
l 6
l 5
a
c
l 3 l 2
l 1
l 6
l 5
a b
c
(i) (ii) (iii)
Figure
3: Axiom (B) fails if merging is used: The thick lines represent two sets of 100 parallel
lines, and we want to compute a (1=10)-cutting. We execute CutRandomInc with the first 6
lines l 6 in this order. Note that any trapezoid that intersects a thick line is active.
The first trapezoid \Delta 0 inside 4abc that becomes inactive is created when the line l 5 is being
inserted; see parts (i) and (ii). However, if we skip the insertion of the line l 4 (as in part (iii)),
the corresponding inactive trapezoid \Delta 00 will extend downwards and intersect \Delta. Since \Delta 00 is
inactive, the decomposition of the plane inside \Delta 00 is no longer maintained. In particular, this
implies that the trapezoid \Delta will not be created, since it is being blocked by \Delta 00 , and no merging
involving areas inside \Delta 00 will take place. This is a contradiction to axiom (B), since l 4 does not
belong to the killing or defining sets of \Delta.
l 3 l 2
l 4
l 1
l 5
a
c
l 6 l 7
l 3 l 2
l 4
l 5
a
c
l 6 l 7
Figure
4: Axiom (B) fails if even if merging is not being used by CutRandomInc. Indeed, if
CutRandomInc insert the lines in the order l 1 then the trapezoid \Delta is being created.
See (i). However, if we skip the insertion of the line l 1 , then the trapezoid \Delta is not being created,
because the ray emanating downward from l 2 " l 3 intersects it.
This follows immediately by observing that Lemma 2.2 and Lemma 2.8 can be extended for those
cases, and that Axioms (A) and (B) hold for the vertical decomposition of such arrangements,
and for the set of splitters of such arrangements.
Lemma 2.11 Let \Gamma be a set of x-monotone curves such that each pair intersects in at most a
constant number of points. Then the expected size of the (1=r)-cutting generated by CutRandomInc
for \Gamma is O(r 2 ), and the expected running time is O(nr), for any integer 1 r n.
However, the arrangement of a set of n segments or curves might have subquadratic complexity
(since the number of intersection points might be subquadratic). This raises the question
whether CutRandomInc generates smaller cuttings for such sparse arrangements.
Definition 2.12 Let \Gamma be a set of curves in the plane. We denote by (\Gamma) the number of
intersection points between pairs of curves of \Gamma.
Lemma 2.13 Let "
\Gamma be a set of n curves, so that each pair of curves from \Gamma have at most
intersection points, and let \Gamma be a random permutation of "
\Gamma. Then the expected complexity
of A j (\Gamma) is O(n log n
\Gamma)), where A j (\Gamma) is the decomposed arrangement computed by the
incremental insertion algorithm, without performing merging.
Proof: Note that any intersection point of a pair of curves of " \Gamma, induces an upward and
downward vertical "walls", and the expected number of vertical walls in A j (\Gamma) is O(( "
\Gamma)), arguing
as in the proof of Lemma 2.2.
Additionally, there are vertical walls defined by the endpoints of the curves of \Gamma. Let p be
an endpoint of a curve \Gamma. The probability that the vertical upward ray v p emanating from
will introduce i vertical walls in A j (\Gamma), is the probability that fl will be chosen before the first
i curves of \Gamma that this vertical ray intersects. Thus, the expected number of superfluous vertical
walls introduced by v p is
Thus, the total number of vertical walls in A j (\Gamma) introduced by the endpoints of arcs in " \Gamma is
O(n log n), and the Lemma readily follows.
Corollary 2.14 Let " \Gamma be a set of n curves, so that each pair of curves from "
\Gamma have at most
intersection points, and let \Gamma r be a random selection of r elements of " \Gamma. Then the expected
complexity of A j (\Gamma r ) is O(r log r
Proof: We note that the probability of an intersection point of A( "
\Gamma) to appear in
is r(r\Gamma1)
. Hence, the expected number of intersection points of arcs of "
in A is
O
The lemma now readily follows by applying Lemma 2.13 to A j (\Gamma r ).
Theorem 2.15 Let " \Gamma be a set of n curves, such that each pair of curves of "
intersect in at
most a constant number of points. Then the expected size of the (1=r)-cutting generated by
CutRandomInc, when applied to \Gamma, is
O
and the expected running time is O(n log
\Gamma).
Proof: The proof is a tedious extension of the proof of Theorem 2.9. We derive similar recurrences
to the ones used in the proof of Theorem 2.9. In deriving and solving those recurrences,
we repeatedly apply the bounds stated in Lemma 2.11. We omit the details.
Remark 2.16 An interesting question is whether CutRandomInc can be extended to higher di-
mensions. If we execute CutRandomInc in higher dimensions, we need to use a more complicated
technique in decomposing each of our "vertical trapezoids" whenever it intersects a newly inserted
hyperplane. Chazelle and Friedman's algorithm uses bottom vertex triangulation for this
decomposition. However, in our case, it is easy to verify that CutRandomInc might generate
simplices that their defining set is no longer a constant number of hyperplanes. This implies that
Lemma 2.3 can no longer be applied to CutRandomInc in higher dimension. We leave the problem
of extending CutRandomInc to higher dimensions as an open problem for further research.
Generating Small Cuttings
In this section, we present an efficient algorithm that generates cuttings of guaranteed small size.
The algorithm is based on Matousek's construction of small cuttings [Mat98]. We first review
his construction, and then show how to modify it for building small cuttings efficiently.
Definition 3.1 ([Mat98]) Let L be a set of n lines in the plane in general position, i.e., every
pair of lines intersects in exactly one point, no three have a common point, no line is vertical or
horizontal, and the x-coordinates of all intersections are pairwise distinct. The level of a point
in the plane is the number of lines of L lying strictly below it. Consider the set E_k of all edges
of the arrangement of L having level k (where 0 <= k < n). These edges form an x-monotone
connected polygonal line, which is called the level k of the arrangement of L.
Definition 3.2 ([Mat98]) Let E_k be the level k in the arrangement A(L), with edges e_0, ..., e_t
(from left to right), and let p_i be a point in the interior of the edge e_i, for i = 0, ..., t.
The q-simplification of the level k, for an integer parameter 1 <= q <= t, is defined as the x-
monotone polygonal line containing the part of e_0 to the left of the point p_0, the segments p_0 p_q,
p_q p_{2q}, ..., and the part of e_t to the right of p_t. Let simp_q(E_k) denote the resulting polygonal
line.
Let L be a set of n lines, and let E_{i,q} denote the union of the levels i, i+q, i+2q, ..., and let
simp_q(E_{i,q}) denote the set of edges of the q-simplifications of the levels i, i+q, i+2q, ..., that is,
of the levels constituting E_{i,q}.
Matousek showed that the vertical decomposition of the plane induced by simp_q(E_{i,q}), where
q = n/(2r) (we assume that n is divisible by 2r), is a (1/r)-cutting of the plane, for any
0 <= i <= q - 1. Moreover, the following holds:
Theorem 3.3 ([Mat98]) Let L be a set of n lines in general position, let r be a positive
integer, and let q = n/(2r). Then the subdivision of the plane defined by the vertical decomposition
of simp_q(E_{m,q}) is a (1/r)-cutting of A(L), where m is the index i, 0 <= i <= q - 1, for
which |E_{i,q}| is minimized. Moreover, the cutting generated has at most 8r^2 + 6r + 4 trapezoids.
Remark 3.4 (i) Matousek's construction can be slightly improved, by noting that the leftmost
and rightmost points in a q-simplification of a level can be placed at "infinity"; that is, we replace
the first and second edges in the q-simplification by a ray emanating from p_q which is parallel
to e_0. We can do the same thing to the two last edges of the simplified level. We denote this
improved simplification by simp'_q. It is easy to prove that using this improved simplification
results in a (1/r)-cutting of A(L) with at most 8r^2 + O(r) trapezoids.
(ii) Inspecting Matousek's construction, we see that if we can only find an i such that
|E_{i,q}| <= c n^2/q, for a prescribed constant c >= 1, then the vertical decomposition induced by
simp_q(E_{i,q}) is a (1/r)-cutting having at most c(8r^2 + 6r + 4) trapezoids.
Matousek's construction is carried out by computing the numbers n_i = |E_{i,q}|, for i = 0, ..., q - 1,
and picking the minimal number n_i, which is guaranteed to be no larger
than the average n^2/q. Unfortunately, implementing this scheme directly requires computing the
whole arrangement A(L), so the resulting running time is O(n^2). Let us assume for the moment
that one can compute any of the numbers n_i quickly. Then, as the following lemma testifies, one
can compute a number n_i which is at most (1 + ε) times the average, without computing all the n_i's.
Lemma 3.5 Let n_0, ..., n_{q-1} be q positive integers, whose sum m = Σ_i n_i is known in
advance, and let ε > 0 be a prescribed constant. One can compute an index 0 <= k < q, such
that n_k <= ⌈(1 + ε)m/q⌉, by repeatedly picking uniformly and independently a random index
0 <= i < q, and by checking whether n_i <= ⌈(1 + ε)m/q⌉. The expected number of iterations
required is (1 + ε)/ε.
Proof: Let Y i be the random variable which is the value of n i picked in the i-th iteration.
Using Markov's inequality 1 , one obtain:
we have that the probability for failure in the i-th iteration is
1 The inequality asserts that Pr[Y >= t] <= E[Y]/t,
for a random variable Y that assumes only nonnegative values.
Let X denote the number of iterations required by the algorithm. Then E[X] is bounded
by the expected number of trials to the first success in a geometric distribution with probability
1+"
. Thus, the expected number of iterations is bounded by
1+"
To apply Lemma 3.5 in our setting, we need to supply an efficient algorithm for computing
the level of an arrangement of lines in the plane.
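Before turning to the level computation, the resampling loop of Lemma 3.5 is straightforward to implement. The C++ sketch below is illustrative only (the function name and interface are ours); it assumes the values n_i are available, whereas in our application they are computed on demand and the test is aborted early once the threshold is exceeded.

#include <vector>
#include <random>
#include <cmath>

// Returns an index k with n[k] <= ceil((1+eps)*m/q), where m = sum of n[i] and q = n.size().
// By Lemma 3.5, the expected number of iterations is at most (1+eps)/eps.
size_t pickLightIndex(const std::vector<long long>& n, long long m, double eps,
                      std::mt19937& rng) {
    const long long q = (long long)n.size();
    const long long threshold = (long long)std::ceil((1.0 + eps) * (double)m / (double)q);
    std::uniform_int_distribution<size_t> pick(0, n.size() - 1);
    for (;;) {
        size_t k = pick(rng);
        if (n[k] <= threshold) return k;   // by Markov's inequality, fails with probability <= 1/(1+eps)
    }
}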
Lemma 3.6 Let L be a set of n lines in the plane. Then one can compute, in O((n + m_k) log^2 n)
time, the level k of A(L), where m_k denotes the complexity of the level.
Proof: The technique presented here is well known (see [BDH97] for a recent example):- we
include it for the sake of completeness of exposition. Let e t be the edges of the level k
from left to right (where e are rays).
Let e be an edge of the level k. Clearly, there exists a face f of A(L) having e on its boundary
such that f lies above e. In particular, all the edges on the bottom part of @f belong to the level
k.
r be the faces of A(L) having the level k as their "floor", from left to right. The
ray e 0 can be computed in O(n) time since it lies on line l k of L, with the k-th largest slope.
Moreover, by intersecting l k with the other lines of L, one can compute e 0 in linear time.
Any face of A(L) is uniquely defined as an intersection of half-planes induced by the lines
of L. For the faces f 1 , we can compute the half-planes and their intersection, that corresponds
to f 1 , in O(n log n) time, see [dBvKOS97]. To carry out the computation of bottom parts of
dynamically maintain the intersection representing f i as we traverse the level
k from left to right. To do so, we will use the data-structure of Overmars and Van Leeuwen
[OvL81] that maintains such an intersection, with O(log 2 n) time for an update operation. As
we move from f i to f i+1 through a vertex v, we have to "flip" the two half-planes associated with
the two lines passing through v. Thus such operation will cost us O(log 2 n) time. Similarly, if
we are given an edge e on the boundary of f i we can compute the next edge in O(log 2 n) time.
Thus, we can compute the level k of A(L) in O((n + m_k) log^2 n) time.
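For completeness, here is a naive C++ sketch of a level walk and of a q-simplification. It is not the algorithm of Lemma 3.6: it runs in O(n |E_k|) time since it does not use the Overmars-van Leeuwen structure, it assumes lines in general position with distinct slopes, and it keeps every q-th level vertex instead of a point in the interior of every q-th edge (a simplification made only for this illustration).

#include <vector>
#include <limits>
#include <algorithm>

struct Ln { double a, b; double at(double x) const { return a * x + b; } };
struct Vx { double x, y; };

// Starting line of level k at x -> -infinity: the line with the (k+1)-st largest slope
// (exactly k lines lie below it far enough to the left).  Requires 0 <= k < L.size().
static int startLine(const std::vector<Ln>& L, int k) {
    std::vector<int> idx(L.size());
    for (int i = 0; i < (int)L.size(); ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(), [&](int i, int j) { return L[i].a > L[j].a; });
    return idx[k];
}

// Walk level k from left to right; returns the crossings it passes through.
static std::vector<Vx> levelK(const std::vector<Ln>& L, int k) {
    std::vector<Vx> verts;
    int cur = startLine(L, k);
    double x = -std::numeric_limits<double>::infinity();
    for (;;) {
        int nxt = -1; double bestX = std::numeric_limits<double>::infinity();
        for (int j = 0; j < (int)L.size(); ++j) {
            if (j == cur || L[j].a == L[cur].a) continue;        // skip parallel lines
            double xi = (L[cur].b - L[j].b) / (L[j].a - L[cur].a);
            if (xi > x && xi < bestX) { bestX = xi; nxt = j; }   // nearest crossing to the right
        }
        if (nxt < 0) break;                                      // the level ends with a ray on cur
        verts.push_back({bestX, L[cur].at(bestX)});
        cur = nxt; x = bestX;                                    // the level switches lines here
    }
    return verts;
}

// Crude q-simplification: keep every q-th level vertex (q >= 1) and the last one.
static std::vector<Vx> simplify(const std::vector<Vx>& level, int q) {
    std::vector<Vx> out;
    for (size_t i = 0; i < level.size(); i += q) out.push_back(level[i]);
    if (!level.empty() && (level.size() - 1) % q != 0) out.push_back(level.back());
    return out;
}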
Combining Lemma 3.5 and Lemma 3.6, we have:
Theorem 3.7 Let L be a set of n lines in the plane, and let ε > 0 be a prescribed constant.
Then one can compute a (1/r)-cutting of A(L), having at most (1 + ε)(8r^2 + 6r + 4) trapezoids.
The expected running time of the algorithm is O((1 + ε)/ε · nr log^2 n).
Proof: By the above discussion, it is enough to find an index 0 i q \Gamma 1, such that
dn=(2r)e. By Remark 3.4 (ii), the vertical
decomposition of simp 0
(1=r)-cutting of the required size.
Picking i randomly, we have to check whether |E_{i,q}| <= M. We can compute E_{i,q} by computing
the levels i, i + q, i + 2q, ... in an output-sensitive manner, using Lemma 3.6. Note
that if |E_{i,q}| > M, we can abort as soon as the number of edges we computed exceeds M. Thus,
checking if |E_{i,q}| <= M takes O((1 + ε)nr log^2 n) time. By Lemma 3.5, the expected number of
iterations the algorithm performs until the inequality |E_{i,q}| <= M is satisfied is (1 + ε)/ε.
Thus, the expected running time of the algorithm is
O
nr log 2 n
since the vertical decomposition of simp 0
(which is the resulting cutting) can be computed
in additional O((1 In fact, one can also compute, in O((1 for each
trapezoid in the cutting, the lines of L that intersect it.
4 Empirical Results
In this section, we present the empirical results we got for computing cuttings in the plane using
CutRandomInc and various related heuristics that we have tested.
The test program with a GUI of the alogrithm presented in this paper, is avaliable on the
web in source form. It can be downloaded from:
http://www.math.tau.ac.il/ ~ sariel/CG/cutting/cuttings.html
4.1 The Implemented Algorithms - Using Vertical Trapezoids
We have implemented the algorithm CutRandomInc presented in Section 2, as well as several
other algorithms for constructing cuttings. We have also experimented with the algorithm of
Section 3. In this section, we report on the experimental results that we obtained.
Most of the algorithms we have implemented are variants of CutRandomInc. The algorithms
implemented are the following:
Classical: This is a variant of the algorithm of Chazelle and Friedman [CF90] for constructing
a cutting. We pick a sample R ⊆ S̄ of r lines, and compute its vertical decomposition A = AVD(R).
For each active trapezoid Δ ∈ A, we pick a random sample R_Δ ⊆ K(Δ) of size 6k log k, where
k = ⌈r|K(Δ)|/n⌉, and compute the arrangement AVD(R_Δ) inside Δ. If AVD(R_Δ) is not
a (1/r)-cutting, then the classical algorithm performs resampling inside Δ until it reaches a
cutting. Our implementation is more naive, and it simply continues recursively into the active
subtrapezoids of AVD(R_Δ).
Cut Randomized Incremental: This is CutRandomInc without merging, as described in
Figure
1.
The following four heuristics, for which we currently do not have a proof of any concrete
bound on the expected size of the cutting that they generate, also perform well in practice.
Parallel Incremental: Let C i be the covering generated in the i-th iteration of the algorithm.
For each active trapezoid Δ of C_i, we pick a random line from K(Δ) and insert it into Δ (i.e.,
splitting Δ). Continue until there are no active trapezoids. Note that, unlike CutRandomInc, the
insertion operations are performed locally inside each trapezoid, and the line chosen for insertion
in each trapezoid is independent of the lines chosen for other trapezoids.
Randomized Incremental: This is CutRandomInc with merging.
Greedy Trapezoid: This is a variant of CutRandomInc where we try to be "smarter" about
the line inserted into the partition in each iteration. Let V i be the set of trapezoids of C i with
maximal weight. We pick randomly a trapezoid \Delta out of the trapezoids of V i , and pick randomly
a line s from K (\Delta). We then insert s into C i .
Greedy Line: Similar to Greedy Trapezoid, but here we compute the set U of lines s of S̄
for which w'(s) is maximal, where w'(s) is the number of active trapezoids in C_i that intersect
the line s. We pick randomly a line from U and insert it into the current partition of the plane.
Greedy Weighted Line: Similar to Greedy Line, but our weight function is:
w(\Delta) \Xi n
3r
namely, we give a higher priority to lines that intersect heavier (1=r)-active trapezoids.
4.2 Polygonal Cuttings
In judging the quality of cuttings, the size of the cutting is of major concern. However, other
factors might also be important. For example we want the regions defining the cutting to be as
simple as possible. Furthermore, there are applications where we are not interested directly in
the size of the cutting, but rather in the overall number of vertices defining the cutting regions.
This is useful when applying cuttings in the dual plane, and transforming the vertices of the
cutting back to the primal plane, as done in the computation of partition trees [Mat92]. A
natural question is the following: Can one compute better cuttings, if one is willing to use
cutting regions which are different from vertical trapezoids?
For example, if one is willing to cut using non-convex regions having a non-constant description
complexity, the size of the cutting can be improved to ????????. However, if one wishes to
cut a collection of lines by triangles, instead of trapezoids, the situation becomes somewhat dis-
appointing, because the smallest cuttings currently known for this case, are generated by taking
the cutting of Remark 3.4, and by splitting each trapezoid into two triangles. This results in
cuttings having (roughly) 16r 2 triangles.
In this section, we present a slightly different approach for computing cuttings, suggested
to us by Jiri Matousek, that works extremely well in practice. The
new approach is based on cuttings by small convex polygonal regions, instead of vertical
trapezoids. Namely, we apply CutRandomInc, where each region is a convex polygon (of constant
complexity). Whenever we insert a new line into an active region, we split the polygon into two
new polygons. Of course, it might be that the number of vertices of a new polygon is too large.
If so, we split each such polygon into two subpolygons ensuring that the number of vertices of
the new polygons are below our threshold.
Intuitively, the benefit in this approach is that the number of superfluous entities (i.e. vertical
walls in the case of vertical trapezoids) participating in the definition of the cutting regions is
a
Figure
5: In the PolyTree algorithm, each time a polygon is being split by a line, we might have
to further split it because a split region might have too many vertices.
much smaller. Moreover, since the cutting regions are less restrictive, the algorithm can be more
flexible in its maintenance of the active regions.
Here are the different methods we tried:
PolyTree: We use CutRandomInc where each region is a convex polygon having at most k-
sides. When inserting a new line, we first split each of the active regions that intersect it into
two subpolygons. If a split region R has more than k sides, we further split it using the diagonal
of R that achieves the best balanced partition of R; namely, it is the pair of vertices a; b realizing
the following minimum:
min_{a,b ∈ V(R)} max( w(H^+_{ab} ∩ R), w(H^-_{ab} ∩ R) ),
where V(R) is the set of vertices of R, w(R) is the number of lines intersecting R, and H^+_{ab}
(resp. H^-_{ab}) is the closed halfplane lying to the right (resp. left) of the line through a and b. See Figure 5.
PolyTriangle: Modified PolyTree for generating cuttings by triangles. In each stage, we check
whether a newly created region R can be triangulated into a set of inactive triangles. We do so
by applying an arbitrary triangulation to the region R and checking whether all the triangles generated
in this (arbitrary) triangulation of R are inactive. If so, we replace R in our cutting by these triangles.
PolyDeadLeaf: Modified PolyTree for generating cuttings by triangles. Whenever a region
is being created, we check whether it has a leaf triangle (a triangle defined by three consecutive
vertices of the region) that is inactive. If we find such an inactive triangle, we add it immediately
to the final cutting. We repeat this process until the region cannot be further shrunk.
2 Computing the "best" (i.e. the weight of the heaviest triangle is minimized) triangulation is relatively compli-
cated, and requires dynamic programming. It is not clear that it is going to perform better than PolyDeadLeaf,
described below.
PolyVertical: We use PolyTree, but instead of splitting along a diagonal, we split along a
vertical ray emanating from one of the vertices of the region. The algorithm also tries to remove
dead regions from the left and right side of the region. Intuitively, each region is now an extended
vertical trapezoid having a convex ceiling and floor, with at most two additional vertical walls.
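The elementary operation shared by all these heuristics is splitting a convex region by a line. A minimal C++ sketch of this step is given below; it uses floating point, ignores degeneracies, and is not the code used in our experiments (the struct and function names are ours).

#include <vector>

struct Pt { double x, y; };
struct Ln { double a, b; };                          // the line y = a*x + b

// signed "above/below" classification of p with respect to l
static double side(const Pt& p, const Ln& l) { return p.y - (l.a * p.x + l.b); }

// Split the convex polygon P (vertices in order) by l into the part above l and the part
// below l; one of the two outputs may come back empty if l misses P.
static void splitConvex(const std::vector<Pt>& P, const Ln& l,
                        std::vector<Pt>& above, std::vector<Pt>& below) {
    const size_t m = P.size();
    for (size_t i = 0; i < m; ++i) {
        const Pt &p = P[i], &q = P[(i + 1) % m];
        double sp = side(p, l), sq = side(q, l);
        if (sp >= 0) above.push_back(p);
        if (sp <= 0) below.push_back(p);
        if ((sp > 0 && sq < 0) || (sp < 0 && sq > 0)) {   // the edge pq crosses l
            double t = sp / (sp - sq);
            Pt c = { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
            above.push_back(c);
            below.push_back(c);
        }
    }
}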
Theorem 4.1 The expected size of the (1=r)-cutting generated by PolyVertical is O(r 2 ), and
the expected running time is O(nr), for any integer 1 r n.
Proof: We only sketch the proof. First, note that the number of regions maintained by
PolyVertical in the i-th iteration is O(i^2), since each region maintained by PolyVertical is a
union of trapezoids of A j (S_i), and the total complexity of A j (S_i) is O(i^2) (Lemma 2.2).
Consider the maximal number of vertical walls in a region maintained during the execution of
PolyVertical (this is a parameter of the algorithm). We know that if a region P is (1/r)-active
after the i-th iteration of the algorithm, then P must contain at least one vertical trapezoid of
2.8, the expected number of such trapezoids, having
weight larger than t dn=re is O
Thus, we have an exponential decay bound on the distribution of heavy trapezoids, during
the execution of the algorithm. We now derive similar recurrences to the recurrences used in
Theorem 2.9 to bound the running time, and size of the cutting generated by PolyVertical.
Remark 4.2 Note that for all the polygonal cutting methods, except PolyVertical, it is not
even clear that the number of regions they maintain in the i-th iteration is O(i^2). Thus, the
proof of Theorem 2.9 does not work for those methods.
4.3 Implementation Details
As an underlying data-structure for our testing, we implemented the history-graph data-structure
[Sei91]. Our random arrangements were constructed by choosing n points uniformly and independently
on the left side of the unit square, and similarly on the right side of the unit square.
We sorted the points, and connected them by lines in a transposed manner. This yields a random
arrangement with all the
intersections inside the unit square.
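A sketch of this input generator in C++ (using doubles rather than the exact rationals of our implementation; the function name is ours) follows.

#include <vector>
#include <random>
#include <algorithm>

struct Line2 { double a, b; };   // y = a*x + b

// n points on the left edge and n on the right edge of the unit square, sorted and matched
// in reverse ("transposed") order, so that the crossings fall inside the square.
std::vector<Line2> randomArrangement(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    std::vector<double> left(n), right(n);
    for (int i = 0; i < n; ++i) { left[i] = U(rng); right[i] = U(rng); }
    std::sort(left.begin(), left.end());
    std::sort(right.begin(), right.end());
    std::vector<Line2> lines(n);
    for (int i = 0; i < n; ++i) {
        double y0 = left[i], y1 = right[n - 1 - i];   // transposed matching
        lines[i] = { y1 - y0, y0 };
    }
    return lines;
}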
We implemented our algorithm in C++. We encountered problems with floating-point
robustness at an early stage of the implementation, and decided to use exact arithmetic instead,
using LEDA rational numbers [MN95]. While this solved the robustness problems, we had to deal
with a few other issues:
- Speed: Using exact arithmetic instead of floating-point arithmetic resulted in a slowdown
by a factor of 20-40. The time to perform an operation in exact arithmetic is proportional
to the bit-sizes of the numbers involved. To minimize the size of the numbers used in the
computations, we normalized the line equations so that the coefficients are integer numbers
(in reduced form).
- Memory consumption: A LEDA rational is represented by a block of memory dynamically
allocated on the heap. To save both in the memory consumed and the time used by the
dynamic memory allocator, we observe that in a representation of a vertical decomposition
the same number appears in several places (e.g., the x-coordinate of an intersection point
appears in 6 different vertical trapezoids). We reduce memory consumption by storing
such a number only once. To do so, we use a repository of the rational numbers generated
so far by the algorithm. Whenever we compute a new x-coordinate, we search for it in the
repository, and if it does not exist, we insert it. In particular, each vertical trapezoid
is represented by two pointers to its x_left and x_right coordinates, and pointers to its top
and bottom lines.
The repository is implemented using treaps [SA96].
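The repository amounts to hash-consing of coordinate values. A minimal Python sketch (using a dictionary instead of a treap, and Python's Fraction in place of the LEDA rational type):

    from fractions import Fraction

    # Every distinct rational coordinate is stored exactly once; trapezoids
    # keep references to the shared object instead of private copies.
    class CoordinateRepository:
        def __init__(self):
            self._store = {}

        def intern(self, num, den):
            q = Fraction(num, den)              # reduced form
            return self._store.setdefault(q, q)

    repo = CoordinateRepository()
    x_left = repo.intern(1, 3)
    x_right = repo.intern(2, 6)                 # reduces to the same value 1/3
    assert x_left is x_right                    # stored only once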
4.4 Results
The empirical results we got for the algorithms/heuristics of Subsection 4.1 are depicted in the tables below.
For each value of r, and each value of n, we computed a random arrangement of lines inside
the unit square, as described above. For each such arrangement, we performed 10 tests for each
algorithm/heuristic. The tables present the size of the minimal cutting computed in those tests.
Each entry is the size of the output cutting divided by r 2 . In addition, each table caption presents
a range containing the size of the cutting that can be obtained by Matousek's algorithm [Mat98].
As noted in Remark 2.10, it is an interesting question whether or not using merging results
in smaller cuttings generated by CutRandomInc. We tested this empirically, and the results are
presented in Table 6. As can be seen in Table 6, using merging does generate smaller cuttings,
but the improvement in the cutting size is rather small. The difference in the size of the cuttings
generated seems to be less than 2r 2 .
4.5 Implementing Matousek's Construction
In Table 7, we present the empirical results for Matousek's construction, comparing it with the
slight improvement described in Remark 3.4. For small values of r the improved version yields
considerably smaller cuttings than Matousek's construction, making it the best method we are
aware of for constructing small cuttings.
We had implemented Matousek's algorithm naively, using quadratic space and time. Cur-
rently, this implementation can not be used for larger inputs because it runs out of memory.
Implementing the more efficient algorithm described in Theorem 3.7 is non-trivial since it requires
the implementation of the data-structure of Overmars and van Leeuwen [OvL81]. However,
if it is critical to reduce the size of a cutting for large inputs, the algorithm of Theorem 3.7 seems
to be the best available option.
4.6 Polygonal Cuttings
The results for polygonal cuttings are presented in Tables 8 and 9. As seen in the tables, the polygonal
cutting methods perform well in practice. In particular, the PolyTree method generated cuttings
of average size (roughly) 7.5r^2, beating all the cutting methods that use vertical trapezoids.
As for triangles, the situation is even better: PolyDeadLeaf generates cuttings by triangles of
size 12r^2 (that is, better by an additive factor of about 4r^2 than the best theoretical bound).
To summarize, the polygonal cutting methods seem to be the clear winner in practice. They
quickly generate cuttings of small size, with a small number of vertices and a small number of
triangles.
Conclusions
In this paper, we presented a new approach, different from that of [CF90], for constructing
cuttings. The new algorithm is rather simple and easy to implement. We proved its correctness
and bounded its expected running time, while demonstrating that the new algorithm performs
much better in practice than the algorithm of [CF90]. We believe that the results in this paper
show that planar cuttings are practical, and might be useful when constructing data-structures
for range-searching.
Moreover, the empirical results show that the size of the cutting constructed by the new
algorithm is not considerably larger (and in some cases better) than the cuttings that can be
computed by the currently best theoretical algorithm (too slow to be useful in practice due to its
running time) of Matousek [Mat98]. The empirical constants that we obtain are generally
between 10 and 13 (for vertical trapezoids). For polygonal cuttings we get a constant of 7 by
cutting by convex polygons (using PolyTree) having at most 6 vertices. Moreover, the various
variants of CutRandomInc seem to produce constants that are rather close to each other. As
noted above, the method described in Remark 3.4 generates the smallest cuttings by vertical
trapezoids (but is rather slow because of our naive implementation).
The tables also present the running times we got for the various cutting algorithms. This information
should be taken with reservation, since no serious effort went into optimizing the code
for speed, and those measurements tend to change from execution to execution. (Recall also
that we use exact arithmetic, which slows down the running time significantly.) However, it does
provide a general comparison between the running times of the various methods in practice.
Given these results, we recommend using one of the polygonal-cutting methods in practice.
They perform well in practice, and they should be used whenever possible. If we are restricted
to vertical trapezoids, CutRandomInc seems like a reasonable algorithm to use in practice
(without merging, as this is the only "non-trivial" part in the implementation of the algorithm).
There are several interesting open problems for further research:
- Can one obtain a provable bound on the expected size of the cutting generated by the
PolyTree methods?
- Can one prove the existence of a cutting smaller than the one guaranteed by the algorithm
in Remark 3.4 for specific values of r? For example, Table 1 suggests a smaller cutting
should exist for r = 2. In particular, the test results hint that a smaller cutting made out of
vertical trapezoids should exist, while the cutting size guaranteed by Matousek's algorithm
[Mat98] is 48.
- Can one generate smaller cuttings by modifying CutRandomInc to be smarter in its decision
of when to merge trapezoids?
- Is there a simple and practical algorithm for computing cuttings in three and higher
dimensions? The current algorithms seem to be far from practical.
Acknowledgments
The author wishes to thank Pankaj Agarwal, Boris Aronov, Herve Bronnimann, Bernard Chazelle
, Jiri Matousek, and Joe Mitchell for helpful discussions concerning the problems studied in this
paper and related problems.
I wish to thank Micha Sharir for his help and guidance in preparing the paper.
--R
Kinetic binary space partitions for triangles.
Geometric partitioning and its applications.
The area bisectors of a polygon and force equilibria in programmable vector fields.
A deterministic view of random sampling and its use in geometry.
New applications of random sampling in computational geometry.
Computational Geometry: Algorithms and Applications.
LEDA: a platform for combinatorial and geometric computing.
Maintenance of configurations in the plane.
Randomized search trees.
A simple and fast incremental randomized algorithm for computing trapezoidal decompositions and for triangulating polygons.
--TR
--CTR
Micha Sharir , Emo Welzl, Point-line incidences in space, Proceedings of the eighteenth annual symposium on Computational geometry, p.107-115, June 05-07, 2002, Barcelona, Spain
Micha Sharir , Emo Welzl, PointLine Incidences in Space, Combinatorics, Probability and Computing, v.13 n.2, p.203-220, March 2004
Siu-Wing Cheng , Antoine Vigneron, Motorcycle graphs and straight skeletons, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.156-165, January 06-08, 2002, San Francisco, California
Sariel Har-Peled , Micha Sharir, Online point location in planar arrangements and its applications, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.57-66, January 07-09, 2001, Washington, D.C., United States
Hayim Shaul , Dan Halperin, Improved construction of vertical decompositions of three-dimensional arrangements, Proceedings of the eighteenth annual symposium on Computational geometry, p.283-292, June 05-07, 2002, Barcelona, Spain
Pankaj K. Agarwal , Micha Sharir, Pseudo-line arrangements: duality, algorithms, and applications, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.800-809, January 06-08, 2002, San Francisco, California | range-searching;cuttings;computational geometry |
356493 | Gadgets, Approximation, and Linear Programming. | We present a linear programming-based method for finding "gadgets," i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a question posed by Bellare, Goldreich, and Sudan [SIAM J. Comput., 27 (1998), pp. 804--915] of how to prove the optimality of gadgets: linear programming duality gives such proofs.The new gadgets, when combined with recent results of H stad [ Proceedings of the 29th ACM Symposium on Theory of Computing, 1997, pp. 1--10], improve the known inapproximability results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of $16/17 + \epsilon$ and $12/13+ \epsilon,$ respectively, is NP-hard for every $\epsilon > 0$. Prior to this work, the best-known inapproximability thresholds for both problems were 71/72 (M. Bellare, O. Goldreich, and M. Sudan [ SIAM J. Comput., 27 (1998), pp. 804--915]). Without using the gadgets from this paper, the best possible hardness that would follow from Bellare, Goldreich, and Sudan and H{s}tad is $18/19$. We also use the gadgets to obtain an improved approximation algorithm for MAX3 SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound (implicit from M. X. Goemans and D. P. Williamson [ J. ACM, 42 (1995), pp. 1115--1145]; U. Feige and M. X. Goemans [ Proceedings of the Third Israel Symposium on Theory of Computing and Systems, 1995, pp. 182--189]) of .7704. | Introduction
. A "gadget" is a finite combinatorial structure which translates
a given constraint of one optimization problem into a set of constraints of a
second optimization problem. A classical example is in the reduction from 3SAT to
MAX 2SAT, due to Garey, Johnson and Stockmeyer [6]. Given an instance of 3SAT
on variables X_1, ..., X_n and with clauses C_1, ..., C_m, the reduction creates an instance
of MAX 2SAT on the original or "primary" variables X_1, ..., X_n along with
new or "auxiliary" variables Y_1, ..., Y_m. The clauses of the MAX 2SAT instance are
obtained by replacing each clause of length 3 in the 3SAT instance with a "gadget", in
this case a collection of ten 2SAT clauses. For example the clause C_k = X_{k1} ∨ X_{k2} ∨ X_{k3}
would be replaced with the following ten clauses on the variables X_{k1}, X_{k2}, X_{k3} and a
new auxiliary variable Y_k:

    X_{k1},  X_{k2},  X_{k3},  Y_k,
    ¬X_{k1} ∨ ¬X_{k2},  ¬X_{k2} ∨ ¬X_{k3},  ¬X_{k1} ∨ ¬X_{k3},
    X_{k1} ∨ ¬Y_k,  X_{k2} ∨ ¬Y_k,  X_{k3} ∨ ¬Y_k.

The property satisfied by this gadget is that for any assignment to the primary variables,
if clause C_k is satisfied, then 7 of the 10 new clauses can be satisfied by setting Y_k
appropriately, while if C_k is not satisfied, only 6 of the 10 are satisfiable. (Notice that the gadget
associated with each clause C_k uses its own auxiliary variable Y_k, and thus Y_k may
be set independently of the values of variables not appearing in C_k's gadget.) Using
this simple property of the gadget it is easy to see that the maximum number of
clauses satisfied in the MAX 2SAT instance by any assignment is 7m if and only if
the instance of 3SAT is satisfiable. This was used by [6] to prove the NP-hardness of
solving MAX 2SAT. We will revisit the 3SAT-to-2SAT reduction in Lemma 6.5.
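The 7-versus-6 property of this gadget can be checked exhaustively. A minimal Python sketch, assuming the ten clauses listed above (the encoding of clauses is ours):

    from itertools import product

    # Clauses for (x1 or x2 or x3) with auxiliary y; indices 0,1,2 = x1,x2,x3 and 3 = y.
    # A literal is (index, wanted_value); a clause is a tuple of literals.
    CLAUSES = [((0, 1),), ((1, 1),), ((2, 1),), ((3, 1),),
               ((0, 0), (1, 0)), ((1, 0), (2, 0)), ((0, 0), (2, 0)),
               ((0, 1), (3, 0)), ((1, 1), (3, 0)), ((2, 1), (3, 0))]

    def satisfied(assignment):
        return sum(any(assignment[i] == v for i, v in clause) for clause in CLAUSES)

    for x in product((0, 1), repeat=3):
        best = max(satisfied(x + (y,)) for y in (0, 1))
        assert best == (7 if any(x) else 6)   # 7 iff the 3SAT clause is satisfied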
Starting with the work of Karp [12], gadgets have played a fundamental role in
showing the hardness of optimization problems. They are the core of any reduction
between combinatorial problems, and they retain this role in the spate of new results
on the non-approximability of optimization problems.
Despite their importance, the construction of gadgets has always been a \black
art", with no general methods of construction known. In fact, until recently no one
had even proposed a concrete denition of a gadget; Bellare, Goldreich and Sudan [2]
nally did so, with a view to quantifying the role of gadgets in non-approximability
results. Their denition is accompanied by a seemingly natural \cost" measure for
a gadget. The more \costly" the gadget, the weaker the reduction. However, rstly,
nding a gadget for a given reduction remained an ad hoc task. Secondly, it remained
hard to prove that a gadget's cost was optimal.
This paper addresses these two issues. We show that for a large class of reductions,
the space of potential gadgets that need to be considered is actually nite. This
is not entirely trivial, and the proof depends on properties of the problem that is
being reduced to. However, the method is very general, and encompasses a large
number of problems. An immediate consequence of the niteness of the space is the
existence of a search procedure to nd an optimal gadget. But a naive search would be
impracticably slow, and search-based proofs of the optimality (or the non-existence)
of a gadget would be monstrously large.
Instead, we show how to express the search for a gadget as a linear program (LP)
whose constraints guarantee that the potential gadget is indeed valid, and whose
objective function is the cost of the gadget. Central to this step is the idea of working
with weighted versions of optimization problems rather than unweighted ones.
(Weighted versions result in LPs, while unweighted versions would result in integer
programs, IPs.) This seemingly helps only in showing hardness of weighted optimization
problems, but a result due to Crescenzi, Silvestri and Trevisan [3] shows that
for a large class of optimization problems (including all the ones considered in this
paper), the weighted versions are exactly as hard with respect to approximation as the
unweighted ones. Therefore, working with a weighted version is as good as working
with an unweighted one.
The LP representation has many benets. First, we are able to search for much
more complicated gadgets than is feasible manually. Second, we can use the theory
of LP duality to present short(er) proofs of optimality of gadgets and non-existence
of gadgets. Last, we can solve relaxed or constrained versions of the LP to obtain
upper and lower bounds on the cost of a gadget, which can be signicantly quicker
than solving the actual LP. Being careful in the relaxing/constraining process (and
with a bit of luck) we can often get the bounds to match, thereby producing optimal
gadgets with even greater e-ciency!
Armed with this tool for finding gadgets (and an RS/6000, OSL, and often APL2 1 ), we
re-examine some of the known gadgets and construct many new ones. (In
what follows we often talk of \gadgets reducing problem X to problem Y" when we
mean \gadgets used to construct a reduction from problem X to problem Y".) Bellare
et al. [2] presented gadgets reducing the computation of a \verier" for a PCP
(probabilistically checkable proof system) to several problems, including MAX 3SAT,
MAX 2SAT, and MAX CUT. We examine these in turn and show that the gadgets
in [2] for MAX 3SAT and MAX 2SAT are optimal, but their MAX CUT gadget is
not. We improve on the e-ciency of the last, thereby improving on the factor to
which approximating MAX CUT can be shown to be NP-hard. We also construct a
new gadget for the MAX DICUT problem, thereby strengthening the known bound
on its hardness. Plugging our gadget into the reduction (specifically Lemma 4.15)
of [2] shows that approximating MAX CUT to within a factor of 60/61 is NP-hard,
as is approximating MAX DICUT to within a factor of 44/45. 2 For both problems,
the hardness factor proved in [2] was 71/72. The PCP machinery of [2] has since
been improved by Hastad [9]. Our gadgets and Hastad's result show that, for every
ε > 0, approximating MAX CUT to within a factor of 16/17 + ε is NP-hard, as is
approximating MAX DICUT to within a factor of 12/13 + ε. Using Hastad's result in
combination with the gadgets of [2] would have given a hardness factor of 18/19
for both problems, for every ε > 0.
algorithms (if the reduction goes the right way!). We illustrate this point by
constructing a gadget reducing MAX 3SAT to MAX 2SAT. Using this new reduction
in combination with a technique of Goemans and Williamson [7, 8] and the state-of-
the-art .931-approximation algorithm for MAX 2SAT due to Feige and Goemans [5]
(which improves upon the previous .878-approximation algorithm of [8]), we obtain
a .801-approximation algorithm for MAX 3SAT. The best result that could be obtained
previously, by combining the technique of [7, 8] and the bound of [5], was .7704.
(The best previously published result is a .769-approximation algorithm, due to Ono,
Hirata, and Asano [14].)
Finally, our reductions have implications for probabilistically checkable proof systems.
Let PCP_{c,s}[log, q] be the class of languages that admit membership proofs that
can be checked by a probabilistic verifier that uses a logarithmic number of random
bits, reads at most q bits of the proof, accepts correct proofs of strings in the language
with probability at least c, and accepts purported proofs of strings not in the language
with probability at most s. We show: first, for any ε > 0, there exist constants
c and s, c/s > 10/9 − ε, such that NP ⊆ PCP_{c,s}[log, 2]; and second, for all c, s with
c/s > 2.7214, PCP_{c,s}[log, 3] ⊆ P. The best bound for the former result obtainable
from [2, 9] is 22/21 − ε; the best previous bound for the latter was 4 [16].
All the gadgets we use are computer-constructed. In the nal section, we present
an example of a lower bound on the performance of a gadget. The bound is not
computer constructed and cannot be, by the nature of the problem. The bound still
relies on dening an LP that describes the optimal gadget, and extracting the lower
1 Respectively, an IBM RiscSystem/6000 workstation, the IBM Optimization Subroutine Library,
which includes a linear programming package, and (not that we are partisan) IBM's APL2 programming
language.
Approximation ratios in this paper for maximization problems are less than 1, and represent
the weight of the solution achievable by a polynomial time algorithm, divided by the weight of the
optimal solution. This matches the convention used in [18, 7, 8, 5] and is the reciprocal of the
measure used in [2].
bound from the LP's dual.
Subsequent work. Subsequent to the original presentation of this work [17], the
approximability results presented in this paper have been superseded. Karloff and
Zwick [10] present a 7/8-approximation algorithm for MAX 3SAT. This result is tight
unless NP=P [9]. The containment result PCP_{c,s}[log, 3] ⊆ P has also been improved
by Zwick [19] and shown to hold for any c/s ≥ 2. This result is also tight, again
by [9]. Finally, the gadget construction methods of this paper have found at least
two more applications. Hastad [9] and Zwick [19] use gadgets constructed by these
techniques to show hardness results for two problems they consider, MAX 2LIN and
MAX NAE3SAT respectively.
Version. An extended abstract of this paper appeared as [17]. This version corrects
some errors, pointed out by Karloff and Zwick [11], from the extended abstract.
This version also presents inapproximability results resting on the improved PCP
constructions of Hastad [9], while mentioning the results that could be obtained otherwise.
Organization of this paper. The next section introduces precise denitions which
formalize the preceding outline. Section 3 presents the niteness proof and the LP-based
search strategy. Section 4 contains negative (non-approximability) results and
the gadgets used to derive them. Section 5 brie
y describes our computer system
for generating gadgets. Section 6 presents the positive result for approximating
MAX 3SAT. Section 7 presents proofs of optimality of the gadgets for some problems
and lower bounds on the costs of others. It includes a mix of computer-generated and
hand-generated lower bounds.
2. Definitions. We begin with some definitions we will need before giving the
definition of a gadget from [2]. In what follows, for any positive integer n, let [n]
denote the set {1, ..., n}.
Definition 2.1. A (k-ary) constraint function is a boolean function f : {0,1}^k → {0,1}.
We refer to k as the arity of a k-ary constraint function f. When
it is applied to variables X_{i_1}, ..., X_{i_k} (see the following definitions) the function f is
thought of as imposing the constraint f(X_{i_1}, ..., X_{i_k}) = 1.
Definition 2.2. A constraint family F is a collection of constraint functions.
The arity of F is the maximum of the arity of the constraint functions in F .
Definition 2.3. A constraint C over a variable set X_1, ..., X_n is a pair C = (f, (i_1, ..., i_k)),
where f is a k-ary constraint function and i_1, ..., i_k are
distinct members of [n]. The constraint C is said to be satisfied by an assignment
a_1, ..., a_n to X_1, ..., X_n if f(a_{i_1}, ..., a_{i_k}) = 1. We say that
constraint C is from F if f ∈ F.
Constraint functions, constraint families and constraints are of interest due to
their dening role in a variety of NP optimization problems.
Definition 2.4. For a finitely specified constraint family F, MAX F is the
optimization problem given by:
Input: An instance consisting of m constraints C_1, ..., C_m from F on n variables X_1, ..., X_n, and
non-negative real weights w_1, ..., w_m. (An instance is thus a triple (~X, ~C, ~w).)
Goal: Find an assignment ~b to the variables ~X which maximizes the weight Σ_j w_j C_j(~b)
of satisfied constraints.
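For concreteness, the objective being maximized can be written as follows (a small Python sketch with our own encoding of constraints; it is not part of the formal definition):

    # Weight of satisfied constraints under assignment b (a tuple of n bits).
    # Each constraint is (f, indices), where f is a boolean function applied to
    # the listed variable positions, and weights[j] is the weight of constraint j.
    def weight_satisfied(constraints, weights, b):
        return sum(w for (f, idx), w in zip(constraints, weights)
                   if f(*(b[i] for i in idx)))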
Constraint functions, families and the class fMAX F j Fg allow descriptions of
optimization problems and reductions in a uniform manner. For example, if
2SAT is the constraint family consisting of all constraint functions of arity at most
2 that can be expressed as the disjunction of up to 2 literals, then MAX 2SAT is
the corresponding MAX F problem. Similarly MAX 3SAT is the MAX F problem
dened using the constraint family consisting of all constraint functions of
arity up to 3 that can be expressed as the disjunction of up to 3 literals.
One of the motivations for this work is to understand the \approximability" of
many central optimization problems that can be expressed as MAX F problems,
including MAX 2SAT and MAX 3SAT. For ρ ∈ [0,1], an algorithm A is said
to be a ρ-approximation algorithm for the MAX F problem, if on every instance
(~X, ~C, ~w) of MAX F with n variables and m constraints, A outputs an assignment
~a s.t. Σ_j w_j C_j(~a) ≥ ρ · max_{~b} {Σ_j w_j C_j(~b)}. We say that the problem MAX F
is ρ-approximable if there exists a polynomial time-bounded algorithm A that is a
ρ-approximation algorithm for MAX F. We say that MAX F is hard to approximate
to within a factor ρ (ρ-inapproximable), if the existence of a polynomial time
ρ-approximation algorithm for MAX F implies NP=P.
Recent research has yielded a number of new approximability results for several
MAX F problems (cf. [7, 8]) and a number of new results yielding hardness of approximations
(cf. [2, 9]). One of our goals is to construct e-cient reductions between
MAX F problems that allow us to translate \approximability" and \inapproximabil-
ity" results. As we saw in the opening example such reductions may be constructed by
constructing \gadgets" reducing one constraint family to another. More specically,
the example shows how a reduction from 3SAT to 2SAT results from the availability,
for every constraint function f in the family 3SAT, of a gadget reducing f to the
family 2SAT. This notion of a gadget reducing a constraint function f to a constraint
family F is formalized in the following denition.
Definition 2.5 (Gadget [2]). For α ≥ 1, a constraint function f : {0,1}^k → {0,1},
and a constraint family F: an α-gadget (or "gadget with performance α")
reducing f to F is a set of auxiliary variables Y_1, ..., Y_n, a collection of non-negative real weights
w_1, ..., w_m, and associated constraints C_1, ..., C_m from F over the primary variables X_1, ..., X_k
and the auxiliary variables Y_1, ..., Y_n, with the property that, for boolean assignments
~a to X_1, ..., X_k and ~b to Y_1, ..., Y_n, the following are satisfied:
(2.1)  (∀ ~a : f(~a) = 1) (∀ ~b):  Σ_j w_j C_j(~a, ~b) ≤ α,
(2.2)  (∀ ~a : f(~a) = 1) (∃ ~b):  Σ_j w_j C_j(~a, ~b) = α,
(2.3)  (∀ ~a : f(~a) = 0) (∀ ~b):  Σ_j w_j C_j(~a, ~b) ≤ α − 1.
The gadget is strict if, in addition,
(2.4)  (∀ ~a : f(~a) = 0) (∃ ~b):  Σ_j w_j C_j(~a, ~b) = α − 1.
We use the shorthand Γ = (Y, C, w) to denote the gadget described above.
Observe that an α-gadget Γ = (Y, C, w) can be converted into an α′-gadget, for any α′ > α,
by "rescaling", i.e., multiplying every entry of the weight vector ~w by α′/α (though
strictness is not preserved). This indicates that a "strong" gadget is one with a
small α; in the extreme, a 1-gadget would be the "optimal" gadget. This intuition
will be confirmed in the role played by gadgets in the construction of reductions.
Before describing this, we rst list the constraints and constraint families that are of
interest to us.
For convenience we now give a comprehensive list of all the constraints and constraint
families used in this paper.
Definition 2.6.
Parity check (PC) is the constraint family {PC_0, PC_1}, where for i ∈ {0,1}, PC_i is
defined by PC_i(a, b, c) = 1 if and only if a ⊕ b ⊕ c = i.
Henceforth we will simply use terms such as MAX PC to denote the optimization
problem MAX F where F = PC. MAX PC (referred to as MAX 3LIN in [9]) is the
source of all our inapproximability results.
For any k ≥ 1, Exactly-k-SAT (EkSAT) is the constraint family consisting of all functions
f : {0,1}^k → {0,1} expressible as the disjunction of k literals on k distinct variables;
that is, the set of k-ary disjunctive constraints.
For any k ≥ 1, kSAT is the constraint family ∪_{l ≤ k} ElSAT, and
SAT is the constraint family ∪_{l ≥ 1} ElSAT.
The problems MAX 3SAT, MAX 2SAT, and MAX SAT are by now classical optimization
problems. They were considered originally in [6]; subsequently their central
role in approximation was highlighted in [15]; and recently, novel approximation algorithms
were developed in [7, 8, 5]. The associated families are typically the targets of
gadget constructions in this paper. Shortly, we will describe a lemma which connects
the inapproximability of MAX F to the existence of gadgets reducing PC 0 and PC 1
to F . This method has so far yielded in several cases tight, and in other cases the
best known, inapproximability results for MAX F problems.
In addition to 3SAT's use as a target, its members are also used as sources; gadgets
reducing members of MAX 3SAT to MAX 2SAT help give an improved MAX 3SAT
approximation algorithm.
3-Conjunctive SAT (3ConjSAT) is the constraint family ff 000
where:
1. f 000 (a; b; c)
2. f 001 (a; b; c)
3. f 011 (a; b; c)
4. f 111 (a; b; c)
Members of 3ConjSAT are sources in gadgets reducing them to 2SAT. These gadgets
enable a better approximation algorithm for the MAX 3ConjSAT problem, which in
turn sheds light on the the class PCP c;s [log; 3].
CUT is the constraint function given by CUT(a, b) = a ⊕ b.
CUT/0 is the family of constraints {CUT, T}, where T(a) = a.
CUT/1 is the family of constraints {CUT, F}, where F(a) = ¬a.
MAX CUT is again a classical optimization problem. It has attracted attention due
to the recent result of Goemans and Williamson [8] providing a .878-approximation
algorithm. An observation from Bellare et al. [2] shows that the approximability of
MAX CUT/0, MAX CUT/1, and MAX CUT are all identical; this is also formalized
in Proposition 4.1 below. Hence MAX CUT/0 becomes the target of gadget constructions
in this paper, allowing us to get inapproximability results for these three
problems.
DICUT is the constraint function given by DICUT(a, b) = ¬a ∧ b.
MAX DICUT is another optimization problem to which the algorithmic results of
[8, 5] apply. Gadgets whose target is DICUT will enable us to get inapproximability
results for MAX DICUT.
2CSP is the constraint family consisting of all binary functions, i.e., all functions f : {0,1}^2 → {0,1}.
MAX 2CSP was considered in [5], which gives a .859-approximation algorithm; here
we provide inapproximability results.
Respect of monomial basis check (RMBC) is the constraint family
may be thought of as the test (c; d)[a] ?
as the test
as the test (:c; d)[a] ?
as the test
b, where the notation (v refers to the i 1'st coordinate
of the vector (v
Our original interest in RMBC came from the work of Bellare et al. [2] which derived
hardness results for MAX F using gadgets reducing every constraint function in PC
and RMBC to F . This work has been eectively superseded by Hastad's [9] which
only requires gadgets reducing members of PC to F . However we retain some of
the discussion regarding gadgets with RMBC functions as a source, since these constructions
were signicantly more challenging, and some of the techniques applied to
overcome the challenges may be applicable in other gadget constructions. A summary
of all the gadgets we found, with their performances and lower bounds, is given in
Table
1.
We now put forth a theorem, essentially from [2] (and obtainable as a generalization
of its Lemmas 4.7 and 4.15), that relates the existence of gadgets with F as
target, to the hardness of approximating MAX F . Since we will not be using this
theorem, except as a motivation for studying the family RMBC, we do not prove it
here.
Theorem 2.7. For any family F , if there exists an 1 -gadget reducing every
function in PC to F and an 2 -gadget reducing every function in RMBC to F , then
for any > 0, MAX F is hard to approximate to within 1 :15
.
In this paper we will use the following, stronger, result by Hastad.
Theorem 2.8. [9] For any family F, if there exists an α_0-gadget reducing PC_0
to F and an α_1-gadget reducing PC_1 to F, then for any ε > 0, MAX F is hard to
approximate to within 1 − 1/(α_0 + α_1) + ε.
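As a quick numerical check of this bound against the gadget performances obtained in Section 4 (8 and 9 for CUT/0, 6.5 and 6.5 for DICUT, 5 and 5 for 2CSP), a one-line Python computation:

    # hardness factor 1 - 1/(alpha_0 + alpha_1), ignoring the additive epsilon
    for name, a0, a1 in [("MAX CUT", 8, 9), ("MAX DICUT", 6.5, 6.5), ("MAX 2CSP", 5, 5)]:
        print(name, 1 - 1 / (a0 + a1))   # 16/17, 12/13 and 9/10 respectively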
Table 1. (Column headings: source, previous, our, lower bound.)
All gadgets described are provably optimal, and strict. The sole exception (†) is the best possible
strict gadget; there is a non-strict 3-gadget. All "previous" results quoted are interpretations of the
results in [2], except the gadget reducing 3SAT to 2SAT, which is due to [6], and the gadget reducing
PC to 3SAT, which is folklore.
Thus, using CUT/0, DICUT, 2CSP, EkSAT and kSAT as the targets of gadget constructions
from PC_0 and PC_1, we can show the hardness of MAX CUT, MAX DICUT,
MAX 2CSP, MAX EkSAT and MAX kSAT respectively. Furthermore, minimizing
the value of α in the gadgets gives better hardness results.
3. The Basic Procedure. The key aspect of making the gadget search spaces
nite is to limit the number of auxiliary variables, by showing that duplicates (in a
sense to be claried) can be eliminated by means of proper substitutions. In general,
this is possible if the target of the reduction is a \hereditary" family as dened below.
Definition 3.1. A constraint family F is hereditary if for any f ∈ F of
arity k, and any two indices i, j ∈ [k], the function f when restricted to X_i = X_j
and considered as a function of k − 1 variables, is identical (up to the order of the
arguments) to some other function f′ ∈ F ∪ {0, 1} (where 0 and 1 denote the constant
functions).
Definition 3.2. A family F is complementation-closed if it is hereditary
and, for any f ∈ F of arity k, and any index i ∈ [k], the function f′ given by
f′(a_1, ..., a_k) = f(a_1, ..., a_{i−1}, ¬a_i, a_{i+1}, ..., a_k) is contained in F.
Definition 3.3 (Partial Gadget). For α ≥ 1, a set S ⊆ {0,1}^k, a constraint function
f : {0,1}^k → {0,1}, and a constraint family F: an S-partial α-gadget (or
"S-partial gadget with performance α") reducing f to F is a finite collection of constraints
C_1, ..., C_m from F over primary variables X_1, ..., X_k and finitely many auxiliary
variables Y_1, ..., Y_n, and a collection of non-negative real weights w_1, ..., w_m,
with the property that, for boolean assignments ~a to X_1, ..., X_k and ~b to Y_1, ..., Y_n,
the following are satisfied:
(3.1)  (∀ ~a : f(~a) = 1) (∀ ~b):  Σ_j w_j C_j(~a, ~b) ≤ α,
(3.2)  (∀ ~a ∈ S : f(~a) = 1) (∃ ~b):  Σ_j w_j C_j(~a, ~b) = α,
(3.3)  (∀ ~a : f(~a) = 0) (∀ ~b):  Σ_j w_j C_j(~a, ~b) ≤ α − 1,
(3.4)  (∀ ~a ∈ S : f(~a) = 0) (∃ ~b):  Σ_j w_j C_j(~a, ~b) = α − 1.
We use the shorthand Γ = (Y, C, w) to denote the partial gadget.
The following proposition follows immediately from the definitions of a gadget
and a partial gadget.
Proposition 3.4. For a constraint function f : {0,1}^k → {0,1}, let S_1 = {~a ∈ {0,1}^k : f(~a) = 1}
and S_2 = {0,1}^k. Then for every constraint family F:
1. An S_1-partial α-gadget reducing f to F is an α-gadget reducing f to F.
2. An S_2-partial α-gadget reducing f to F is a strict α-gadget reducing f to F.
Definition 3.5. For α ≥ 1 and S ⊆ {0,1}^k, let Γ = (Y, C, w) be an S-partial
α-gadget reducing a constraint f : {0,1}^k → {0,1} to a constraint family F. We say
that the function b : S → {0,1}^n is a witness for the partial gadget, witnessing the
set S, if for every ~a ∈ S the assignment b(~a) satisfies equations (3.2) and (3.4).
The witness function can also be represented as an |S| × (k + n) matrix whose
rows are the vectors (~a, b(~a)). Notice that the columns of the matrix correspond to the
variables of the gadget, with the first k columns corresponding to primary variables,
and the last n corresponding to auxiliary variables. In what follows we shall often
prefer the matrix notation.
Definition 3.6. For a set S ⊆ {0,1}^k let M_S be the matrix whose rows are
the vectors ~a ∈ S, let k′_S be the number of distinct columns in M_S, and let k″_S be
the number of columns in M_S distinct up to complementation. Given a constraint f
of arity k and a hereditary constraint family F that is not complementation-closed,
an (S, f, F)-canonical witness matrix (for an S-partial gadget reducing f to F)
is the |S| × (k + 2^{|S|} − k′_S) matrix whose first k columns correspond to the k
primary variables and whose remaining columns are all possible column vectors that
are distinct from one another and from the columns corresponding to the primary
variables. If F is complementation-closed, then a canonical witness matrix is the
|S| × (k + 2^{|S|−1} − k″_S) matrix whose first k columns correspond to the k primary
variables and whose remaining columns are all possible column vectors that are distinct
up to complementation from one another and from the columns corresponding to the
primary variables.
The following lemma is the crux of this paper and establishes that the optimal
gadget reducing a constraint function f to a hereditary family F is nite. To motivate
the lemma, we rst present an example, due to Karlo and Zwick [11], showing that
this need not hold if the family F is not hereditary. Their counterexample has
g. Using k auxiliary variables, Y may construct a gadget
for the constraint X , using the constraints X Y with each
constraint having the same weight. For an appropriate choice of this weight it may
be veried that this yields a (2 2=k)-gadget for even k; thus the performance tends
to 2 in the limit. On the other hand it can be shown that any gadget with k auxiliary
variables has performance at most 2 thus no nite gadget achieves the limit.
It is clear that for this example the lack of hereditariness is critical: any hereditary
family containing PC 1 would also contain f , providing a trivial 1-gadget.
To see why the hereditary property helps in general, consider an -gadget
reducing f to F , and let W be a witness matrix for . Suppose two columns of W ,
corresponding to auxiliary variables Y 1 and Y 2 of , are identical. Then we claim that
does not really need the variable Y 2 . In every constraint containing Y 2 , replace it
with Y 1 , to yield a new collection of weighted constraints. By the hereditary property
of F , all the resulting constraints are from F . And, the resulting instance satises
all the properties of an -gadget. (The universal properties follow trivially, while
the existential properties follow from the fact that in the witness matrix Y 1 and Y 2
have the same assignment.) Thus this collection of constraints forms a gadget with
fewer variables and performance at least as good. The niteness follows from the
fact a witness matrix with distinct columns has a bounded number of columns. The
following lemma formalizes this argument. In addition it also describes the canonical
witness matrix for an optimal gadget | something that will be of use later.
Lemma 3.7. For α ≥ 1, a set S ⊆ {0,1}^k, a constraint function f : {0,1}^k → {0,1}, and a
hereditary constraint family F, if there exists an S-partial α-gadget Γ reducing f to
F, with witness matrix W, then for any (S, f, F)-canonical witness matrix W′, and
some α′ ≤ α, there exists an S-partial α′-gadget Γ′ reducing f to F, with W′ as a witness
matrix.
Proof. We rst consider the case where F is not complementation-closed. Let
w) be an S-partial -gadget reducing f to F and let W be a witness
matrix for . We create a gadget 0 with auxiliary variables Y 0
one associated with each column of the matrix W 0 other than the rst k.
With each variable Y i of we associate a variable Z such that the column corresponding
to Y i in W is the same as the column corresponding to Z in W 0 . Notice that
Z may be one of the primary variables or one of the auxiliary variables
By denition of a canonical witness, such a column and hence variable Z
does exist.
Now for every constraint C j on variables Y i 1
in with weight w j , we
introduce the constraint C j on variables Y 0
in 0 with weight w j where Y 0
is the variable associated with Y i l . Notice that in this process the variables involved
with a constraint do not necessarily remain distinct. This is where the hereditary
property of F is used to ensure that a constraint C j 2 F , when applied to a tuple
of non-distinct variables, remains a constraint in F . In the process we may arrive
at some constraints which are either always satised or never satised. For the time
being, we assume that the constraints 0 and 1 are contained in F , so this occurrence
does not cause a problem. Later we show how this assumption is removed.
This completes the description of 0 . To verify that 0 is indeed an S-partial
-gadget, we notice that the universal constraints (conditions (3.1) and (3.3) in Definition
are trivially satised, since 0 is obtained from by renaming some variables
and possibly identifying some others. To see that the existential constraints
(conditions (3.2) and (3.4) in Denition 3.3) are satised, notice that the assignments
to the variables ~
Y that witness these conditions in are allowable assignments to
the corresponding variables in ~ Y 0 and in fact this is what dictated our association of
variables in ~
Y to the variables in ~
Y 0 . Thus 0 is indeed an S-partial -gadget reducing
f to F , and, by construction, has W 0 as a witness matrix.
Last, we remove the assumption that 0 must include constraints 0 and 1. Any
constraints 0 can be safely thrown out of the gadget without changing any of the pa-
rameters, since such constraints are never satised. On the other hand, constraints 1
do aect . If we throw away a 1 constraint of weight w j , this reduces the total weight
of satised clauses in every assignment by w j . Throwing away all such constraints
reduces by the total weight of the 1 constraints, producing a gadget of (improved)
performance 0 .
Finally, we describe the modications required to handle the case where F is
complementation-closed (in which case the denition of a canonical witness changes).
Here, for each variable Y i and its associated column of W , either there is an equal
column in W 0 , in which case we replace Y i with the column's associated variable
or there is a complementary column in W 0 , in which case we replace Y i with
the negation of the column's associated variable, :Y 0
The rest of the construction
proceeds as above, and the proof of correctness is the same.
It is an immediate consequence of Lemma 3.7 that an optimum gadget reducing a
constraint function to a hereditary family does not need to use more than an explicitly
bounded number of auxiliary variable.
Corollary 3.8. Let f be a constraint function of arity k with s satisfying
assignments. Let F be a constraint family and α ≥ 1 be such that there exists an
α-gadget reducing f to F.
1. If F is hereditary then there exists an α′-gadget with at most 2^s − k′ auxiliary
variables reducing f to F, where α′ ≤ α, and k′ is the number of distinct
variables among the satisfying assignments of f.
2. If F is complementation-closed then there exists an α′-gadget with at most
2^{s−1} − k″ auxiliary variables reducing f to F, for some α′ ≤ α, where k″ is
the number of distinct variables, up to complementation, among the satisfying
assignments of f.
Corollary 3.9. Let f be a constraint function of arity k. Let F be a constraint
family and α ≥ 1 be such that there exists a strict α-gadget reducing f to F.
1. If F is hereditary then there exists a strict α′-gadget with at most 2^{2^k} − k
auxiliary variables reducing f to F, for some α′ ≤ α.
2. If F is complementation-closed then there exists a strict α′-gadget with at
most 2^{2^k − 1} − k auxiliary variables reducing f to F, for some α′ ≤ α.
We will now show how to cast the search for an optimum gadget as a linear programming
problem.
Definition 3.10. For a constraint function f of arity k, a constraint family F,
and a witness matrix M with k + n columns, LP(f, F, M) is a linear program defined as follows:
Let C_1, ..., C_m be all the possible distinct constraints that arise from applying
a constraint function from F to the set of k + n variables X_1, ..., X_k, Y_1, ..., Y_n. Thus for
every j, C_j : {0,1}^{k+n} → {0,1}. The LP variables are w_1, ..., w_m, where w_j
corresponds to the weight of the constraint C_j. Additionally the LP has one
more variable α.
Let S ⊆ {0,1}^k and b be such that M is the witness matrix corresponding to the
witness function b for the set S. The LP inequalities correspond to the definition of an
S-partial gadget: for every ~a with f(~a) = 1 and every ~b, Σ_j w_j C_j(~a, ~b) ≤ α; for every
~a ∈ S with f(~a) = 1, Σ_j w_j C_j(~a, b(~a)) = α; for every ~a with f(~a) = 0 and every ~b,
Σ_j w_j C_j(~a, ~b) ≤ α − 1; and for every ~a ∈ S with f(~a) = 0, Σ_j w_j C_j(~a, b(~a)) = α − 1.
Finally the LP has the inequalities w_j ≥ 0.
The objective of the LP is to minimize α.
Proposition 3.11. For any constraint function f of arity k, constraint family
F, and witness matrix M witnessing the set S ⊆ {0,1}^k, if there exists
an S-partial gadget reducing f to F with witness matrix M, then LP(f, F, M) finds
such a gadget with the minimum possible α.
Proof. The LP-generated gadget consists of k primary variables corresponding
to the first k columns of M; n auxiliary variables Y_1, ..., Y_n corresponding
to the remaining n columns of M; constraints C_1, ..., C_m as defined in Definition 3.10;
and weights w_1, ..., w_m returned by LP(f, F, M). By construction the LP solution
returns the minimum possible α for which an S-partial α-gadget reducing f to F with
witness M exists.
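The paper's own system builds and solves these LPs in APL2 with OSL (Section 5); purely as an illustration, here is an independent Python sketch of LP(f, F, M) using scipy, with the witness given as a map from each selected primary assignment to its auxiliary assignment (all names here are ours):

    from itertools import product
    from scipy.optimize import linprog

    def solve_gadget_lp(f, k, n, candidate_constraints, witness):
        # candidate_constraints: list of 0/1 functions on (k+n)-bit tuples (the C_j).
        # witness: dict mapping each selected assignment ~a (k-tuple) to b(~a) (n-tuple).
        # LP variables: w_1..w_m and alpha; objective: minimize alpha.
        m = len(candidate_constraints)
        A_ub, b_ub, A_eq, b_eq = [], [], [], []

        def row(assignment):
            return [c(assignment) for c in candidate_constraints]

        for a in product((0, 1), repeat=k):
            rhs = 0.0 if f(*a) else -1.0            # alpha vs. alpha - 1
            for b in product((0, 1), repeat=n):
                A_ub.append(row(a + b) + [-1.0])    # sum_j w_j C_j(a,b) - alpha <= rhs
                b_ub.append(rhs)
            if a in witness:                        # equality at the witness assignment
                A_eq.append(row(a + witness[a]) + [-1.0])
                b_eq.append(rhs)

        cost = [0.0] * m + [1.0]
        return linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                       bounds=[(0, None)] * m + [(None, None)])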
Theorem 3.12 (Main). Let f be a constraint function of arity k with s satisfying
assignments. Let k′ be the number of distinct variables of f and k″ be the number
of distinct variables up to complementation. Let F be a hereditary constraint family
with functions of arity at most l. Then:
If there exists an α-gadget reducing f to F, then there exists such a gadget with
at most v auxiliary variables, where v = 2^s − k′ if F is not complementation-closed
and v = 2^{s−1} − k″ if it is.
If there exists a strict α-gadget reducing f to F, then there exists such a
gadget with at most v auxiliary variables, where v = 2^{2^k} − k if F is not
complementation-closed and v = 2^{2^k − 1} − k if it is.
Furthermore such a gadget with smallest performance can be found by solving a linear
program with at most |F| · (v + k)^l variables and 2^{v+k} constraints.
Remark: The sizes given above are upper bounds. In specic instances, the sizes
may be much smaller. In particular, if the constraints of F exhibit symmetries, or
are not all of the same arity, then the number of variables of the linear program will
be much smaller.
Proof. By Proposition 3.11 and Lemma 3.7, we have that LP(f, F, W_S) yields
an optimal S-partial gadget if one exists. By Proposition 3.4, the setting S = S_1 yields a
gadget, and the setting S = S_2 yields a strict gadget.
Corollaries 3.8 and 3.9 give the required bound on the number of auxiliary variables;
and the size of the LP then follows from the definition.
To conclude this section, we mention some (obvious) facts that become relevant
when searching for large gadgets. First, if S 0 S, then the performance of an S 0 -
partial gadget reducing f to F is also a lower bound on the performance of an S-partial
gadget reducing f to F . The advantage here is that the search for an S 0 -partial gadget
may be much faster. Similarly, to get upper bounds on the performance of an S-partial
gadget, one may use other witness matrices for S (rather than the canonical one); in
particular ones with (many) fewer columns. This corresponds to making a choice of
auxiliary variables not to be used in such a gadget.
4. Improved Negative Results.
4.1. MAX CUT. We begin by showing an improved hardness result for the
MAX CUT problem. It is not difficult to see that no gadget per Definition 2.5 can
reduce any member of PC to CUT: for any setting of the variables which satisfies
equation (2.2), the complementary setting has the opposite parity (so that it must be
subject to inequality (2.3)), but the values of all the CUT constraints are unchanged,
so that the gadget's value is still α, violating (2.3). Following [2], we use instead the
fact that MAX CUT and MAX CUT/0 are equivalent with respect to approximation,
as shown below.
Proposition 4.1. MAX CUT is equivalent to MAX CUT/0. Specifically, given
an instance I of either problem, we can create an instance I′ of the other with the
same optimum and with the feature that an assignment satisfying constraints of total
weight W to the latter can be transformed into an assignment satisfying constraints
of the same total weight in I.
Proof. The reduction from MAX CUT to MAX CUT/0 is trivial, since the family
CUT/0 contains CUT; and thus the identity map provides the required reduction.
In the reverse direction, given an instance (~X, ~C, ~w) of MAX CUT/0 with n
variables and m clauses, we create an instance (~X′, ~C′, ~w) of MAX CUT with n + 1
variables and m clauses. The variables are simply the variables ~X with one additional
variable called 0. The constraints of ~C are transformed as follows. If the constraint
is a CUT constraint on variables X_i and X_j, it is retained as is. If the constraint is
T(X_i), it is replaced with the constraint CUT(X_i, 0). Given an assignment ~a to the
vector ~X′, notice that its complement also satisfies the same number of constraints
in I′. We pick the one among the two that sets the variable 0 to 0, and then observe
that the induced assignment to ~X satisfies the corresponding clauses of I.
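A small Python sketch of the reverse direction of this proof (the encodings of constraints and assignments are ours):

    # MAX CUT/0 -> MAX CUT: add a special vertex "0" and replace each unary
    # constraint T(x) by the edge CUT(x, "0").
    def cut0_to_cut(constraints):
        # constraints: list of ('CUT', i, j) or ('T', i)
        return [(c[1], c[2]) if c[0] == 'CUT' else (c[1], '0') for c in constraints]

    def pull_back(assignment):
        # Complement the whole cut assignment if vertex "0" ended up on side 1,
        # then drop vertex "0"; the set of satisfied edges is preserved.
        flip = assignment['0']
        return {v: a ^ flip for v, a in assignment.items() if v != '0'}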
Thus we can look for reductions to CUT/0. Notice that the CUT/0 constraint
family is hereditary, since identifying the two variables in a CUT constraint yields the
constant function 0. Thus by Theorem 3.12, if there is an α-gadget reducing PC_0 to
CUT/0, then there is an α-gadget with at most 13 auxiliary variables (16 variables
in all). Only (16 choose 2) + 16 = 136 distinct constraints are possible on 16 variables. Since we only
Fig. 4.1. 8-gadget reducing PC 0 to CUT. Every edge has weight .5. The auxiliary variable
which is always 0 is labelled 0.
need to consider the cases in which the auxiliary variable labelled 0 is set to 0, we can construct
a linear program as above to find the optimal α-gadget reducing PC_0 to
CUT/0. A linear program of the same size can similarly be constructed to find a
gadget reducing PC_1 to CUT/0.
Lemma 4.2. There exists an 8-gadget reducing PC_0 to CUT/0, and it is optimal
and strict.
We show the resulting gadget in Figure 4.1 as a graph. The primary variables are
labelled x_1, x_2, x_3, and 0 is a special variable. The unlabelled vertices are
auxiliary variables. Each constraint of non-zero weight is shown as an edge. An edge
between the vertex 0 and some vertex x corresponds to the constraint T(x). Any
other edge between x and y represents the constraint CUT(x, y). Note that some of
the 13 possible auxiliary variables do not appear in any positive weight constraint and
thus are omitted from the graph. All non-zero weight constraints have weight .5.
By the same methodology, we can prove the following.
Lemma 4.3. There exists a 9-gadget reducing PC_1 to CUT/0, and it is optimal
and strict.
The gadget is similar to the previous one, but the old vertex 0 is renamed Z, and
a new vertex labelled 0 is joined to Z by an edge of weight 1.
The two lemmas along with Proposition 4.1 above imply the following theorem.
Theorem 4.4. For every ε > 0, MAX CUT is hard to approximate to within 16/17 + ε.
Proof. Combining Theorem 2.8 with Lemmas 4.2 and 4.3 we find that MAX CUT/0
is hard to approximate to within 16/17 + ε. The theorem then follows from Proposition
4.1.
RMBC gadgets. Finding RMBC gadgets was more difficult. We discuss this
point since it leads to ideas that can be applied in general when finding large gadgets.
Indeed, it turned out that we couldn't exactly apply the technique above
to find an optimal gadget reducing, say, RMBC_00 to CUT/0. (Recall that RMBC_00
is the function testing (a_3, a_4)[a_1] =? a_2.) Since there are 8 satisfying
assignments to the 4 variables of the RMBC_00 constraint, by Theorem 3.12, we would
need to consider 2^8 − 4 = 252 auxiliary variables, leading to a linear program with
roughly 2^{256} constraints, which is somewhat beyond the capacity of current computing
machines. To overcome this difficulty, we observed that for the RMBC_00 function, the
value of a_4 is irrelevant when a_1 = 0 and the value of a_3 is irrelevant when a_1 = 1. This
led us to try only restricted witness functions for which ~b(0, a_2, a_3, a_4) does not depend on a_4
and ~b(1, a_2, a_3, a_4) does not depend on a_3 (dropping from the witness matrix columns violating
the above conditions), even though it is not evident a priori that a gadget with
a witness function of this form exists. The number of distinct variable columns that
such a witness matrix can have is at most 16. Excluding auxiliary variables identical
to a_1 or a_2, we considered gadgets with at most 14 auxiliary variables. We then created
a linear program with the corresponding constraints.
The result of the linear program was that there exists an 8-gadget with constant 0
reducing RMBC 00 to CUT, and that it is strict. Since we used a restricted witness
function, the linear program does not prove that this gadget is optimal.
However, lower bounds can be established through construction of optimal S-
partial gadgets. If S is a subset of the set of satisfying assignments of RMBC 00 , then
its dening equalities and inequalities (see Denition 3.3) are a subset of those for a
gadget, and thus the performance of the partial gadget is a lower bound for that of a
true gadget.
In fact, we have always been lucky with the latter technique, in that some choice
of the set S has always yielded a lower bound and a matching gadget. In particular,
for reductions from RMBC to CUT, we have the following result.
Theorem 4.5. There is an 8-gadget reducing RMBC 00 to CUT=0, and it is
optimal and strict; there is an 8-gadget reducing RMBC 01 to CUT=0, and it is optimal
and strict; there is a 9-gadget reducing RMBC 10 to CUT=0, and it is optimal and
strict; and there is a 9-gadget reducing RMBC 11 to CUT=0, and it is optimal and
strict.
Proof. In each case, for some set S of satisfying assignments, an optimal S-
partial gadget also happens to be a true gadget, and strict. In the same notation as
in Denition 2.6, the appropriate sets S of 4-tuples (a; b; c; d) are: for RMBC 00 ,
4.2. MAX DICUT. As in the previous subsection, we observe that if there
exists an α-gadget reducing an element of PC to DICUT, there exists an α-gadget with
13 auxiliary variables. This leads to linear programs with 16 · 15 = 240 variables (one for each
possible DICUT constraint, corresponding to a directed edge) and 2^{16}
linear constraints. The solution to the linear programs gives the following.
Lemma 4.6. There exist 6.5-gadgets reducing PC_0 and PC_1 to DICUT, and they
are optimal and strict.
The PC_0 gadget is shown in Figure 4.2. Again x_1, x_2 and x_3 refer to the primary
variables and an edge from x to y represents the constraint ¬x ∧ y. The PC_1 gadget is
similar, but has all edges reversed.
Theorem 4.7. For every ε > 0, MAX DICUT is hard to approximate to within 12/13 + ε.
RMBC gadgets. As with the reductions to CUT/0, reductions from the RMBC
family members to DICUT can be done by constructing optimal S-partial gadgets,
and again (with fortuitous choices of S) these turn out to be true gadgets, and strict.
Theorem 4.8. There is a 6-gadget reducing RMBC 00 to DICUT, and it is
optimal and strict; there is a 6.5-gadget reducing RMBC 01 to DICUT, and it is optimal
and strict; there is a 6.5-gadget reducing RMBC 10 to DICUT, and it is optimal and
Fig. 4.2. 6.5-gadget reducing PC_0 to DICUT. Edges have weight 1 except when marked otherwise.
strict; and there is a 7-gadget reducing RMBC 11 to DICUT, and it is optimal and
strict.
Proof. Using, case by case, the same sets S as in the proof of Theorem 4.5, again
yields in each case an optimal S-partial gadget that also happens to be a true, strict
gadget.
4.3. MAX 2-CSP. For reducing an element of PC to the 2CSP family we need
consider only 4 auxiliary variables, for a total of 7 variables. There are two non-constant
functions on a single variable, and twelve non-constant functions on pairs of
variables, so that there are 2 · 7 + 12 · (7 choose 2) = 266 functions to consider overall. We can
again set up a linear program with a variable per function and 2^7 linear
constraints. We obtain the following.
Lemma 4.9. There exist 5-gadgets reducing PC 0 and PC 1 to 2CSP, and they are
optimal and strict.
The gadget reducing PC 0 to 2CSP is the following:
The gadget reducing PC 1 to 2CSP can be obtained from this one by complementing
all the occurrences of X 1 .
Theorem 4.10. For every ε > 0, MAX 2CSP is hard to approximate to within 9/10 + ε.
MAX 2CSP can be approximated within .859 [5]. The above theorem has implications
for probabilistically checkable proofs. Reversing the well-known reduction
from constraint satisfaction problems to probabilistically checkable proofs (cf. [1]) 3,
Theorem 4.10 yields the following theorem.
Theorem 4.11. For any ε > 0, constants c and s exist such that NP ⊆ PCP_{c,s}[log, 2]
and c/s > 10/9 − ε.
The previously known gap between the completeness and soundness achievable reading
two bits was 74/73 [2]. It would be 22/21 using Hastad's result [9] in combination
3 The reverse connection is by now a folklore result and may be proved along the lines of [2,
Proposition 10.3, Part (3)].
with the argument of [2]. Actually the reduction from constraint satisfaction problems
to probabilistically checkable proofs is reversible, and this will be important in Section
7.
RMBC gadgets. Theorem 4.12. For each element of RMBC, there is a 5-gadget
reducing it to 2CSP, and it is optimal and strict.
Proof. Using the same selected assignments as in Theorems 4.5 and 4.8 again
yields lower bounds and matching strict gadgets.
5. Interlude: Methodology. Despite their seeming variety, all the gadgets in
this paper were computed using a single program (in the language APL2) to generate
an LP, and call upon OSL (the IBM Optimization Subroutine Library) to solve it.
This \gadget-generating" program takes several parameters.
The source function f is specied explicitly, by a small program that computes
f .
The target family F is described by a single function, implemented as a small
program, applied to all possible clauses of specied lengths and symmetries. The
symmetries are chosen from among: whether clauses are unordered or ordered; whether
their variables may be complemented; and whether they may include the constants 0
or 1. For example, a reduction to MAX CUT=0 would take as F the function x 1 x 2 ,
applied over unordered binomial clauses, in which complementation is not allowed
but the constant 0 is allowed. This means of describing F is relatively intuitive and
has never restricted us, even though it is not completely general. Finally, we specify
an arbitrary set S of selected assignments, which allows us to search for S-partial
gadgets (recall Definition 3.3). From equations (3.2) and (3.4), each selected assignment
~a generates a constraint. Selecting
all satisfying assignments of f reproduces the set of constraints (2.2) for an α-gadget,
while selecting all assignments reproduces the set of constraints (2.2) and (2.4) for a
strict α-gadget.
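To make the structure of such an LP concrete, here is a minimal sketch in Python (the authors' program was written in APL2 and solved with OSL; the scipy backend, the toy source function f(x1, x2) = x1 OR x2, and the restriction to zero auxiliary variables are simplifying assumptions of this sketch, so no witness function is needed). There is one LP variable per candidate 2SAT clause plus one for α; satisfying assignments of f yield equality constraints at α, and the non-satisfying assignment yields an inequality at α − 1.

# A sketch, not the authors' program: find the smallest alpha such that some
# weighting of 2SAT clauses over the primary variables alone forms an
# alpha-gadget for f(x1,x2) = x1 OR x2. Requires numpy and scipy.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def unary(i, s):
    return lambda a: int(a[i] == s)                  # clause "x_i = s"

def binary(s0, s1):
    return lambda a: int(a[0] == s0 or a[1] == s1)   # clause (l_0 OR l_1)

candidates = [unary(i, s) for i in range(2) for s in (0, 1)]
candidates += [binary(s0, s1) for s0 in (0, 1) for s1 in (0, 1)]

f = lambda a: a[0] == 1 or a[1] == 1                 # the source constraint
n = len(candidates)
c = np.zeros(n + 1); c[-1] = 1.0                     # objective: minimize alpha (last variable)

A_eq, b_eq, A_ub, b_ub = [], [], [], []
for a in product((0, 1), repeat=2):
    row = [cl(a) for cl in candidates] + [-1.0]      # sum_j w_j C_j(a) - alpha
    if f(a):
        A_eq.append(row); b_eq.append(0.0)           # satisfying assignment: value = alpha
    else:
        A_ub.append(row); b_ub.append(-1.0)          # non-satisfying: value <= alpha - 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n + [(None, None)], method="highs")
print(res.fun)   # 1.0: the clause (x0 OR x1) with weight 1 is itself a 1-gadget

In the real searches the candidate clauses also range over the auxiliary variables and the equality constraints are written with respect to the canonical witness, but the shape of the program is the same.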
Selected assignments are specified explicitly; by default, to produce an ordinary
gadget, they are the satisfying assignments of f . The canonical witness for the selected
set of assignments is generated by our program as governed by Definition 3.6. Notice
that the definition of the witness depends on whether F is complementation-closed
or not, and this is determined by the explicitly specified symmetries.
To facilitate the generation of restricted witness matrices, we have also made
use of a \don't-care" state (in lieu of 0 or 1) to reduce the number of selected assign-
ments. For example, in reductions from RMBC 00 we have used selected assignments
of (00 1). The various LP constraints must be satisfied
for both values of any don't-care, while the witness function must not depend on
the don't-care values. So in this example, use of a don't-care reduces the number
of selected assignments from 8 to 4, reduces the number of auxiliary variables from
about 2^8 to 2^4 (ignoring duplications of the 4 primary variables, or any symmetries),
and reduces the number of constraints in the LP from about 2^(2^8) to 2^(2^4) (a
more reasonable 65,536).
selecting a subset of all satisfying assignments, in that if the LP is feasible it provides
an upper bound and a gadget, but the gadget may not be optimal.
In practice, selecting a subset of satisfying assignments has been by far the more
useful of the two techniques; so far we have always been able to choose a subset which
produces a lower bound and a gadget to match.
L. TREVISAN, G. B. SORKIN, M. SUDAN, AND D. P. WILLIAMSON
After constructing and solving an LP, the gadget-generating program uses brute
force to make an independent verification of the gadget's validity, performance, and
strictness.
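The brute-force check itself is easy to sketch; the following hedged Python fragment (the function name and the toy example are illustrative, not taken from the paper) verifies the defining conditions of a strict α-gadget by enumerating all primary and auxiliary assignments. The toy gadget is the weight-1 clause (x0 OR x1) for the source function x0 OR x1, matching the LP sketch above.

# Check, by exhaustive enumeration, that given weighted clauses form a strict
# alpha-gadget for a source function f: every satisfying assignment of f reaches
# satisfied weight exactly alpha over the auxiliary settings, and every
# non-satisfying assignment reaches exactly alpha - 1.
from itertools import product

def is_strict_gadget(f, n_primary, n_aux, weighted_clauses, alpha, eps=1e-9):
    def best(a):  # best achievable satisfied weight over all auxiliary settings
        return max(sum(w * cl(a + b) for w, cl in weighted_clauses)
                   for b in product((0, 1), repeat=n_aux))
    for a in product((0, 1), repeat=n_primary):
        target = alpha if f(a) else alpha - 1
        if abs(best(a) - target) > eps:
            return False
    return True

# toy check: weight 1 on (x0 OR x1), no auxiliary variables, alpha = 1
print(is_strict_gadget(lambda a: a[0] == 1 or a[1] == 1, 2, 0,
                       [(1.0, lambda a: int(a[0] == 1 or a[1] == 1))], 1.0))  # True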
The hardest computations were those for gadgets reducing from RMBC; on an
IBM Risc System/6000 model 43P-240 workstation, running at 233MHz, these took
up to half an hour and used 500MB or so of memory. However, the strength of [9]
makes PC virtually the sole source function of contemporary interest, and all the
reductions from PC are easy; they use very little memory, and run in seconds on an
ordinary 233MHz Pentium processor.
6. Improved Positive Results. In this section we show that we can use gadgets
to improve approximation algorithms. In particular, we look at MAX 3SAT, and
a variation, MAX 3ConjSAT, in which each clause is a conjunction (rather than a
disjunction) of three literals. An improved approximation algorithm for the latter
problem leads to improved results for probabilistically checkable proofs in which the
verifier examines only 3 bits. Both of the improved approximation algorithms rely on
strict gadgets reducing the problem to MAX 2SAT. We begin with some notation.
Definition 6.1. A (ρ1, ρ2)-approximation algorithm for MAX 2SAT is an algorithm
which receives as input an instance with unary clauses of total weight m 1 and
binary clauses of total weight m 2 , and two reals u 1 ≤ m 1 and u 2 ≤ m 2 , and produces
reals s 1 ≤ u 1 and s 2 ≤ u 2 and an assignment satisfying clauses of total weight at
least ρ1 s 1 + ρ2 s 2 . If there exists an optimum solution that satisfies unary clauses of
weight no more than u 1 and binary clauses of weight no more than u 2 , then there is a
guarantee that no assignment satisfies clauses of total weight more than s 1 + s 2 . That
is, supplied with a pair of "upper bounds" (u 1 , u 2 ), a (ρ1, ρ2)-approximation algorithm
produces a single upper bound of s 1 + s 2 , along with an assignment respecting a lower
bound of ρ1 s 1 + ρ2 s 2 .
Lemma 6.2. [5] There exists a polynomial-time (.976, .931)-approximation algorithm
for MAX 2SAT.
6.1. MAX 3SAT. In this section we show how to derive an improved approximation
algorithm for MAX 3SAT. By restricting techniques in [8] from MAX SAT to
MAX 3SAT and using a .931-approximation algorithm for MAX 2SAT due to Feige
and Goemans [5], one can obtain a .7704-approximation algorithm for MAX 3SAT.
The basic idea of [8] is to reduce each clause of length 3 to the three possible subclauses
of length 2, give each new length-2 clause one-third the original weight, and
then apply an approximation algorithm for MAX 2SAT. This approximation algorithm
is then "balanced" with another approximation algorithm for MAX 3SAT to
obtain the result. Here we show that by using a strict gadget to reduce 3SAT to
MAX 2SAT, a good (ρ1, ρ2)-approximation algorithm for MAX 2SAT leads to a .801-
approximation algorithm for MAX 3SAT.
Lemma 6.3. If for every f ∈ E3SAT there exists a strict α-gadget reducing f
to 2SAT, and there exists a (ρ1, ρ2)-approximation algorithm for MAX 2SAT, then
there exists a ρ-approximation algorithm for MAX 3SAT, where ρ depends on α, ρ1, and ρ2.
Proof. Consider an instance of MAX 3SAT with length-1 clauses of total weight
m 1 , length-2 clauses of total weight m 2 , and length-3 clauses of total weight m 3 .
We use the two algorithms listed below, getting the corresponding upper and lower
bounds on number of satisable clauses:
Random: We set each variable to 1 with probability 1/2. This gives a solution
of weight at least m 1 /2 + 3m 2 /4 + 7m 3 /8.
Semidefinite programming: We use the strict α-gadget to reduce every length-
3 clause to length-2 clauses. This gives an instance of MAX 2SAT. We apply
the (ρ1, ρ2)-approximation algorithm with parameters
to nd an approximate solution to this problem. The approximation
algorithm gives an upper bound s 1 on the weight of any solution to
the MAX 2SAT instance and an assignment of weight 1
translated back to the MAX 3SAT instance, the assignment has weight at
least
and the maximum weight satisable in the MAX 3SAT instance is at most
The performance guarantee of the algorithm which takes the better of the two
solutions is at least
We now define a sequence of simplifications which will help prove the bound.
To finish the proof of the lemma, we claim that
To see this, notice that the first inequality follows from the substitution of variables
. The second follows from the fact that setting m 1 to t 1
only reduces the numerator. The third inequality follows
from setting . The fourth is obtained by substituting a convex combination
of the arguments instead of max and then simplifying. The convex combination takes
a 1 fraction of the first argument, 2 of the second and 3 of the third, where
3:
Observe that 1 and that the condition on guarantees that 2 0.
Remark 6.4. The analysis given in the proof of the above lemma is tight. In
particular for an instance with m clauses such that
is easy to see that
The following lemma gives the strict gadget reducing functions in E3SAT to
2SAT. Notice that finding strict gadgets is almost as forbidding as finding gadgets
for RMBC, since there are 8 existential constraints in the specification of a gadget.
This time we relied instead on luck. We looked for an S-partial gadget for a particular
set S of four assignments, and found an S-partial 3.5-gadget that turned out to be a
gadget! Our choice of S was made judiciously, but we could have afforded to run
through all 8 sets S of size 4 in the hope that one would work.
Lemma 6.5. For every function f ∈ E3SAT, there exists a strict (and optimal)
3.5-gadget reducing f to 2SAT.
Proof. Since 2SAT is complementation-closed, it is sufficient to present a 3.5-
gadget reducing (X 1 ∨ X 2 ∨ X 3 ) to 2SAT. In the gadget,
every clause except the last has weight
1/2, and the last clause has weight 1.
Combining Lemmas 6.2, 6.3 and 6.5 we get a .801-approximation algorithm.
Theorem 6.6. MAX 3SAT has a polynomial-time .801-approximation algorithm.
6.2. MAX 3-CONJ SAT. We now turn to the MAX 3ConjSAT problem. The
analysis is similar to that of Lemma 6.3.
Lemma 6.7. If for every f ∈ 3ConjSAT there exists a strict (α1 + α2)-gadget
reducing f to 2SAT composed of α1 length-1 clauses and α2 length-2 clauses, and
there exists a (ρ1, ρ2)-approximation algorithm for MAX 2SAT, then there exists a
ρ-approximation algorithm for MAX 3ConjSAT, where ρ depends on α1, α2, ρ1, and ρ2.
Proof. Consider an instance of MAX 3ConjSAT with constraints of total weight
m. As in the MAX 3SAT case, we use two algorithms and take the better of the two
solutions:
Random: We set every variable to 1 with probability half. The total weight
of satisfied constraints is at least m/8.
Semidefinite programming: We use the strict gadget to reduce any constraint
to 2SAT clauses. This gives an instance of MAX 2SAT and we use the
(ρ1, ρ2)-approximation algorithm with parameters u 1 = α1 m and u 2 = α2 m.
The algorithm returns an upper bound s 1 on the total weight of satisfiable
constraints in the MAX 2SAT instance, and an assignment of measure
at least ρ1 s 1 + ρ2 s 2 . Translated back to the MAX 3ConjSAT instance,
the measure of the assignment is at least ρ1 s 1 + ρ2 s 2 − (α1 + α2 − 1)m. Fur-
thermore, s 1 ≤ α1 m, s 2 ≤ α2 m, and the total weight of satisfiable constraints
in the MAX 3ConjSAT instance is at most s 1 + s 2 − (α1 + α2 − 1)m.
Thus we get that the performance ratio of the algorithm which takes the better
of the two solutions above is at least
We now define a sequence of simplifications which will help prove the bound.
tmt
In order to prove the lemma, we claim that
To see this, observe that the first inequality follows from the substitution of variables
1)m. The second follows from setting
The third inequality follows from the fact that setting t 2 to (1 1 )m only reduces
the numerator. The fourth is obtained by substituting a convex combination of the
arguments instead of max and then simplifying.
The following gadget was found by looking for an S-partial gadget for a set S
that includes the assignment 011.
Lemma 6.8. For any f 2 3ConjSAT there exists a strict (and optimal) 4-gadget
reducing f to 2SAT. The gadget is composed of one length-1 clause and three length-2
clauses.
Proof. Recall that 2SAT is complementation-closed, and thus it is sufficient to
exhibit a gadget reducing f(a 1 , a 2 , a 3 ) = a 1 ∧ a 2 ∧ a 3 to 2SAT. Such a gadget uses an auxiliary variable Y; all
clauses have weight 1. The variables a 1 , a 2 , a 3
are primary variables and Y is an auxiliary variable.
Theorem 6.9. MAX 3ConjSAT has a polynomial-time .367-approximation algorithm.
It is shown by Trevisan [16, Theorem 18] that the above theorem has consequences
for PCP_{c,s}[log, 3]. This is because the computation of the verifier in such a proof
system can be described by a decision tree of depth 3, for every choice of random
string. Further, there is a 1-gadget reducing every function which can be computed
by a decision tree of depth k to kConjSAT.
Corollary 6.10. PCP_{c,s}[log, 3] ⊆ P provided that c/s > 2.7214. The previous
best trade-off between completeness and soundness for polynomial-time PCP classes
was c/s > 4 [16].
7. Lower Bounds for Gadget Constructions. In this section we shall show
that some of the gadget constructions mentioned in this paper and in [2] are optimal,
and we shall prove lower bounds for some other gadget constructions.
The following result is useful to prove lower bounds for the RMBC family.
Lemma 7.1. If there exists an α-gadget reducing an element of RMBC to a
complementation-closed constraint family F, then there exists an α-gadget reducing
all elements of PC to F.
Proof. If a family F is complementation-closed, then an α-gadget reducing an
element of PC (respectively RMBC) to F can be modified (using complementations)
to yield α-gadgets reducing all elements of PC (respectively RMBC) to F. For this
reason, we will restrict our analysis to PC 0 and RMBC 00 gadgets. Note that, for any
F, let Γ be an α-gadget over primary variables x 1 , ..., x 4 and auxiliary variables y 1 , ..., y K
reducing RMBC 00 to F. Let Γ' be the gadget obtained from Γ by imposing x 4 = ¬x 3 ;
it is immediate to verify that Γ' is an α-gadget reducing PC 0 to F.
7.1. Reducing PC and RMBC to 2SAT. Theorem 7.2. If Γ is an α-gadget
reducing an element of PC to 2SAT, then α ≥ 11.
Proof. It suffices to consider PC 0 . We prove that the optimum of (LP1) is at
least 11. To this end, consider the dual program of (LP1). We have a variable y_{~a,~b}
for any ~a ∈ {0,1}^3 and any ~b ∈ {0,1}^4, plus additional variables ŷ_{~a, ~b_opt(~a)}
for any ~a ∈ {0,1}^3, where ~b_opt is the "optimal" witness function defined in Section 3. The
formulation is
maximize
subject to P
y ~a; ~ b opt (~a)
(DUAL1)
There exists a feasible solution for (DUAL1) whose cost is 11.
Corollary 7.3. If Γ is an α-gadget reducing an element of RMBC to 2SAT,
then α ≥ 11.
7.2. Reducing PC and RMBC to SAT. Theorem 7.4. If Γ is an α-gadget
reducing an element of PC to SAT, then α ≥ 4.
Proof. As in the proof of Theorem 7.2 we give a feasible solution to the dual
to obtain the lower bound. The linear program that finds the best gadget reducing
PC 0 to SAT is similar to (LP1), the only difference being that a larger number N of
clauses are considered, namely,
. The dual program is then
maximize
subject to P
y ~a; ~ b opt (~a)
(DUAL2)
Consider now the following assignment of values to the variables of (DUAL2) (the
unspecified values have to be set to zero):
where d is the Hamming distance between binary sequences. It is possible to show
that this is a feasible solution for (DUAL2) and it is immediate to verify that its cost
is 4.
Corollary 7.5. If Γ is an α-gadget reducing an element of RMBC to SAT, then α ≥ 4.
7.3. Reducing kSAT to lSAT. Let k and l be any integers with k > l ≥ 3. The
standard reduction from EkSAT to lSAT can be seen as a ⌈(k−2)/(l−2)⌉-gadget.
In this section we shall show that this is asymptotically the best possible. Note that
since lSAT is complementation-closed we can restrict ourselves to considering just one
constraint function of EkSAT, say f(a 1 , ..., a k ) = a 1 ∨ ... ∨ a k .
Theorem 7.6. For any k > l > 2, if Γ is an α-gadget reducing f to lSAT, then
α ≥ k/l.
Proof. We can write a linear program whose optimum gives the smallest α such
that an α-gadget exists reducing f to lSAT. Let b be the witness function used to
formulate this linear program. We can assume that b is
-ary and we let
Also let N be the total number of constraints from lSAT that can be defined over
k + K variables. Assume some enumeration C of such constraints. The dual
LP is
maximize
subject to P
y ~a; ~ b kSAT lSAT (~a)
y ~a; ~ b kSAT lSAT (~a)
y ~a; ~ b kSAT lSAT (~a)
The witness function ~b_{kSAT→lSAT} is an "optimal" witness function for gadgets reducing
kSAT to lSAT.
Let A k ⊆ {0,1}^k be the set of binary k-ary strings with exactly one non-zero
component (note that |A k | = k). Let ~0 (respectively, ~1) be the k-ary string all of
whose components are equal to 0 (respectively, 1). The following is a feasible solution
for (DUAL3) whose cost is k/l. We only specify the non-zero values.
y ~a; ~ b kSAT lSAT (~a)
In view of the above lower bound, a gadget cannot provide an approximation-
preserving reduction from MAX SAT to MAX kSAT. More generally, there cannot be
an approximation-preserving gadget reduction from MAX SAT to, say, MAX (log n)SAT.
In partial contrast with this lower bound, Khanna et al. [13] have given an approximation-
preserving reduction from MAX SAT to MAX 3SAT and Crescenzi and Trevisan [4]
have provided a tight reduction between MAX SAT and MAX (log n)SAT, showing
that the two problems have the same approximation threshold.
Acknowledgments
. We thank Pierluigi Crescenzi and Oded Goldreich for several
helpful suggestions and remarks. We are grateful to John Forrest and David
Jensen for their assistance in efficiently solving large linear programs. We thank
Howard Karloff and Uri Zwick for pointing out the error in the earlier version of
this paper, and the counterexample to our earlier claim. We thank the anonymous
referees for their numerous comments and suggestions leading to the restructuring of
Section 3.
--R
Proof veri
Free bits
To weight or not to weight: Where is the question?
Approximating the value of two prover proof systems
Some simpli
New 3/4-approximation algorithms for the maximum satisfiability problem
Improved approximation algorithms for maximum cut and satis
Reducibility among combinatorial problems.
On syntactic versus computational views of approximability.
Approximation algorithms for the maximum satis
Parallel approximation algorithms using positive linear programming.
On the approximation of maximum satis
Approximation algorithms for constraint satisfaction problems involving at most three variables per constraint.
--TR
--CTR
Eran Halperin , Dror Livnat , Uri Zwick, MAX CUT in cubic graphs, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.506-513, January 06-08, 2002, San Francisco, California
Eran Halperin , Dror Livnat , Uri Zwick, MAX CUT in cubic graphs, Journal of Algorithms, v.53 n.2, p.169-185, November 2004
Gunnar Andersson , Lars Engebretsen, Property testers for dense constraint satisfaction programs on finite domains, Random Structures & Algorithms, v.21 n.1, p.14-32, August 2002
Takao Asano , David P. Williamson, Improved approximation algorithms for MAX SAT, Journal of Algorithms, v.42 n.1, p.173-202, January 2002
Manthey, Non-approximability of weighted multiple sequence alignment, Theoretical Computer Science, v.296 n.1, p.179-192, 4 March
Don Coppersmith , David Gamarnik , Mohammad Hajiaghayi , Gregory B. Sorkin, Random MAX SAT, random MAX CUT, and their phase transitions, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Lane A. Hemaspaandra, SIGACT news complexity theory column 34, ACM SIGACT News, v.32 n.4, December 2001
Philippe Chapdelaine , Nadia Creignou, The Complexity of Boolean Constraint Satisfaction Local Search Problems, Annals of Mathematics and Artificial Intelligence, v.43 n.1-4, p.51-63, January 2005
Alexander D. Scott , Gregory B. Sorkin, Solving Sparse Random Instances of Max Cut and Max 2-CSP in Linear Expected Time, Combinatorics, Probability and Computing, v.15 n.1-2, p.281-315, January 2006
Johan Hstad, Some optimal inapproximability results, Journal of the ACM (JACM), v.48 n.4, p.798-859, July 2001
Amin Coja-Oghlan , Cristopher Moore , Vishal Sanwalani, MAX k-CUT and approximating the chromatic number of random graphs, Random Structures & Algorithms, v.28 n.3, p.289-322, May 2006 | reductions;intractability;combinatorial optimization;approximation algorithms;NP-completeness;probabilistic proof systems |
357383 | An Evaluation of Statistical Approaches to Text Categorization. | This paper focuses on a comparative evaluation of a wide range of text categorization methods, including previously published results on the Reuters corpus and new results of additional experiments. A controlled study using three classifiers, kNN, LLSF and WORD, was conducted to examine the impact of configuration variations in five versions of Reuters on the observed performance of classifiers. Analysis and empirical evidence suggest that the evaluation results on some versions of Reuters were significantly affected by the inclusion of a large portion of unlabelled documents, making those results difficult to interpret and leading to considerable confusion in the literature. Using the results evaluated on the other versions of Reuters which exclude the unlabelled documents, the performance of twelve methods is compared directly or indirectly. For indirect comparisons, kNN, LLSF and WORD were used as baselines, since they were evaluated on all versions of Reuters that exclude the unlabelled documents. As a global observation, kNN, LLSF and a neural network method had the best performance; except for a Naive Bayes approach, the other learning algorithms also performed relatively well. | Introduction
Text categorization is the problem of assigning predefined categories to free text documents. A growing number of
statistical learning methods have been applied to this problem in recent years, including regression models[5, 18],
nearest neighbor classifiers[3, 19], Bayes belief networks [14, 9], decision trees[5, 9, 11], rule learning algorithms[1,
15, 12], neural networks[15] and inductive learning techniques[2, 8]. With more and more methods available, cross-
method evaluation becomes increasingly important. However, without a unified methodology of empirical validation,
an objective comparison is difficult.
The most serious problem is the lack of standard data collections. Even when a shared collection is chosen, there
are still many ways to introduce inconsistency. For example, the commonly used Reuters newswire corpus[6] has at
least four different versions, depending on how the training/test sets were divided, and what categories are included
or excluded in the evaluation. Lewis and Ringuette used this corpus to evaluate a decision tree approach and a naive
Bayes classifier, where they included a large portion of unlabelled documents (47% in the training set, and 58% in
the test set) [9]. It is not clear whether these unlabelled documents are all negative instances of the categories in
consideration, or that they are unlabelled simply as an oversight. Apte et al. ran a rule learning algorithm, SWAP-1,
on the same set of documents after removing the unlabelled documents[1]. They observed a 12-14% improvement
of SWAP-1 over the results in Lewis&Ringuette's experiments, and concluded that SWAP-1 can often substantially
improve results over decision trees, and that "text classification has a number of characteristics that make optimized
rule induction particularly suitable." This would be a significant finding if the same data were used in the two
experiments. However, given that 58% of the test documents were removed from the original set, it is questionable
whether the observed difference came from the change in the data, or from the difference in the methods. An analysis
later in Sections 3 and 5 will further clarify the point: the inclusion or exclusion of unlabelled documents could have
a significant impact to the results; ignoring this issue makes an evaluation problematic.
It would be ideal if a universal test collection were shared by all the text categorization researchers, or if
a controlled evaluation of a wide range of categorization methods were conducted, similar to the Text Retrieval
Conference for document retrieval[4]. The reality, however, is still far from the ideal. Cross-method comparisons
have often been attempted but only for two or three methods. The small scale of these experiments could lead to overly
general statements based on insufficient observations at one extreme, or the inability to state significant differences
at the other extreme. A solution for these problems is to integrate the available results of categorization methods
into a global evaluation, by carefully analyzing the test conditions in different experiments, and by establishing a
common basis for cross-collection and cross-experiment integration. This paper reports on an effort in this direction.
Section 2 outlines the fourteen methods being investigated. Section 3 analyzes the collection differences in
commonly used corpora, using three classifiers to examine to what degree a difference in conditions affects the
evaluation of a classifier. Section 4 defines a variety of performance measures in use and addresses the equivalence
and comparability between them. Section 5 reports on new evaluations, and compares them with previously published
results. The performance of a baseline classifier on multiple data collections is used as a reference point for a cross-
collection observation. Section 6 concludes the findings.
Categorization Methods
The intention here is to integrate available results from individual experiments into a global evaluation. Two commonly
used corpora, the Reuters news story collection[9] and the OHSUMED bibliographical document collection[7]
are chosen for this purpose. Fourteen categorization methods are investigated, including eleven methods which were
previously evaluated using these corpora, and three methods which were newly evaluated by this author. Not all of
the results are directly comparable because different versions or subsets of these corpora were used. These methods
are outlined below; the data sets and the result comparability will be analyzed in the next section.
1. CONSTRUE, an expert system consisting of manuallydeveloped categorization rules for Reuters news stories[6].
2. Decision tree (DTree) algorithms for classification[9, 11].
3. A naive Bayes model (NaiveBayes) for classification where word independence is assumed in category prediction[9,
Table
1. Data collections examination using WORD, kNN and LLSF in category ranking
Corpus Set UniqCate TrainDoc TestDocs (labelled) WORD kNN LLSF
CONSTRUE*
CONSTRUE.2
Reuters Lewis* 113 14,704 6,746 (42%) .10 .84 -
Apte 93 7,789 3,309 (100%) .21 .93 .92
full range 14,321 183,229 50,216 (100%) .16 .52 -
OHSUMED HD big*
HD small* 28 183,229 50,216 (100%) -
* Unlabelled documents are included.
* Heart Diseases (a sub-domain) Categories only, with a training-set category frequency of at least 75.
* Heart Diseases (a sub-domain) Categories only, with a training-set category frequency between 15 and 74.
4. SWAP-1, an inductive learning algorithm for classification using rules in Disjunctive Normal Form (DNF)[1].
5. A neural network approach (NNets) to classification[15].
6. CHARADE, a DNF rule learning system for classification by I. Moulinier[12].
7. RIPPER, a DNF rule learning system for classification by W. Cohen[2].
8. Rocchio, a vector space model for classification where a training set of documents are used to construct a
prototype vector for each category, and category ranking given a document is based on a similarity comparison
between the document vector and the category vectors [8].
9. An exponentiated gradient (EG) inductive learning algorithm which approximates a least squares fit [8].
10. The Widrow-Hoff (WH) inductive learning algorithm which approximates a least squares fit[8].
11. Sleeping Experts (EXPERTS), an inductive learning system using n-gram phrases in classification [2].
12. LLSF, a linear least squares fit (LLSF) approach to classification [18]. A single regression model is used for
ranking multiple categories given a test document. The input variables in the model are unique terms (words or
phrases) in the training documents, and the output variables are unique categories of the training documents.
13. kNN, a k-nearest neighbor classifier[16]. Given an arbitrary input document, the system ranks its nearest
neighbors among training documents, and uses the categories of the k top-ranking neighbors to predict the
categories of the input document. The similarity score of each neighbor document is used as the weight of its
categories, and the sum of the category weights over the k nearest neighbors is used for category ranking (a brief sketch of this scoring appears after the list below).
14. A simple, non-learning method which ranks categories for a document based on word matching (WORD)
between the document and category names. The conventional Vector Space Model is used for representing
documents and category names (each name is treated as a bag of words), and the SMART system [13] is used
as the search engine.
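As a brief illustration of item 13 (not the exact implementation evaluated here), the following Python sketch scores categories by summing the cosine similarities of the k nearest training documents; the vector representation, the default k, and the helper name are assumptions of the sketch.

import numpy as np

def knn_rank_categories(doc_vec, train_vecs, train_cats, k=45):
    # cosine similarity of the test document against every training document
    sims = train_vecs @ doc_vec
    sims = sims / (np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-12)
    top = np.argsort(-sims)[:k]                     # the k nearest neighbors
    scores = {}
    for i in top:                                   # sum similarity weights per category
        for cat in train_cats[i]:
            scores[cat] = scores.get(cat, 0.0) + float(sims[i])
    return sorted(scores.items(), key=lambda kv: -kv[1])   # ranked (category, score) pairs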
3 Collection Analysis
3.1 Two corpora
The Reuters corpus, a collection of newswire stories from 1987 to 1991, is commonly used for text categorization
research, starting from an early evaluation of the CONSTRUE expert system [6, 9, 1, 15, 12, 2] 1 . This collection is
A newly refined version named Reuters-21578 is available through Lewis' home page http://www.research.att.com/~lewis.
split into training and test sets when used to evaluate various learning systems. However, the split is not the same in
different studies. Also, various choices were made for the inclusion and exclusion of some categories in an evaluation,
as described in the next section.
The OHSUMED corpus, developed by William Hersh and colleagues at the Oregon Health Sciences University,
is a subset of the documents in the MEDLINE database 2 . It consists of 348,566 references from 270 medical journals
from the years 1987 to 1991. All of the references have titles, but only 233,445 of them have abstracts. We refer to the
title plus abstract as a document. The documents were manually indexed using subject categories (Medical Subject
Headings, or MeSH; about 18,000 categories defined) in the National Library of Medicine. The OHSUMED collection
has been used with the full range of categories (14,321 MeSH categories actually occurred) in some experiments[17],
or with a subset of categories in the heart disease sub-domain (HD, 119 categories) in other experiments[8].
3.2 Different versions
Table
1 lists the different versions or subsets of Reuters and OHSUMED. Each is referred as a "set" or "collection",
and labelled for reference. To examine the collection differences from a text categorization point of view, three
classifiers (WORD, kNN and LLSF) were applied to these collections. The assumption is that if two collections are
statistically homogeneous, then the results of a classifier on these collections should not differ too much. Inversely, if
a dramatic performance change is observed between collections, then this would indicate a need for further analysis.
Since the behavior of a single classifier may lead to biased conclusions, multiple and fundamentally different
classifiers were used instead. All the systems produce a ranked list of candidate categories given a document. The
conventional 11-point average precision[13] was used to measure the goodness of category ranking. WORD and
kNN were tested on all the collections, while LLSF was only tested on the smaller collections due to computational
limitations. The HD sets were examined together with the OHSUMED superset instead of being examined separately.
Several observations emerge from Table 1:
Homogeneous collections. The Apte set, the PARC set and the Lewis.2 of the Reuters documents are relatively
homogeneous, evident from the similar performance of WORD, kNN and LLSF on these sets. The Lewis.2 is derived
(by this author) from the original Lewis set by removing the unlabelled documents. The Apte set is obtained by
further restricting the categories to have a training set frequency of at least two. In both sets, a continuous chunk
of documents (the early ones) are used for training, and the remaining chunk of documents (the later ones) are used
for testing. The PARC set is drawn from the CONSTRUE set by eliminating the unlabelled documents and some
rare categories[15]. Instead of taking continuous chunks of documents for training and testing, it uses a different
partition. The collection is sliced into many subsets using non-overlapping time windows. The odd subsets are used
for training, and the even subsets are used for testing. The differences between the PARC set, the Apte set and the
Lewis.2 set do not seem to have a significant impact on the performance of the classifiers.
An outlier collection. The CONSTRUE collection has an unusual test set. The training set contains all the
documents in the Lewis set, Apte set or PARC set, and therefore should be statistically similar. The test set contains
only 723 documents which are not included in the other sets. The performance of WORD and kNN on this set are
clearly in favor of word matching over statistical learning. Comparing the Apte set to the CONSTRUE set, the
relative improvement in WORD is 33% (changing from 21% to 28% in average precision), while the performance
change in kNN is −13% (from 92% to 80%). Although we do not know what criteria were used in selecting the test
documents, it is clear that using this set for evaluation would lead to inconsistent results, compared to using the
other sets. The small size of this test set also makes its results statistically less reliable for evaluation.
collection. The categorization task in OHSUMED seems to be more difficult than in Reuters, as
evidenced from the significant performance decrease in both WORD and kNN. The category space is two magnitudes
larger than Reuters. The number of categories per document is also larger, about 12 to 13 categories on average in
OHSUMED while about 1.2 categories in Reuters. This means that the word/category correspondences are more
"fuzzy" in OHSUMED. Consequently, the categorization is more difficult to learn. The collections named "HD big"
(containing common categories) or "HD small" (containing 28 secondarily common categories) are sub-domains of
the heart diseases sub-domain. Since they contains only about 0.2-.3% of the full range of the categories, performance
of a classifier on these sets may not be sufficiently representative of its performance over the full domain. This does
2 OHSUMED is anonymously ftp-able from medir.ohsu.edu in the directory /pub/ohsumed
not invalidate the use of the HD data sets, but it should be taken into consideration in a cross-collection comparison
of categorization methods.
collection. The Lewis set of the Reuters corpus seems to be problematic given the large portion
of suspiciously unlabelled documents. Note that 58% of the test documents are unlabelled. According to D. Lewis,
"it may (or may not) have been a deliberate decision by the indexer" 3 . It is observed by this author that on randomly
selected test documents, the categories assigned by kNN appeared to be correct in many cases, but they were counted
as failures because these documents were given as unlabelled. This raises a serious question as to whether or not
these unlabelled documents should be included in the test set, and treated as negative instances of all categories, as
they were handled in the previous experiments[9, 2]. The following analysis addresses this question.
Assume the test set has 58% unlabelled documents, and suppose that all of the unlabelled documents should be
assigned categories but are erroneously unlabelled. Let us further assume A to be a perfect classifier which assigns a
category to a document if and only if they match, and B a trivial classifier which never assigns any category to a
document. Now if we use the errorful test set as the gold standard to evaluate the two systems, system A will have
an assessed error rate of 58% instead of the true rate of zero percent. System B will have an assessed error rate of
42% instead of the true rate of 100%. Clearly, conclusions based on such a test set can be extremely misleading.
In other words, it can make a better method look worse, and a worse method look better. Of course we do not
know precisely how many documents in the Lewis set should be labelled with categories, so the argument above is
only indicative. Nevertheless, to avoid unnecessary confusion, it would be more sensible to remove the unlabelled
documents, or use the Apte set or PARC set instead. This point will be further addressed in Section 5, with a
discussion on the problems with the experimental results on the Lewis set.
Performance Measures
Classifiers either produce scores, and hence ranked lists of potential category labels, or make binary decisions to
assign categories. A classifier that produces a score can be made into a binary classifier by thresholding the score.
The inverse process is considerably more difficult. An evaluation method applicable to a scoring classifier may not
apply to a binary method. We present evaluations suitable to the two cases and indicate in the following which are
used for comparison.
4.1 Evaluation of category ranking
The recall and precision of a category ranking is similar to the corresponding measures used in text retrieval. Given
a document as the input to a classifier, and a ranked list of categories as the output, the recall and precision at a
particular threshold on this ranked list are defined to be:
recall = (categories found and correct) / (total categories correct)
precision = (categories found and correct) / (total categories found)
where "categories found" means that the categories are above the threshold. For a collection of test documents, the
category ranking for each document is evaluated first, then the performance scores are averaged across documents.
The conventional 11-point average precision is used to measure the performance of a classifier on a collection of
documents[13].
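For concreteness, a minimal sketch of these per-document ranking measures follows (the function name and the tiny example are illustrative only; the 11-point average additionally interpolates precision at fixed recall levels, which is not shown).

def recall_precision_at(ranked_cats, true_cats, cutoff):
    # recall and precision when the top `cutoff` categories of the ranked list are kept
    found = set(ranked_cats[:cutoff])
    hit = len(found & set(true_cats))
    recall = hit / len(true_cats) if true_cats else 0.0
    precision = hit / cutoff if cutoff else 0.0
    return recall, precision

# e.g. ranked = ["grain", "wheat", "corn"], true = {"wheat"}:
# recall_precision_at(ranked, true, 2) -> (1.0, 0.5)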
4.2 Evaluation of binary classification
Performance measures in binary classification can be defined using a two-way contingency table (Table 2). The table
contains four cells:
• a counts the assigned and correct cases,
• b counts the assigned and incorrect cases,
• c counts the not assigned but incorrect cases, and
• d counts the not assigned and correct cases.
3 Refer to the documentation of the newly refined Reuters-21578 collection.
Table
2. A contingency table
YES is correct No is correct
Assigned YES a b
Assigned NO c d
The recall (r), precision (p), error (e) and fallout (f) are defined to be:
r = a/(a + c), p = a/(a + b), e = (b + c)/(a + b + c + d), f = b/(b + d),
where each ratio is taken to be defined only if its denominator is non-zero.
Given a classifier, the values of r and p often depend on internal parameter tuning; there is a trade-off
between recall and precision in general. A commonly used measure in method comparison [9, 1, 15, 12] is the
break-even point (BrkEvn) of recall and precision, i.e., when r and p are tuned to be equal. Another common
measure is called the F-measure, defined to be:
F β (r, p) = (β^2 + 1) p r / (β^2 p + r)
where β is the parameter allowing differential weighting of p and r. When the value of β is set to one (denoted as
F 1 ), recall and precision are weighted equally:
F 1 (r, p) = 2 r p / (r + p).
When r = p, the value of F 1 (r, p) is equivalent to the break-even point. Often the break-even point is close to the
optimal score of F 1 (r, p), but they are not necessarily equivalent. In other words, the optimal score of F 1 (r, p) given a
system can be higher-valued than the break-even point of this system. Therefore, the break-even point of one system
should not be compared directly with the optimal F 1 value of another system.
4.3 Global averaging
There are two ways to measure the average performance of a binary classifier over multiple categories, namely, the
macro-average and the the micro-average. In macro-averaging, one contingency table per category is used, and the
local measures are computed first and then averaged over categories. In micro-averaging, the contingency tables of
individual categories are merged into a single table where each cell of a, b, c and d is the sum of the corresponding
cells in the local tables. The global performance then is computed using the merged table. Macro-averaging gives
an equal weight to the performance on every category, regardless how rare or how common a category is. Micro-
averaging, on the other hand, gives an equal weight to the performance on every document (category instance), thus
favoring the performance on common categories. The micro-average is used in the following evaluation section.
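The difference between the two averaging schemes can be illustrated with a short sketch (the per-category counts below are invented for illustration): macro-averaging averages the per-category scores, while micro-averaging pools the contingency tables first.

def prf(a, b, c):
    # precision, recall and F1 from the cells of one contingency table
    p = a / (a + b) if a + b else 0.0
    r = a / (a + c) if a + c else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def macro_and_micro(tables):
    """tables: list of (a, b, c, d) tuples, one per category."""
    per_cat = [prf(a, b, c) for a, b, c, d in tables]
    macro = tuple(sum(x[i] for x in per_cat) / len(per_cat) for i in range(3))
    A = sum(t[0] for t in tables); B = sum(t[1] for t in tables); C = sum(t[2] for t in tables)
    micro = prf(A, B, C)
    return macro, micro

# a rare category counts as much as a common one in the macro average,
# but contributes little to the micro average:
print(macro_and_micro([(90, 10, 10, 890), (1, 0, 9, 990)]))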
Table
3 summarizes the results of all the categorization methods investigated in this study. The results of
kNN, LLSF and WORD are newly obtained. The results of the other methods are taken directly from previous
publications.
Table
3. Results of different methods in category assignments
Reuters Reuters OHSUMED OHSUMED Reuters Reuters
Apte PARC full range HD big Lewis CONSTRUE
BrkEvn BrkEvn F
NNets (N) - .82* -
DTree
NaiveBayes (L) .71 (−16%) - .65 -
"L" indicates a linear model, and "N" indicates a non-linear model;
"*" marks the local optimal on a fixed collection;
"(.)" includes the performance improvement relative to kNN;
"[.]" includes a F(1) score; the corresponding break-even point should be the same or slightly lower.
5.1 The new experiments
The KNN, LLSF and WORD experiments used the SMART system for unified preprocessing, including stop word
removal, stemming and word weighting. A phrasing option is also available in SMART but not used in these
experiments. Several term weighting options (labelled as "ltc", "atc", "lnc" , "bnn" etc. in SMART's notation) were
tried, which combine the term frequency (TF) measure and the Inverted Document Frequency (IDF) measure in a
variety of ways. The best results (with "ltc" in most cases) are reported in the Table 3.
In kNN and LLSF, aggressive vocabulary reduction based on corpus statistics was also applied as another step
of the preprocessing. This is necessary for LLSF which would otherwise be too computationally expensive to apply
to large training collections. Computational tractability is not an issue for kNN but vocabulary reduction is still
desirable since it improves categorization accuracy. About 1-2% improvements in average precision and break-even
point were observed in both kNN and LLSF when an 85% vocabulary reduction was applied. Several word selection
criteria were tested, including information gain, mutual information, a - 2 statistic and document frequency[20]. The
best results (using the - 2 statistic) were included in Table 3. Aggressive vocabulary reduction was not used in
WORD because it would reduce the chance of word-based matching between documents and category names.
KNN, LLSF and WORD produce a ranked list of categories first when a test document is given. A threshold
on category scores then is applied to obtain binary category assignments to the document. The thresholding on
category scores was optimized on training sets (for individual categories) first, and then applied to the test sets.
Other parameters in these systems include:
• k in kNN indicates the number of nearest neighbors used for category prediction, and
• p in LLSF indicates the number of principal components (or singular vectors) used in computing the linear
regression (a brief sketch follows).
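A hedged sketch of the LLSF fit in terms of a truncated SVD follows (a compact reformulation, not the code used in the experiments; the matrix layouts are assumptions of the sketch): p controls how many singular vectors enter the pseudo-inverse.

import numpy as np

def llsf_fit(A, B, p):
    """A: (terms, docs) training term matrix; B: (cats, docs) category matrix.
    Returns F minimizing ||F A - B|| over the rank-p approximation of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Up, sp, Vp = U[:, :p], s[:p], Vt[:p, :]
    return B @ Vp.T @ np.diag(1.0 / sp) @ Up.T      # F = B * pseudo-inverse(A_p)

def llsf_rank(F, doc_vec):
    scores = F @ doc_vec                            # one score per category
    return np.argsort(-scores)                      # ranked category indices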
The performance of kNN is relatively stable for a large range of k, so three values (30, 45 and 65) were tried, and the
best results are included in the result table. A satisfactory performance of LLSF depends on whether p is sufficiently
large. In the experiments of LLSF on the Reuters sets, the optimal or nearly optimal results were obtained when
using about 800 to 1000 singular vectors. A Sun SPARC Ultra-2 Server was used for the experiments. LLSF has not
yet been applied to the full set of OHSUMED training documents due to computational limitations.
5.2 Cross-experiment comparison
A row-wise comparison in Table 3 allows observation of the performance variance of a method across collections.
Unfortunately, most of the rows are sparse except for kNN and WORD. A column-wise comparison allows observation
of different methods on a fixed collection. A star marks the best result for each collection.
KNN is chosen to provide the baseline performance on each collection. Several characteristics of this method
make it preferable, i.e., efficient to test, easy to scale up, and relatively robust as a learning method. LLSF is equally
effective, based on the empirical results obtained so far; however, its training is computationally intensive, and thus
has not yet been applied to the full range of the OHSUMED collection. WORD is chosen to provide an secondary
reference point in addition to kNN, to enable a quantitative comparison between learning approaches to a simple
method that requires no knowledge or training.
The Reuters Apte set has the densest column where the results of eight systems are available. Although the
document counts reported by different researchers are somewhat inconsistent[1, 2] 4 , the differences are relatively
small compared to the size of the corpus (i.e., at most 21 miscounted out of over ten thousand training documents,
and at most 7 miscounted out of over three thousand test documents), so the impact of such differences on the
evaluation results for this set may be considered negligible.
The results on the Lewis set, on the other hand, are more problematic. That is, the inclusion of the 58%
"mysteriously" unlabelled documents in the test set makes the results difficult to interpret. For example, most of the
methods (kNN, RIPPER, Rocchio and WORD) which were evaluated on both the Apte set and the Lewis set show a
significant decrease in their performance scores on the Lewis set, but the scores of EXPERTS are almost insensitive
to the inclusion or exclusion of the large amounts of unlabelled documents in the test set. Moreover, EXPERTS has
a score near the lower end among all the learning methods evaluated on the Apte set, but the highest score on the
Lewis set. Cohen concluded EXPERTS the best performer ever reported on the Lewis set without an explanation
on its mysterious insensitivity to the large change in test documents[2]. This is suspicious because the inclusion of a
large amounts of incorrectly labelled documents in the test set should decrease the performance of a good classifier,
as analyzed in Section 3.
Another example of potential difficulties is the misleading comparison by Apte et al. between SWAP-1 (or rule
learning), NaiveBayes and DTree methods (Section 1). They claim an advantage for SWAP-1 based on a score on
the Apte set versus scores for the other methods on the Lewis set. To see the perils in such an inference, kNN has a
score of 85% on the Apte set, versus the SWAP-1 score of 79% on the same set. On the Lewis set, however, the kNN
score is 69%, i.e., 10% lower than Apte SWAP-1 score. Should we then conclude that SWAP-1 is better than kNN,
or the opposite? More interestingly, a recent result using a DTree algorithm (via C4.5) due to Moulinier scores 79%
on the Apte set[11], which is exactly the same as the SWAP-1 result. How should this be interpreted? To make the
point clear, the Lewis set should not be used for text categorization evaluation unless the status of the unlabelled
documents is resolved. Results obtained on this set can be seriously misleading, and therefore should not be used
for a comparison or to draw any conclusions. Inferences based on the CONSTRUE set should also be questioned
because the test set is much smaller than the other sets, contains 20% mysteriously unlabelled documents, and may
possibly be a biased selection (Section 3).
Finally, it may worth mentioning that the cross-method comparisons here are not necessarily precise, because
some experimental parameters might contribute to a difference in the results but are not available. For instance,
different choices could be made in stemming, term selection, term weighting, sampling strategies for training data,
thresholding for binary decisions, and so on. Without detailed information, we cannot be sure that a one or two
percent difference in break-even point or F-measure is an indication of the theoretical strength or weakness of a
learning method. It is also unclear how a significance test should be designed, given that the performance of a
method is compressed into a single number, e.g., to the break-even point of averaged recall and precision. A variance
analysis would be difficult given that the necessary input data is not generally published. Further research is needed
on this issue. Nonetheless, missing detailed information should not prohibit the good use of available information. As
long as the related issues are carefully addressed, as shown above, an integrated view across methods and experiments
is possible, especially for significant variations in results on a fully-labelled common test set.
4 Inconsistent numbers about the documents in the Apte set were found in previous papers and the corpus documentation, presumably
due to counting errors or processing errors by the individuals. The numbers included in Table 1 are those agreed by at least two research
sites. Details are available through yiming@cs.cmu.edu.
6 Discussions
Despite the imperfectness of the comparison across collections and experiments, the integrated results are clearly
informative, enabling a global observation which is not possible otherwise. Several points in the results appear to be
interesting regarding the analysis of classification models.
The impressive performance of kNN is rather surprising given that the method is quite simple and computationally
efficient. It has the best performance, together with LLSF, on the Apte set, and is equally effective as
NNets on the PARC set. On the OHSUMED set, it is the only learning method evaluated on the full domain, i.e.,
a category space which is more than one hundred times larger than those used in the evaluations of most learning
algorithms. When extending the target space from the sub-domain of 49 "HD big" categories to the full domain
of 14,321 categories, the performance decline of kNN is only 5% in absolute value, or a 9% relative decrease. In
contrast, the performance of WORD declined from 44% to 27%, or a 39% relative decrease. This suggests that
kNN is more powerful than WORD in making fine distinctions between categories. Or, it "failed" more gracefully
when the category space grows by several orders of magnitude.
The good performance of WH on "HD big" calls for deeper analysis. WH is an incremental learning algorithm
trained based on a least squares fit criterion. Its optimal performance therefore should be bounded by or close to a
least squares fit solution obtained in a batch-mode training, such as LLSF. It would be interesting in future research
to compare the empirical results of LLSF with WH. It is also worth asking whether there is something else, beyond
the core theory, which contributed to the good performance. In the WH experiment on "HD big", Lewis used a
"pocketing" strategy to select a subset of training instances from a large pool[8]. This is similar or equivalent to a
sampling strategy which divides available training instances into small chunks, examines one chunk at a time using
a validation set, and adds a new chunk to the selected ones only if it improves the performance on the validation set.
This strategy would be particularly effective when the training data are highly noisy, such as OHSUMED documents.
Nevertheless, the sampling strategy is not a part of the WH algorithm, and can be used in any other classifiers. It
would be interesting to examine the effect of the pocketing strategy in kNN on OHSUMED in future research, for
example.
Rocchio has a relatively poor performance compared to the other learning methods, and is almost as poor as
WORD on the "HD big" subset, surprisingly. This suggests that Rocchio may not be a good choice (although
commonly used) for the baseline in evaluating learning methods, because it is inferior to most methods and thus
would be not very informative especially when the comparison includes only one or two other learning methods. In
other words, Rocchio is a straw man rather than a challenging standard. KNN would be a better alternative, for
instance.
The mixture of the linear (L) and non-linear (N) classifiers among the top-ranking performers (WH, NNets,
kNN and LLSF) suggests that no general conclusion can be drawn regarding reliable improvement of non-linear
approaches over linear approaches, or vice versa. It is also hard to draw a conclusion about the advantage of a
multiple-category classification model (kNN or LLSF) over unary classification models (WH, NNets, EG, RIPPER
etc.) Either the category independence assumption in the latter type of methods is reasonable, or an improvement
in kNN and LLSF is needed in the handling of the dependence or mutual exclusiveness among categories. Resolving
this issue requires future research.
The rule induction algorithms (SWAP-1, RIPPER and CHARADE) have a similar performance, but below the
local optimum of kNN on the Apte set, and also below some other classifiers (WH, NNets) based on an indirect
comparison across collections via kNN as the baseline. This observation raises a question with respect to a claim
about the particular advantage of rule learning in text categorization. The claim was based on context-sensitivity,
i.e., the power in capturing term combinations[1, 2]. It seems that the methods which do not explicitly identify term
combinations but use the context implicitly (such as in WH, NNets, kNN and LLSF) performed at least as well.
It may be worth mentioning that a classifier can have a degree of context-sensitivity without explicitly identifying
term combinations or phrases. The classification function in LLSF, for instance, is sensitive to weighted linear
combinations of words that co-occur in training documents. This does not make it equivalent to a non-linear model,
but makes a fundamental distinction from the methods based on a term independence assumption, such as naive
Bayes models. This may be a reason for the impressive performance of kNN and LLSF. It would be interesting to
compare them with NaiveBayes if the latter were tested on the Apte set, for example.
Conclusions
The following conclusions are reached from this study:
1. The performance of a classifier depends strongly on the choice of data used for evaluation. Using a seriously
problematic collection[8], comparing categorization methods without analyzing collection differences[1], and
drawing conclusion based on the results of flawed experiments[2] raise questions about the validity of some
published evaluations. These problems need to be addressed to clarify the confusion among researchers,
and to prevent the repetition of similar mistakes. Providing information and analysis on these problems is a
major effort in this study.
2. Integrating results from different evaluations into a global comparison across methods is possible, as shown in
this paper, by evaluating one or more baseline classifiers on multiple collections, by normalizing the performance
of other classifiers using a common baseline classifier, and by analyzing collection biases based on performance
variations of several baseline classifiers. Such an integration allows insights on methods and collections which
are rarely apparent in comparisons involving two or three classifiers. It also shows an evaluation methodology
which is complementary to the effort to standardize collections and unify evaluations.
3. WH, kNN, NNets and LLSF are the top performers among the learning methods whose results were empirically
validated in this study. Rocchio had a relatively poor performance, on the other hand. All the learning methods
outperformed WORD, the non-learning method. However, the differences between some learning methods are
not as large as previously claimed[1, 2]. It is not evident in the collected results that non-linear models are
better than linear models, or that more sophisticated methods outperform simpler ones. Conclusive statements
on the strengths and weaknesses of different models requires further research.
4. Scalability of a classifier when the problem size grows by several magnitudes, or when the category space
becomes a hundred times denser, has been rarely examined in text categorization evaluations. KNN is the
only learning method evaluated on the full set of the OHSUMED categories. Its robustness in scaling up and
dealing with harder problems, and its computational efficiency make it the method of choice for approaching
very large and noisy categorization problems.
Acknowledgement
I would like to thank Jan Pedersen at Verity, David Lewis and William Cohen at AT&T, and Isabelle Moulinier at
University of Paris VI for providing the information of their experiments. I would also like to thank Jaime Carbonell
at Carnegie Mellon University for suggesting an improvement in binary decision making, Yibing Geng and Danny
Lee for the programming support, and Chris Buckley at Cornell for making the SMART system available.
--R
Towards language independent automated learning of text categorization models.
Trading mips and memory for knowledge engineering: classifying census returns on the connection machine.
Harman.
Construe/tis: a system for content-based indexing of a database of news stories
Ohsumed: an interactive retrieval evaluation and new large text collection for research.
Training algorithms for linear text classifiers.
Comparison of two learning algorithms for text categorization.
Une approche de la catégorisation de textes par l'apprentissage symbolique.
Is learning bias an issue on the text categorization problem?
Text categorization: a symbolic approach.
Automatic Text Processing: The Transformation
Automatic indexing based on bayesian inference networks.
A neural network approach to topic spotting.
Expert network: Effective and efficient learning from human decisions in text categorization and retrieval.
An evaluation of a statistical approaches to medline indexing.
A linear least squares fit mapping method for information retrieval from natural language texts.
An example-based mapping method for text categorization and retrieval
Feature selection in statistical learning of text categorization.
| comparative study;evaluation;text categorization;statistical learning algorithms
357485 | Secure Execution of Java Applets Using a Remote Playground. | AbstractMobile code presents a number of threats to machines that execute it. We introduce an approach for protecting machines and the resources they hold from mobile code and describe a system based on our approach for protecting host machines from Java 1.1 applets. In our approach, each Java applet downloaded to the protected domain is rerouted to a dedicated machine (or set of machines), the playground, at which it is executed. Prior to execution, the applet is transformed to use the downloading user's web browser as a graphics terminal for its input and output and so the user has the illusion that the applet is running on her own machine. In reality, however, mobile code runs only in the sanitized environment of the playground, where user files cannot be mounted and from which only limited network connections are accepted by machines in the protected domain. Our playground thus provides a second level of defense against mobile code that circumvents language-based defenses. The paper presents the design and implementation of a playground for Java 1.1 applets and discusses extensions of it for other forms of mobile code, including Java 1.2. | Introduction
Advances in mobile code, particularly Java, have
considerably increased the exposure of networked computers
to attackers. Due to the "push" technologies
that often deliver such code, an attacker can download
and execute programs on a victim's machine without
the victim's knowledge or consent. The attacker's
code could conceivably delete, modify, or steal data
on the victim's machine, or otherwise abuse other resources
available from that machine. Moreover, mobile
code "sandboxes" intended to constrain mobile
code have in many cases proven unsatisfactory, in that
implementation errors enable mobile code to circumvent
the sandbox's security mechanisms [1, 9].
One of the oldest ideas in security, computer or oth-
erwise, is to physically separate the attacker from the
resources of value. In this paper we present a novel
approach for physically separating mobile code from
those resources. The basic idea is to execute the mobile
code somewhere other than the user's machine,
where the resources of value to the user are not avail-
able, and to force the mobile code to interact with the
user only from this sanitized environment. Of course,
this could be achieved by running the mobile code at
the server that serves it (thereby eliminating its mo-
bility). However, the challenge is to achieve this physical
separation without eliminating the benefits derived
from code mobility, in particular reducing load
on the code's server and increasing performance by
co-locating the code and the user.
In order to achieve this protection at an organizational
level, we propose the designation of a distinguished
machine (or set of machines), a playground,
on which all mobile code served to a protected domain
is executed. That is, any mobile code pushed
to a machine in this protected domain is automatically
rerouted to and executed on the domain's play-
ground. To enable the user to interact with the mobile
code during its execution, the user's computer acts as
a graphics terminal to which the mobile code displays
its output and from which it receives its input. How-
ever, at no point is any mobile code executed on the
user's machine. Provided that valuable resources are
not available to the playground, the mobile code can
entirely corrupt the playground with no risk to the do-
main's resources. Moreover, because the playground
can be placed in close network proximity to the machines
in the domain it serves, performance degradation
experienced by users is minimal. There can even
be many playgrounds serving a domain to balance load
among them.
In this paper we report on the design and implementation
of a playground for Java 1.1 applets. As described
above, our system reroutes all Java applets retrieved
via the web to the domain's playground, where
the applets are executed using the user's browser essentially
as an I/O terminal. By disallowing the playground
to mount protected file systems or open arbitrary
network connections to domain machines-in
the limit, locate the playground just "outside" the do-
main's firewall-the domain's resources can be protected
even if the playground is completely corrupted.
Our system is largely transparent to users and applet
developers, and in some configurations requires
no changes to web browsers in use today. While there
is a class of applets that are not amenable to execution
on our present playground prototype, e.g., due
to performance requirements or code structure, in our
experience this class is a small fraction of Java applets.
As described above, the playground need not be
trusted for our system to work securely. Indeed, the
only trusted code that is common to all configurations
of our system is the browser itself and a small
"graphics server", itself a Java applet, that runs in
the browser. The graphics server implements interfaces
that the untrusted applet, running on the play-
ground, calls to interact with the user. The graphics
server is a simply structured piece of code, and thus
should be amenable to analysis. In one configuration
of our system, trust is limited to only the graphics
server and the browser, but doing so requires a minor
change to browsers available today. If we are constrained
to using today's browsers off-the-shelf, then
a web proxy component of our system, described in
Section 3, must also be trusted.
The rest of this paper is structured as follows. In
Section 2 we relate our work to previous efforts at protecting
resources from hostile mobile code. We give
an overview of our system in Section 3 and refine this
description in Section 4, where we describe the implementation
of the system in some detail. The security
of the system is discussed in Section 5, and limitations
are discussed in Section 6.
2 Related work
There are three general approaches that have been
previously proposed for securing hosts from mobile
code. The first to be deployed on a large scale for
Java is the "sandbox" model. In this model, Java
applets are executed in a restricted execution environment
(the sandbox) within the user's browser; this
sandbox attempts to prevent the applet from performing
illegal actions. This approach has met with mixed
success, in that even small implementation errors can
enable applets to entirely bypass the security restrictions
enforced by the sandbox [1].
The second general approach is to execute only mobile
code that is trusted based on some criteria. For ex-
ample, Balfanz has proposed a Java filter that allows
users to specify the servers from which to accept Java
applets (see http://www.cs.princeton.edu/sip/).
Here the criterion by which an applet is trusted is
the server that serves it. A related approach is to
determine whether to trust mobile code based on its
author, which can be determined, e.g., if the code is
digitally signed by the author. This is the approach
adopted for securing Microsoft's ActiveX content, and
is also supported for applets in JDK 1.1. Combinations
of this approach and the sandbox model are implemented
in JDK 1.2 [3, 4] and Netscape Communicator
(see [14]), which enforce access controls on an
applet based on the signatures it possesses (or other
properties). A third variation on this theme is proof-carrying
code [11], where the mobile code is accompanied
by a proof that it satisfies certain properties.
However, these techniques have not yet been applied
to languages as rich as Java (or Java bytecodes).
Our approach is compatible with both of the approaches
described above. Our playground executes
applets in sandboxes (hence the name "playground"),
and could easily be adapted to execute only "trusted"
applets based on any of the criteria above. Our approach
provides an orthogonal defense against hostile
applets, and in particular, in our system a hostile
applet is still physically separated from valuable resources
after circumventing these other defenses.
The third approach to securing hosts from mobile
code is simply to not run mobile code. A coarse-grained
approach for Java is to simply disable Java in
the browser. Another approach is to filter out all applets
at a firewall [10] (see also [9, Chapter 5]), which
has the advantage of allowing applets served from behind
the firewall to be executed.
Independently of our work, a system similar to
ours has recently been marketed by Digitivity, Inc.,
a California-based company. While there are no
descriptions of their system in the scientific liter-
ature, we have inferred several differences in our
systems from a white paper [5], their web site
(http://www.digitivity.com), and discussions with
company representatives. First, elements of the Digi-
tivity system-notably the protocol for communication
between the user's browser and (their analog
of) the playground, and the Java Virtual Machine
(JVM) running on their playground-are proprietary,
whereas our system is built using only public, widely-used
protocols and JVMs. This may enable Digitiv-
ity to more easily tune its system's performance, but
our approach promotes greater confidence in the security
of our system by exposing it to maximum public
scrutiny and understanding. Second, our system
does not require trust in certain elements of the system
that, according to [5], are trusted in their architecture.
We discuss the trusted elements of our architecture in
Section 5.1.1.
3 Architecture
The core idea in this paper is to establish a dedicated
machine (or set of machines) called a playground
at which mobile code is transparently executed, using
users' browsers as I/O terminals. In this section we
give an overview of the playground and supporting
architecture that we implemented for Java, deferring
many details to Section 4.
To understand how our system works, it is first necessary
to understand how browsers retrieve, load, and
run Java applets. When a browser retrieves a web page
written in Hypertext Markup Language (HTML), it
takes actions based on the HTML tags in that page.
One such tag is the <applet> tag, which might appear
as follows:
<applet code=hostile.class ...>
This tag instructs the browser to retrieve and run the
applet named hostile.class from the server that
served this page to the browser. The applet that
is returned is in a format called Java bytecode, suitable
for running in any JVM. This bytecode is subjected
to a bytecode verification process, loaded into the
browser's JVM, and executed (see, e.g., [8]).
In our system, when a browser requests a web page,
the request is sent to a proxy (Figure 1, step 1). The
proxy forwards the request to the end server (step 2)
and receives the requested page (step 3). As the page
is received, the proxy parses it to identify all <applet>
tags on the returning page, and for each <applet> tag
so identified, the proxy replaces the named applet with
the name of a trusted graphics server applet stored locally
to the browser (that is, in a directory named by the CLASSPATH environment variable). The proxy then sends this modified
page back to the browser (step 4), which loads the
graphics server applet upon receiving the page. For
each <applet> tag the proxy identified, the proxy retrieves
the named applet (steps 5-6) and modifies its
bytecode to use the graphics server in the requesting
browser for all input and output. The proxy forwards
the modified applet to the playground (step 7), where
it is executed using the graphics server in the browser
as an I/O terminal (step 8).
To summarize, there are three important components
in our architecture: the graphics server applet
that is loaded into the user's browser, the proxy, and
the playground. None of these need be executed on the
same machine, and indeed there are benefits to executing
them on different machines (this is discussed
in Section 5). The graphics server and the playground
are implemented in Java, and thus can run on any Java
compliant environment; the proxy is a Perl script. The
same proxy can be used for multiple browsers and multiple
playgrounds. In the case of multiple playgrounds,
the proxy can distribute load among playgrounds for
improved performance. In the following subsections,
we describe the functions of these components in more
detail. Security issues are discussed in Section 5.
3.1 The graphics server
In this section we give an overview of the graphics
server that is loaded into a user's browser in place of an
applet provided by a web server. Because the graphics
server is a Java applet, we must introduce some Java
terminology to describe it. In Java, a class is a collection
of data fields and functions (called methods) that
operate on those fields. An object is an instance of a
class; at any point in time it has a state-i.e., values
assigned to its data fields-that can be manipulated
by invoking the methods of that object (defined by
the object's class). Classes are arranged in a hierar-
chy, so that a subclass can inherit fields and methods
from its superclass. A running Java applet consists of
a collection of objects whose methods are invoked by a
runtime system, and that in turn invoke one another's
methods. For more information on Java see, e.g., [2].
3.1.1 Remote AWT classes
The Abstract Window Toolkit (AWT) is the standard
API for implementing graphical user interfaces (GUI)
in Java programs. The AWT contains classes for user
input and output devices, including buttons, choice
boxes, text fields, images, and a variety of types of
windows, to name a few. Virtually every Java applet
interacts with the user by instantiating AWT classes
and invoking the methods of the objects so created.
The intuitive goal of the graphics server is to provide
versions of the AWT classes whose instances can
be created and manipulated from the playground. For
example, the graphics server should enable a program
running on the playground to create a dialog window
in the user's browser, display it to the user, and be
informed when the user clicks the "ok" button. In
the parlance of distributed object technology, such an
object-i.e., one that can be invoked from outside the
virtual machine in which it resides-is called a remote
object, and the class that defines it is called a
remote class. So, the graphics server, running in the
user's browser, should allow other machines (the play-
ground) to create and use "remote AWT objects" in
the user's browser for interacting with the user.
Accordingly, the graphics server is implemented as
a collection of remote classes, where each remote class
(with one exception that is described in Section 3.1.2)
is a remote version of a corresponding AWT class.
Figure 1: Playground architecture
The
(modified) Java applet running on the playground creates
a collection of graphical objects in the graphics
server to implement its GUI remotely, and uses stubs
to interact with them (see Section 3.3). To minimize
the amount of code in the graphics server, each remote
class is a subclass of its corresponding AWT class,
which enables it to inherit many method implementations
from the original AWT class. Other methods
must be overwritten, for example those involving event
monitoring: in the remote class, methods involving
the remote object's events (e.g., mouse clicks on a remote
button object) must be adapted to pass back the
event to the stubs residing in the playground JVM. In
our present implementation, the remote classes employ
Remote Method Invocation (RMI) to communicate
with the playground. RMI is available on all Java
1.1 platforms, including Netscape Communicator, Internet
Explorer 4.0, JDK 1.1, and HotJava 1.0.
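To make this concrete, the sketch below shows what a remote version of an AWT button might look like. The names BrowserButton and PGActionListenerXface, and the idea of forwarding only an action-command string, are illustrative assumptions rather than the actual classes of our graphics server; exception handling and the RMI export of the object are also simplified.
import java.awt.Button;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.rmi.Remote;
import java.rmi.RemoteException;
// Hypothetical playground-side callback interface, analogous to the
// Xface interfaces described in Section 4.1.
interface PGActionListenerXface extends Remote {
  void PGActionPerformed(String command) throws RemoteException;
}
// A remote version of java.awt.Button: it inherits the normal AWT
// behavior and passes button presses back to the playground.
public class BrowserButton extends Button implements ActionListener {
  private PGActionListenerXface pgListener;
  public BrowserButton(String label, PGActionListenerXface l) {
    super(label);
    pgListener = l;
    addActionListener(this);  // watch our own button presses
  }
  // Runs in the browser; relays the event over RMI so that the stub
  // on the playground can invoke the applet's listener logic.
  public void actionPerformed(ActionEvent e) {
    try {
      pgListener.PGActionPerformed(e.getActionCommand());
    } catch (RemoteException re) {
      // In this sketch, an event is simply dropped if the
      // playground cannot be reached.
    }
  }
}
A real remote class would additionally export a remote interface of its own, so that the playground stubs can invoke methods such as setLabel(); that side is omitted here.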
3.1.2 The remote Applet class
As described in Section 3.1.1, most classes that comprise
the graphics server are remote versions of AWT
classes. The main exception to this is a remote version
of the java.applet.Applet (or just Applet) class,
which is the class that all Java applets must subclass.
The main purpose of the Applet class is to provide
a standard interface between applets and their envi-
ronment. Thus, the remote version of this class serves
to provide this interface between the applet running
on the playground and the environment with which it
must interact, namely the user's browser.
More specifically, this class implements two types
of methods. First, it provides remote interfaces to the
methods of the Applet class, so that applets on the
playground can invoke them to interact with the user's
environment. Second, this class defines a new "con-
structor" method for each remote AWT class (see Section
3.1.1). For example, there is a constructButton
method for constructing a remote button in the user's
browser. This constructor returns a reference to the
newly-created button, so that the remote methods of
the button can be invoked directly from the play-
ground. Similarly, there is a constructor method for
each of the other remote AWT classes.
When initially started, the graphics server consists
of only one object, whose class is the remote Applet
class, called BrowserServer.class. The applet on
the playground can then invoke the methods of this
object (and objects so created) to create the graphical
user interface that it desires for the user.
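As an illustration, the sketch below declares a fragment of the remote interface such an object might export to the playground. Only BrowserXface and constructButton are named in this paper; showStatus, constructDialog, and the two placeholder interfaces are assumptions made for the example.
import java.rmi.Remote;
import java.rmi.RemoteException;
// Placeholder remote interfaces for the remote AWT objects that the
// constructor methods return; their methods are omitted here.
interface BrowserButtonXface extends Remote { }
interface BrowserDialogXface extends Remote { }
// A partial sketch of the remote interface of the graphics server's
// applet object, callable from the playground.
public interface BrowserXface extends Remote {
  // A remote version of an Applet method (illustrative).
  void showStatus(String msg) throws RemoteException;
  // Constructor methods: each one creates a remote AWT object in the
  // browser and returns a reference that the playground-side stubs
  // use to invoke its methods directly.
  BrowserButtonXface constructButton(String label)
      throws RemoteException;
  BrowserDialogXface constructDialog(String title, boolean modal)
      throws RemoteException;
}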
3.2 The proxy
The proxy serves as the browser's and playground's
interface to the web. It retrieves HTML pages for the
browser and Java bytecodes for the playground, and
transforms them to formats suitable for the browser
or playground to use.
When retrieving an HTML page for the browser,
the proxy parses the returning HTML, identifies all
<applet> tags in the page, and replaces them with
references to the remote Applet class of the graphics
server (see Section 3.1.2). Thus, when the browser receives
the returned HTML page it loads this remote
Applet class (stored locally), instead of the applet
originally referenced in the page.
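The proxy itself is a Perl script; purely to illustrate the kind of substitution involved, the following Java-flavored sketch rewrites the code attribute of each <applet> tag to point at the locally stored graphics server. The class name AppletTagRewriter is ours, the regular expression is simplified (it ignores codebase and archive attributes), and the bookkeeping that remembers the original applet names for later retrieval is omitted.
import java.util.regex.Matcher;
import java.util.regex.Pattern;
// Illustration only: rewrite every <applet ... code=...> so that the
// browser loads the graphics server applet instead of the original.
public class AppletTagRewriter {
  // Matches the code attribute of an <applet> tag, quoted or not.
  private static final Pattern CODE_ATTR = Pattern.compile(
      "(<applet\\b[^>]*\\bcode\\s*=\\s*\"?)[^\"\\s>]+",
      Pattern.CASE_INSENSITIVE);
  // Returns the page with each applet's code attribute replaced by
  // the name of the locally stored graphics server applet.
  public static String rewrite(String html) {
    Matcher m = CODE_ATTR.matcher(html);
    return m.replaceAll("$1BrowserServer.class");
  }
}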
When retrieving a Java bytecode file for the play-
ground, the proxy transforms it into bytecode that
interacts with the user on the browser machine while
running at the playground. It does so by replacing all
invocations of AWT methods with invocations of the
corresponding remote AWT methods at the browser,
or more precisely, with invocations of playground-side
stubs for those remote AWT methods (which in turn
call remote AWT methods). This involves parsing the
incoming bytecode and making automatic textual substitutions
to change the names of AWT classes to the
names of the representative stubs of the corresponding
remote AWT classes.
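For instance, when the Click applet of Section 4.1 passes through the proxy, the reference to its superclass java.applet.Applet in the class file's constant pool is rewritten to name our PGApplet stub instead. The sketch below shows the flavor of this substitution; only PGApplet and PGGraphics are stub names used later in the paper, the other entries and the string-level view are simplifications (the real proxy edits the binary class-file format), and package names are omitted.
import java.util.HashMap;
import java.util.Map;
// Illustration of the textual substitution applied to incoming
// bytecode: AWT class names found in the constant pool are mapped to
// the names of the corresponding playground-side stub classes.
public class AwtNameMap {
  private static final Map<String, String> SUBST = new HashMap<String, String>();
  static {
    SUBST.put("java/applet/Applet", "PGApplet");
    SUBST.put("java/awt/Graphics", "PGGraphics");
    SUBST.put("java/awt/Button", "PGButton");  // assumed stub name
    SUBST.put("java/awt/Dialog", "PGDialog");  // assumed stub name
  }
  // Class names appear in "internal" form (with slashes) inside a
  // class file; names with no stub are left unchanged.
  public static String substitute(String internalName) {
    String stub = SUBST.get(internalName);
    return (stub != null) ? stub : internalName;
  }
}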
3.3 The playground
The playground is a machine that loads modified
applets from the proxy and executes them. As described
above, the proxy modifies the applet's bytecodes
so that playground-resident stubs for remote
AWT methods are called instead of the (non-remote)
AWT methods themselves. So, when a modified applet
runs on the playground, a "skeleton" of its GUI
containing stubs for corresponding remote graphics
objects is built on the playground. The stubs contain
code for remotely invoking the remote objects'
methods at the user's browser and for handling events
passed back from the browser. For example, in the
case of a dialog window with an "ok" button, stubs
for the window and for button objects are instantiated
at the playground. Calls to methods having to do
with displaying the window and button are passed to
the remote objects at the user's browser, and "button
press" events are passed back to methods provided by
the button's stub to handle such events. These stubs
are stored locally on the playground, but aside from
this, the playground is configured as a standard JVM.
A playground is a centralized resource that can
be carefully administered. Moreover, investments in
the playground (e.g., upgrading hardware or performing
enhanced monitoring) can improve applet performance
and security for all users in the protected do-
main. There can even be multiple playgrounds for
load-balancing.
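Not part of the system as described, but as a hint of how the proxy could spread applets across several playgrounds, a simple round-robin choice suffices; the class and host names below are placeholders.
import java.util.List;
// Illustrative round-robin selection of a playground by the proxy.
public class PlaygroundChooser {
  private final List<String> playgrounds;
  private int next = 0;
  public PlaygroundChooser(List<String> hosts) {
    playgrounds = hosts;
  }
  // Returns the playground host that should run the next applet.
  public synchronized String choose() {
    String host = playgrounds.get(next);
    next = (next + 1) % playgrounds.size();
    return host;
  }
}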
4 Implementation
4.1 An example
In order to understand how the components described
in Section 3 work together, in this section we
illustrate the execution of a simple applet. This example
describes how the applet is automatically transformed
to interact with the browser remotely, and how
the graphics server and its playground-side stubs interact
during applet execution. This section necessarily
involves low-level detail, but the casual reader can skip
ahead to Section 5 without much loss of continuity.
The applet we use for illustration is shown in Figure
2. This is a very simple (but complete) applet
that prints the word "Click!" wherever the user clicks
a mouse button. For the purposes of this discussion, it
implements two methods that we care about. The first
is an init() method that is invoked once and registers
this (i.e., the applet object) as one that should receive
mouse click events. This registration is achieved via
a call to its own addMouseListener() method, which
it inherits from its Applet superclass. The second
method it implements is a mouseClicked() method
that is invoked whenever a mouse click occurs. This
method calls getGraphics(), again inherited from
Applet, to obtain a Graphics object whose methods
can be called to display graphics. In this case, the
method invokes the drawString()
method of the Graphics object to draw the string
"Click!" where the mouse was clicked.
import java.applet.*;
import java.awt.*;
import java.awt.event.*;
public class Click extends Applet
    implements MouseListener {
  public void init() {
    // Tell this applet what MouseListener objects
    // to notify when mouse events occur. Since we
    // implement the MouseListener interface ourselves,
    // our own methods are called.
    addMouseListener(this);
  }
  // A method from the MouseListener interface.
  // Invoked when the user clicks a mouse button.
  public void mouseClicked(MouseEvent e) {
    Graphics g = getGraphics();
    g.drawString("Click!", e.getX(), e.getY());
  }
  // The other, unused methods of the MouseListener
  // interface.
  public void mousePressed(MouseEvent e) { }
  public void mouseReleased(MouseEvent e) { }
  public void mouseEntered(MouseEvent e) { }
  public void mouseExited(MouseEvent e) { }
}
Figure 2: An applet that draws "Click!" wherever the user clicks
The Click applet is a standalone applet that is
not intended to be executed using a remote graphics
display for its input and output. Thus, when
our proxy retrieves (the bytecode for) such an applet,
the applet must be altered before being run on the
playground. For one thing, the addMouseListener()
method invocation must somehow be passed to the
browser to indicate that this playground applet wants
to receive mouse events, and the mouse click events
must be passed back to the playground so that
mouseClicked() is invoked.
In our present implementation, passing this information
is achieved using Java Remote Method Invocation
(RMI). Associated with each remote class is
a stub for calling it that executes in the calling JVM.
The stub is invoked exactly as any other object is, and
once invoked, it marshals its parameters and passes
them across the network to the remote object that
services the request. Below, the stubs are described
by interfaces with the suffix Xface. For example,
BrowserXface is the interface that is used to call the
BrowserServer object of the graphics server.
RMI provides the mechanism to invoke methods
remotely, but how do we get the Click applet to use
RMI? To achieve this, we exploit the subclass inheritance
features of Java to interpose our own versions of
the methods it invokes. More precisely, we alter the
Click applet to subclass our own PGApplet, rather
than the standard Applet class. This is a straight-forward
modification of the bytecode for the Click
applet. By changing what Click subclasses in this
way, the addMouseListener() method called in Figure 2
is now the one in Figure 3. PGApplet, shown in Figure 3,
makes calls, when necessary, to the remote
applet object of the graphics server described in Section
3.1.2. The addMouseListener() method simply
adds its argument (the Click applet) to an array of
mouse listeners and, if this is the first to register, registers
itself as a mouse listener at the graphics server.
This registration at the graphics server is handled
by the addPGMouseListener() remote method
of BrowserServer, the class of the remote applet
object running on the browser (see Section 3.1.2).
The relevant code of BrowserServer is shown in
Figure
5. Recall that BrowserServer implements
the BrowserXface interface that specifies the remote
methods that can be called from the playground.
The addPGMouseListener() method, which is one of
those remote methods, records the fact that the playground
applet wants to be informed of mouse events
and then registers its own object as a mouse listener,
so that its object's mouseClicked() method is invoked
when the mouse is clicked. Such an invocation
passes the mouse-click event-or more precisely, a reference
to a BrowserEvent remote object that holds
a reference to the actual mouse-click event object-
back to the PGMouseClicked() remote method of the
PGApplet class. The PGMouseClicked() method invokes
mouseClicked() with a PGMouseEvent object,
2 For readability, in Figures 3-6 we omit import statements,
error checking, exception handling (try/catch statements), etc.
public class PGApplet
    extends Applet implements PGAppletXface {
  BrowserXface bx;
  MouseListener[] ml = new MouseListener[10];  // array size elided in the original
  int ml_index = 0;
  // Adds a MouseListener. If this is the first, then
  // register this object at the graphics server as a
  // MouseListener.
  public synchronized void
      addMouseListener(MouseListener l) {
    ml[ml_index++] = l;
    if (ml_index == 1)
      bx.addPGMouseListener(this);
  }
  // Part of the PGAppletXface remote interface.
  // Invoked from the browser graphics server when the
  // mouse is clicked.
  public void PGMouseClicked(BrowserEventXface e) {
    int i;
    PGMouseEvent pme = new PGMouseEvent(e);
    for (i = 0; i < ml_index; i++)
      ml[i].mouseClicked(pme);
  }
  // Returns an object that encapsulates the remote
  // graphics object of the browser applet.
  public Graphics getGraphics() {
    return new PGGraphics(bx.getBrowserGraphics());
  }
}
Figure 3: Part of the PGApplet class (executes on the playground)
public class PGGraphics extends Graphics {
  // Reference to the BrowserGraphics remote object
  // in the graphics server.
  private BrowserGraphicsXface bg;
  // The constructor for this object. Calls its
  // superclass constructor and saves the reference
  // to the BrowserGraphics remote object.
  public PGGraphics(BrowserGraphicsXface b) {
    super();
    bg = b;
  }
  // Invokes drawString in the graphics server.
  public void drawString(String str, int x, int y) {
    bg.drawString(str, x, y);
  }
}
Figure
4: Part of the PGGraphics class (executes on
the playground)
which holds the reference to the BrowserEvent object.
That is, the PGMouseEvent object translates invocations
of its own methods (e.g., getX() and getY()
in Figure 2) into invocations of the corresponding
BrowserEvent remote methods, which in turn translates
them into invocations of the actual Event object
in the browser. For brevity, the BrowserEvent and
PGMouseEvent classes are not shown.
The call to getGraphics() in Click is also replaced
with the PGApplet version. As shown in Figure
3, the getGraphics() method of PGApplet retrieves
a reference to a remote BrowserGraphics ob-
ject, via the getBrowserGraphics() method of Figure
5. The getGraphics() method of PGApplet
then returns this BrowserGraphics reference encapsulated
within a PGGraphics object for calling it. So,
when Click invokes drawString(), the arguments are
passed to the browser and executed (Figures 4,6).
4.2 Passing by reference
In the example of the previous section, all parameters
that needed to be passed across the network were
serializable. Object serialization refers to the ability
to write the complete state of an object to an output
stream, and then recreate that object at some
later time by reading its serialized state from an input
stream [12, 2]. Object serialization is central to
remote method invocation-and thus to communication
between the graphics server and the playground
stubs-because it allows for method parameters to be
passed to a remote method and the return value to
be passed back. In the example of Section 4.1, the remote
method invocation bg.drawString(str, x, y)
in the PGGraphics.drawString() method of Figure 4
causes no difficulty because each of str (a string) and
x and y (integers) are serializable.
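For concreteness, the following minimal sketch (ours, not taken from the implementation) shows the kind of marshaling that object serialization performs for such parameters; it uses only the standard java.io stream classes, with the string and coordinates chosen arbitrarily.

import java.io.*;

// Minimal illustration of Java object serialization: the mechanism RMI uses to
// marshal the String and int parameters of a drawString() call across the network.
public class SerializationSketch {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();

        // Marshal: write the parameters of a drawString(str, x, y) call.
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject("Hello, world");  // a String is Serializable
        out.writeInt(50);                 // primitive x
        out.writeInt(25);                 // primitive y
        out.flush();

        // Unmarshal: the receiving JVM reconstructs the values from the byte stream.
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        String str = (String) in.readObject();
        int x = in.readInt();
        int y = in.readInt();
        System.out.println(str + " at (" + x + ", " + y + ")");
    }
}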
However, not all classes are serializable. An example
is the Image class, which represents a displayable
image in a platform-dependent way. So, while the previous
invocation of bg.drawString(str, x, y) suc-
ceeds, a similar invocation bg.drawImage(img, x,
y, ...) fails because img (an instance of Image) cannot
be serialized and sent to the graphics server. Even
if all objects could be serialized, serializing and transmitting
large or complex objects can result in substantial
cost. For such reasons, an object that can
be passed as a parameter to a remote method of the
graphics server is generally constructed in the graphics
server originally (with a corresponding stub on the
playground). Then, a reference to this object in the
browser is passed to graphics server routines in place
of the object itself. In this way, only the object reference
is ever passed over the network.
public class BrowserServer
    extends Applet
    implements BrowserXface, MouseListener {

  // Reference to the MouseListener object on
  // the playground
  PGMouseListenerXface ml;

  // Part of the BrowserXface remote interface. Invoked
  // from the playground to add a remote MouseListener.
  public void
  addPGMouseListener(PGMouseListenerXface pml) {
    ml = pml;
    addMouseListener(this);
  }

  // Invoked whenever the mouse is clicked. Passes the
  // event to the MouseListener on the playground.
  public void mouseClicked(MouseEvent event) {
    ml.PGMouseClicked(new BrowserEvent(event));
  }

  // Returns a remote object that encapsulates the
  // graphics context of this applet.
  public BrowserGraphicsXface getBrowserGraphics() {
    Graphics g = getGraphics();
    return new BrowserGraphics(g);
  }
}

Figure 5: Part of the BrowserServer class (executes in the browser)
public class BrowserGraphics
    extends Graphics implements BrowserGraphicsXface {

  private Graphics g;

  // The constructor for this class. Calls the
  // superclass constructor, saves the pointer to the
  // "real" Graphics object (passed in), and exports
  // its interface to be callable from the playground.
  public BrowserGraphics(Graphics gx) {
    super();
    g = gx;
    UnicastRemoteObject.exportObject(this); // export for remote calls (one possible mechanism)
  }

  // Part of the BrowserGraphicsXface remote interface.
  // Invoked from the playground to draw a string.
  public void drawString(String str, int x, int y) {
    g.drawString(str, x, y);
  }
}

Figure 6: Part of the BrowserGraphics class (executes in the browser)
To illustrate this manner of passing objects by ref-
erence, we continue with the example of an Image.
When the downloaded applet calls for the creation
of an Image object, e.g., via the Applet.getImage()
method, our interposed PGApplet.getImage() passes
the arguments (a URL and a string, both serializable)
to a remote image creation method in the graphics
server. This remote method constructs the image,
places it in an array of objects, and returns the array
index it occupies. Playground objects then pass this
index to remote graphics server methods in place of
the image itself. For example, the BrowserGraphics
class in the graphics server implements versions of the
drawImage() method that accept image indices and
display the corresponding Image.
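A minimal sketch of this pass-by-index pattern appears below; the interface and method names (ImageStoreXface, createImage(), drawImageByIndex()) are illustrative placeholders rather than the actual remote interface of the graphics server, and the real Image objects are stood in for by strings.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical remote interface: images are created in the browser-side graphics
// server and referred to from the playground only by their array index.
interface ImageStoreXface extends Remote {
    int createImage(String url, String name) throws RemoteException;
    void drawImageByIndex(int imgIndex, int x, int y) throws RemoteException;
}

// Browser-side sketch: the platform-dependent Image objects never leave this JVM.
class ImageStore /* would implement ImageStoreXface and be exported via RMI */ {
    private final List<Object> images = new ArrayList<>();  // would hold java.awt.Image objects

    public synchronized int createImage(String url, String name) {
        images.add("image loaded from " + url + "/" + name); // placeholder for real image loading
        return images.size() - 1;                            // the playground keeps only this index
    }

    public void drawImageByIndex(int imgIndex, int x, int y) {
        Object img = images.get(imgIndex);
        System.out.println("drawing " + img + " at (" + x + ", " + y + ")");
    }

    public static void main(String[] args) {
        ImageStore store = new ImageStore();
        int idx = store.createImage("http://example.org/applets", "logo.gif"); // URL is made up
        store.drawImageByIndex(idx, 10, 20);  // the playground would make this call remotely
    }
}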
Conversely, there are circumstances in which objects
that need to be passed as parameters to remote
methods cannot first be created in the browser. This
can be due to security reasons-e.g., the object's class
is a user-defined class that overwrites methods of an
AWT class-or because the class of the parameter object
is unknown (e.g., it is only known to implement
some interface). In these circumstances, a reference
is passed in the parameter's place, and method invocations
intended for the object are passed back to the
playground object for processing. Continuing with our
Image() example, such "callbacks" can occur when
the downloaded applet applies certain image filters to
an image before displaying it. One such filter is an
RGBImageFilter: A subclass of RGBImageFilter defines
a per-pixel transformation to apply to an image
by overwriting the filterRGB() method. To avoid
loading untrusted code in the browser, such a filter
must be executed on the playground with callbacks to
its filterRGB() method.
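The kind of user-defined filter an applet might supply is sketched below; this particular grayscale filter is our own illustration, but it uses only the standard java.awt.image.RGBImageFilter API, overriding the filterRGB() callback that would be invoked from the playground.

import java.awt.image.RGBImageFilter;

// Example of a user-defined image filter: because it overrides a method of an AWT
// class, it must run on the playground, with the browser calling back into
// filterRGB() for the pixels (or colormap entries) being filtered.
public class GrayscaleFilter extends RGBImageFilter {

    public GrayscaleFilter() {
        // The same transformation applies to every pixel, so the filter can operate
        // directly on the colormap of an IndexColorModel image.
        canFilterIndexColorModel = true;
    }

    // Per-pixel transformation: replace each color by its luminance.
    public int filterRGB(int x, int y, int rgb) {
        int a = (rgb >> 24) & 0xff;
        int r = (rgb >> 16) & 0xff;
        int g = (rgb >> 8) & 0xff;
        int b = rgb & 0xff;
        int lum = (30 * r + 59 * g + 11 * b) / 100;
        return (a << 24) | (lum << 16) | (lum << 8) | lum;
    }
}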
In some circumstances, the need to pass objects
by reference can considerably hurt performance. Continuing
with the RGBImageFilter example above, filtering
an image may require that every image pixel
be passed from the browser to the playground, transformed
by the filterRGB() method, and passed back.
This can result in considerable delay in rendering the
image, though our experience is that this delay is
reasonable for images whose pixel values are indices
into a colormap array (i.e., for images that employ an
IndexColorModel).
4.3 Addressing
The previous sections described how an applet running
on the playground is coerced into using the user's
browser as its I/O terminal. Before any I/O can be
performed at the browser, however, the applet running
on the playground and the graphics server running in
the user's browser must be able to find each other to
communicate. This is complicated by the fact that
an HTML page can contain any number of <applet>
tags that, when modified by the proxy, result in multiple
instances of the graphics server running in the
browser. To retain the intended function of the page,
it is necessary to correctly match each applet running
on the playground with its corresponding instance of
the graphics server in the browser.
The addressing scheme that we use requires that
the proxy make additional changes to the HTML page
containing applet references prior to forwarding it to
the browser. Specifically, if the page contains an
<applet> tag of the form
<applet code=hostile.class ...>
then the proxy not only replaces hostile.class with
BrowserServer.class (as described in Section 3),
but also adds a parameter tag to the HTML page, like
this:
<applet code=BrowserServer.class ...>
<param name=ContactAddress value=address>
Parameter tags are tags that contain name/value
pairs. This one assigns an address value, which
the proxy generates to be unique, to be the value
of ContactAddress. The <param> tags that appear
between an <applet> tag and its terminator
(</applet>, not shown) are used to specify parameters
to the applet when it is run. In this case, the
BrowserServer.class object (i.e., the remote Applet
object of the graphics server; see Section 3.1.2) looks
for the ContactAddress field in its parameters and
obtains the address assigned by the proxy. Once the
BrowserServer object is initialized and prepared to
service requests from the playground, it binds a remote
reference to itself to the address assigned by the proxy;
this binding is stored in an RMI name server [13].
The proxy remembers what address it assigned
to each <applet> tag and provides this address to
the playground in a similar fashion. That is, the
proxy loads applets into the playground by sending
to the playground an HTML page with identical
ContactAddress <param> tags to what it forwarded
to the browser (for simplicity, this step is not shown
in
Figure
3). A JVM on the playground loads the
referenced applets (via the proxy) and uses the corresponding
!param? tags provided with each to look up
the corresponding graphics servers in the RMI name
server.
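To make this rendezvous concrete, the following sketch (ours, not code from the prototype) shows how the two ends could use the standard java.rmi.Naming interface; it assumes the BrowserXface interface of Figure 5 is available, and the registry host name is a placeholder.

import java.rmi.Naming;

// Browser side (e.g., in BrowserServer's initialization): read the address chosen
// by the proxy from the <param> tag and bind a remote reference to it:
//
//   String address = getParameter("ContactAddress");
//   Naming.rebind("//registry-host/" + address, this);
//
// Playground side: the JVM that loads the rewritten applet uses the same
// ContactAddress parameter to look up the matching graphics server.
public class LookupSketch {
    public static void main(String[] args) throws Exception {
        String address = args[0];  // value of the ContactAddress <param> tag
        BrowserXface bx = (BrowserXface) Naming.lookup("//registry-host/" + address);
        // bx can now be handed to PGApplet for remote calls such as bx.getBrowserGraphics().
    }
}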
As discussed previously, the security goal of our system
is to protect resources in the protected domain
from hostile applets that are downloaded by users in
that domain. We limit our attention to protecting
data that users do not offer to hostile applets. Protecting
data that users offer to hostile applets by, e.g.,
typing it into the applet's interface, must be achieved
via other protections that are not our concern here
(though we can utilize them on our playground if avail-
able).
5.1 Requirements
Achieving strong protection for the domain's private
resources relies on at least the following two distinct
requirements.
1. Prevent the JVM in the user's browser from loading
any classes from the network. If this is achieved,
then untrusted code can never be loaded into the
browser (unless the browser machine's local class
files are maliciously altered, a possibility that we
do not consider here). Later in this section we
describe how by disabling network class loading,
many classes of attacks that have been successfully
mounted on JVMs are prevented by our system.
2. Prevent untrusted applets running on the playground
from accessing valuable resources. Because
we assume that untrusted applets might circumvent
the language-based protections on the playground,
this requirement can be met only by relying on underlying
operating system protections on the play-
ground, or preferably by isolating the playground
from valuable user resources.
Below we describe alternatives for meeting these
requirements in a playground system.
5.1.1 Preventing network class loading
Preventing network class loading by the browser can
be achieved in one of two ways in our system.
Trusted proxy One approach is to depend on the
proxy to rewrite all incoming <applet> tags on HTML
pages to point to the graphics server class (stored locally
to the browser), and to intercept and deny entry
to any class files destined for protected machines
(as in [10]). Rewriting all <applet> tags ensures that
the browser is never directed by an incoming page to
load anything but the trusted graphics server. 3 More-
over, if the playground passes an unknown class to
3 This is not strictly true: in JavaScript-enabled browsers, an
<applet> tag may be dynamically generated when the page is
the graphics server (e.g., as a parameter to a remote
method call) the graphics server is unable to load the
class because the class' passage is denied by the proxy.
This approach works with any browser "off-the-shelf":
it requires no changes to the browser beyond specifying
the proxy as the browser's HTTP and SSL proxy,
which can typically be done using a simple preferences
menu in the browser. A disadvantage, however, is that
the proxy becomes part of the trusted computing base
of the system. For this reason, the proxy must be
written and protected carefully, and we refer to this
approach as the "trusted proxy" approach.
Untrusted proxy A second approach to preventing
the browser from loading classes over the network is to
directly disable network class loading in the browser.
The main disadvantage of this approach is that it requires
either configuration or source-code changes to
all browsers in the protected domain. In particular,
for most popular browsers today (including Netscape
Communicator and Internet Explorer 4.0), a source-code
change seems to be required to achieve this, but
we expect such changes to become easier as browser's
security policies become more configurable. An advantage
of this approach is that it excludes the proxy
from the trusted computing base of the system, and
hence we call this the "untrusted proxy" approach. In
this approach only the browser and the graphics server
classes are in the trusted computing base.
To more precisely show how the above approaches
prevent network class loading by the browser, below
we describe what causes classes to be loaded from the
network and how our approaches prevent this.
1. Section 3 briefly described the process by which a
browser loads an applet specified in an <applet>
tag in an HTML page. As described in Sections 3
and 4.3, the proxy modifies the code attribute
of each <applet> tag to reference the trusted
graphics server applet that is stored locally to the
browser. Thus, the browser is coerced into always
loading the graphics server applet as its ap-
plet. In the untrusted proxy approach, this coercion
serves merely a functional purpose-i.e., running
the graphics server to which the playground
applet can connect-but security is not threatened
if the proxy fails to rewrite the <applet> tag. In
interpreted by the browser, possibly in response to user action;
such a tag is not visible to the proxy and thus cannot be rewrit-
ten. The retrieval of the applet must then be blocked by the
proxy when it is attempted.
the trusted proxy approach, this coercion is central
to both security and function.
2. Once an applet is loaded and started, the applet
class loader in the browser loads any classes referenced
by the applet. If a class is not in the core
Java library or stored locally, the applet class loader
would normally retrieve the class over the network.
However, in our approach, the applet class loader
needs to load only local classes, because the graphics
server applet refers only to other local classes.
3. As described in Section 3, the modified applet
on the playground invokes remote methods in the
server. Because our present implementation
uses RMI to carry out these invocations, the
RMI class loader can load additional classes to pass
parameters and return values. As above, the RMI
class loader first looks in local directories to find
these classes before going to the network to retrieve
them. It is possible that the playground applet
passes an object whose class is not stored locally on
the browser machine (particularly if the playground
is corrupted). In the trusted proxy approach, the
RMI class loader goes to the network, via the proxy,
to retrieve the class, but the class is detected and
denied by the proxy. In the untrusted proxy ap-
proach, the RMI class loader returns an exception
immediately upon determining that the class is not
available locally.
5.1.2 Isolating untrusted applets
Once untrusted applets are diverted to the play-
ground, security relies on preventing those applets
from accessing protected resources. A first step is for
the playground's JVMs (and thus the applets) to execute
under accounts different from actual users' accounts
and that have few permissions associated with
them. For some resources, e.g., user files available to
the playground, this achieves security equivalent to
that provided by the access control mechanisms of the
playground's operating system. (Similarly, JVMs on
the playground execute under different accounts, to reduce
the threat of inter-JVM attacks; see Section 5.3.)
If the complete compromise of the playground is
feared, then further configuration of the network may
provide additional defenses. For example, if network
file servers are configured to refuse requests from the
playground (and if machines' requests are authenti-
cated, as with AFS [6]), then even the total corruption
of the playground does not immediately lead to the
compromise of user files. Similarly, if all machines in
the protected domain are configured to refuse network
connections from the playground, except to designated
ports reserved for browsers' graphics servers to listen,
then the compromise of the playground should gain
little for the attacker.
In the limit, an organization's playground can be
placed outside its firewall, thereby giving applets no
greater access than if they were run on the servers that
served them. However, because most firewalls disallow
connections from outside the firewall to inside, additional
steps may be necessary so that communication
can proceed uninhibited between the graphics server
in the browser and the applet on the playground.
In particular, RMI in Java 1.1 (used in our prototype)
opens network connections between the browser
and the playground in both directions, i.e., from the
browser to the playground and vice versa. One approach
to enable these connections across a firewall is
to multiplex them over a single connection from the
graphics server to the playground (i.e., from inside
to outside). This can be achieved if both the graphics
server and the playground applet interpose a customized
connection implementation (e.g., by changing
the SocketImplFactory), but for technical reasons
this does not appear to be possible with all off-the-
shelf browsers (e.g., it appears to work with HotJava
1.0 but not Netscape 3.0). Another alternative is to
establish reserved ports on which graphics servers listen
for connections from playground applets, and then
configure the firewall to admit connections from the
playground to those ports.
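One way to arrange the reserved-port alternative is sketched below; the sketch is ours and uses the standard java.rmi.server.RMISocketFactory interposition point available since JDK 1.1, with the reserved port number left as a parameter.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMISocketFactory;

// Sketch: pin every anonymous server socket RMI opens in the browser JVM to a
// designated reserved port, so the firewall only has to admit playground
// connections to that port.
public class ReservedPortSocketFactory extends RMISocketFactory {
    private final int reservedPort;   // a port the firewall is configured to admit

    public ReservedPortSocketFactory(int reservedPort) {
        this.reservedPort = reservedPort;
    }

    public Socket createSocket(String host, int port) throws IOException {
        return new Socket(host, port);            // outgoing connections are unchanged
    }

    public ServerSocket createServerSocket(int port) throws IOException {
        // RMI passes 0 when it wants an anonymous port; use the reserved one instead.
        return new ServerSocket(port == 0 ? reservedPort : port);
    }

    public static void install(int reservedPort) throws IOException {
        RMISocketFactory.setSocketFactory(new ReservedPortSocketFactory(reservedPort));
    }
}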
5.2 RMI security
Though the requirements discussed in Section 5.1
are necessary for security in our system, they may not
be sufficient. In particular, RMI is a relatively new
technology that could conceivably present new vulner-
abilities. A first step toward securing RMI is to support
authenticated and encrypted transport, so that
a network attacker cannot alter or eavesdrop on communication
between the browser and the playground.
This can also be achieved by interposing encryption
at the object serialization layer (see [12]).
A more troubling threat is possible vulnerabilities
in the object serialization routines that are used to
marshal parameters to and return values from remote
method invocations. In the worst case, a corrupted
playground could conceivably send a stream of bytes
that, when unserialized at the browser, corrupt the
type system of the JVM in the browser. Here our decision
to generally pass only primitive data types (e.g.,
integers, strings) as parameters to remote method invocations
(see Section 4.2) would seem to be fortu-
itous, as it greatly limits the number of structurally interesting
classes that the attacker has at its disposal
for attempting such an attack. However, the possibility
of a vulnerability here cannot yet be ruled out, and
several research efforts are presently examining RMI in
an effort to identify and correct such problems. This
process of public scrutiny is one of the main advantages
to building our system from public and widely
used components.
5.3 Resistance to known attacks
One way to assess the security of our approach is
to examine it in the face of known attack types. In
this section we review several classes of attack that
applets can mount, and describe the extent to which
our system defends against them.
Accessing and modifying protected resources
Several bugs in the type safety mechanisms of Java
have provided ways for applets to bypass Java sand-
boxes, including some in popular browsers [1, 9].
These penetrations typically enable the applet to perform
any operation that the operating system allows,
including reading and writing the user's files and opening
network connections to other machines to attack
them. We anticipate that in the foreseeable future,
type safety errors will continue to exist, and therefore
we must presume that applets running on our playground
may run unconstrained by the sandbox. How-
ever, they are still confined by the playground operating
system's protections and, ultimately, to attack
only those resources available to the playground. In
Section 5.1.2 we described several approaches to limit
what resources are available to applets on the play-
ground. We expect that through proper network and
operating system configuration, hostile applets can be
effectively isolated from protected resources.
Denial of service In a denial of service attack, a
hostile applet might disable or significantly degrade
access to system resources such as the CPU, disk, network,
and interactive devices. Ladue [7] presents several
such applets, e.g., that consume CPU even after
the user clicks away from the applet origin page,
that monopolize system locks, or that pop up windows
on the user's screen endlessly. Using our Java
playground, most of these applets have no effect on
the protected domain, and only affect the playground
machine. However, an applet that pops up windows
endlessly causes the graphics server running in the
user's browser to create an infinite stream of windows.
Uncontrolled, this may prevent access to the user terminal
altogether and require that the user reboot her
machine or otherwise shut off her browser. One approach
to defend against this is to configure the graphics
server and/or the playground to limit the number
or rate of window creations.
In another type of denial of service, an applet may
deny service to other applets within the JVM, e.g., by
killing off others' threads [7]. Although the sandbox
mechanisms of most browsers are intended to separate
applets in different web pages from one another, several
ways of circumventing this separation have been
shown [1, 7]. This can be prevented in our system if
the applets for each page run in a separate JVM on the
playground under a separate user account, and hence
are unable to directly affect applets from another page
(except by attacking the playground itself).
Violating privacy The Java security policy in
browsers is geared towards maintaining user privacy
by disallowing loaded applets access to any local in-
formation. In some cases, however, a Java applet can
reveal a lot about a user whose browser executes it.
For example, in [7], Ladue presents an applet that
uses a sendmail trick to send mail on the user's behalf
to a sendmail daemon running on the applet's
server. When this applet is downloaded onto a Unix
host (running the standard ident service), this mail
identifies not only the user's IP address, but also the
user's account name. In our system, the applet runs
on the playground machine under an account other
than the user's, and the information that it can reveal
is limited only to what is available on the playground.
6 Limitations
In our experience, our system is transparent to
users for most applets. There are, however, classes
of applets for which our playground architecture is
not transparent, and indeed our system may be unable
to execute certain applets at all. In particular,
the remote interface supported by the graphics server
supports the passage of certain classes as parameters
and return values of its remote methods. If the code
running on the playground attempts to pass an object
parameter whose class is an unknown subclass of the
expected parameter class, then the browser is required
to load that subclass to unserialize that parameter.
However, because class loading from the network is
prevented in our system (see Section 5.1.1), the load
does not complete and an exception is generated. At
the time of this writing, the number of applets that
cannot be supported due to this limitation does not
seem significant; indeed, we have yet to encounter an
applet for which this is a problem.
A second limitation of our approach is that by moving
the applet away from the machine on which the
user's browser executes, the applet's I/O incurs the
overhead of communicating over the network. Our experience
indicates that for many applets, this cost is
barely noticeable over typical local area network links
such as a 10-Mbit/s Ethernet. However, for applets
whose output involves intensive I/O operations, e.g.,
low-level image filtering (see Section 4.2), this overhead
can be considerable.
The emergence of more functional applets raises
new challenges in transparently executing them on a
playground. For example, one can envision a text editor
applet that can be downloaded from the network
and then used to compose a document (and save it to
disk). Such an applet is inconsistent with the sandbox
policies implemented in Netscape 3.X and Internet
Explorer 3.X, because network-loaded applets are
not allowed to write files. However, it may well be
possible with the more flexible policies implemented
in recently (or soon-to-be) released versions of these
products (e.g., [14, 4]). Our primary direction of on-going
work is extending our playground architecture
to support such applets when they become available,
while still offering protections to data not intended for
the applet.
7 Conclusion
This paper presented a novel approach to protecting
hosts from mobile code and an implementation of
this approach for Java applets. The idea behind our
approach is to execute mobile code in the sanitized
environment of an isolated machine (a "playground")
while using the user's browser as an I/O terminal.
We gave a detailed account of the technology to allow
transparent execution of Java applets separately from
their graphical interface at the user's browser. Using
our system, users can enjoy applets downloaded from
the network, while exposing only the isolated environment
of the playground machine to untrusted code.
Although we presented the playground approach and
technology in the context of Java and applets, other
mobile-code platforms may also utilize it.
Acknowledgements
We are grateful to Drew Dean, Ed Felten, Li Gong
and the anonymous referees for helpful comments.
--R
Java security: From Hotjava to Netscape and beyond.
Java in a Nutshell
Java security: Present and near future.
Going beyond the sandbox: An overview of the new security architecture in the Java TM Development Kit 1.2.
Secure mobile code management: Enabling Java for the enterprise.
Scale and Performance in a Distributed File System.
Pushing the limits of Java security.
The Java Virtual Machine Specification
Java Security: Hostile Ap- plets
Blocking Java applets at the firewall.
Safe kernel extensions without run-time checking
Extensible security architectures for Java.
--TR
--CTR
F. M. T. Brazier , B. J. Overeinder , M. van Steen , N. J. E. Wijngaards, Agent factory: generative migration of mobile agents in heterogeneous environments, Proceedings of the 2002 ACM symposium on Applied computing, March 11-14, 2002, Madrid, Spain
Peng Liu , Wanyu Zang , Meng Yu, Incentive-based modeling and inference of attacker intent, objectives, and strategies, ACM Transactions on Information and System Security (TISSEC), v.8 n.1, p.78-118, February 2005
K. G. Anagnostakis , S. Sidiroglou , P. Akritidis , K. Xinidis , E. Markatos , A. D. Keromytis, Detecting targeted attacks using shadow honeypots, Proceedings of the 14th conference on USENIX Security Symposium, p.9-9, July 31-August 05, 2005, Baltimore, MD
Peng Liu , Wanyu Zang, Incentive-based modeling and inference of attacker intent, objectives, and strategies, Proceedings of the 10th ACM conference on Computer and communications security, October 27-30, 2003, Washington D.C., USA
Alexander Moshchuk , Tanya Bragin , Damien Deville , Steven D. Gribble , Henry M. Levy, SpyProxy: execution-based detection of malicious web content, Proceedings of 16th USENIX Security Symposium on USENIX Security Symposium, p.1-16, August 06-10, 2007, Boston, MA
Pieter H. Hartel , Luc Moreau, Formalizing the safety of Java, the Java virtual machine, and Java card, ACM Computing Surveys (CSUR), v.33 n.4, p.517-558, December 2001 | security;remote method invocation;mobile code |
357769 | An Optimal Hardware-Algorithm for Sorting Using a Fixed-Size Parallel Sorting Device. | AbstractWe present a hardware-algorithm for sorting $N$ elements using either a p-sorter or a sorting network of fixed I/O size $p$ while strictly enforcing conflict-free memory accesses. To the best of our knowledge, this is the first realistic design that achieves optimal time performance, running in $\Theta ( {\frac{N \log N}{p \log p}})$ time for all ranges of $N$. Our result completely resolves the problem of designing an implementable, time-optimal algorithm for sorting $N$ elements using a p-sorter. More importantly, however, our result shows that, in order to achieve optimal time performance, all that is needed is a sorting network of depth $O(\log^2 p)$ such as, for example, Batcher's classic bitonic sorting network. | Introduction
Recent advances in VLSI have made it possible to implement algorithm-structured chips as building blocks
for high-performance computing systems. Since sorting is one of the most fundamental computing problems,
it makes sense to endow general-purpose computer systems with a special-purpose parallel sorting device,
invoked whenever its services are needed.
In this article, we address the problem of sorting N elements using a sorting device of I/O size p, where N
is arbitrary and p is fixed. The sorting device used is either a p-sorter or a sorting network of fixed I/O size
p. We assume that the input as well as the partial results reside in several constant-port memory modules.
In addition to achieving time-optimality, it is crucial that we sort without memory access conflicts.
In real-life applications, the number N of elements to be sorted is much larger than the fixed size p that
a sorting device can accommodate. In such a situation, the sorting device must be used repeatedly in order
to sort the input. The following natural question arises: "How should one schedule memory accesses and
the calls to the sorting device in order to achieve the best possible sorting performance?" Clearly, if this
question does not find an appropriate answer, the power of the sorting device will not be fully utilized.
A p-sorter is a sorting device capable of sorting p elements in constant time. Computing models for a p-
sorter do exist. For example, it is known that p elements can be sorted in O(1) time on a p × p reconfigurable
mesh [3, 7, 8]. Beigel and Gill [2] showed that the task of sorting N elements, N ≥ p, requires
$\Theta\left(\frac{N \log N}{p \log p}\right)$ calls to a p-sorter and presented an algorithm that achieves this bound.
However, their algorithm assumes that the p inputs to the p-sorter can be fetched in unit time, irrespective
of their location in memory. Since, in general, the address patterns of the operands of p-sorter operations
are irregular, it appears that the algorithm of [2] cannot realistically achieve the time complexity of
$\Theta\left(\frac{N \log N}{p \log p}\right)$, unless one can solve in constant time the addressing problem inherent in
accessing the p inputs to the p-sorter and in scattering the output back into memory. In spite of this, the
result of [2] poses an interesting open problem, namely that of designing an implementable
$\Theta\left(\frac{N \log N}{p \log p}\right)$-time sorting algorithm that uses a p-sorter.
Work supported by ONR grant N00014-97-1-0526, NSF grants CCR-9522093 and ECS-9626215, and by Louisiana grant LEQSF(1996-99)-RD-A-16.
Department of Computer Science, Old Dominion University, Norfolk, VA 23529-0162, USA
Istituto di Elaborazione dell'Informazione, C.N.R., Pisa 56126, ITALY
Department of Computer Science, University of Texas at Dallas, Richardson, TX, USA
Consider an algorithm A that sorts N elements using a p-sorter in O(f(N, p)) time. It is not clear
that algorithm A also sorts N elements using a sorting network T of I/O size p in O(f(N, p)) time. The
main reason is that the task of sorting p elements using the network T requires O(D(T)) time, that is,
time proportional to the depth D(T) of the network, which is the maximum number of nodes on a path
from an input to an output. Thus, if each p-sorter operation is replaced naively by an individual application of
T, the time required for sorting becomes O(D(T) · f(N, p)). To eliminate this O(D(T)) slowdown factor, the
network must be used in a pipelined fashion. In turn, pipelining requires that sufficient parallelism in the
p-sorter operations be identified and exploited. Recently, Olariu, Pinotti and Zheng [9] introduced a simple
but restrictive design - the row merge model - and showed that in this model N elements can be sorted in
$\Theta\left(\frac{N}{p}\log N\right)$ time using either a p-sorter or with a sorting network of I/O size p.
To achieve better sorting performance, a new algorithm-structured architecture must be designed. This
involves devising a sorting algorithm suitable for hardware implementation and, at the same time, an architecture
on which the algorithm can be executed directly. Such an algorithm-architecture combination
is commonly referred to as a hardware-algorithm. The major contribution of this article is to present the
first realistic hardware-algorithm design for sorting an arbitrary number of input elements using a fixed-size
sorting device in optimal time, while strictly enforcing conflict-free memory accesses. We introduce a
parallel sorting architecture specially designed for implementing a carefully designed algorithm. The components
of this architecture include a parallel sorting device, a set of random-access memory modules, a set of
conventional registers, and a control unit. This architecture is very simple and feasible for VLSI realization.
We show that in our architectural model N elements can be sorted in $\Theta\left(\frac{N \log N}{p \log p}\right)$ time using either a
p-sorter or a sorting network of fixed I/O size p and depth O(log² p). In conjunction with the theoretical work
of [2], our result completely resolves the problem of designing an implementable, time optimal, algorithm for
sorting N elements using a p-sorter. More importantly, however, our result shows that in order to achieve
optimal sorting performance a p-sorter is not really necessary: all that is needed is a sorting network of depth
O(log 2 p) such as, for example, Batcher's classic bitonic sorting network. As we see it, this is exceedingly
important since any known implementation of a p-sorter requires powerful processing elements, whereas
Batcher's bitonic sort network uses simple comparators.
2 Architectural Assumptions
In this section we describe the architectural framework within which we specify our optimal sorting algorithm
using a fixed-size sorting device. We consider that a sequential sorting algorithm is adequate for the case
N < p². Consequently, from now on, we assume that N ≥ p².
This assumption implies that just for addressing purposes we need at least 2 log p bits. 1 For the reader's
convenience, Figure 1 depicts our design for p = 9. To keep the figure simple, control signal lines are not
shown. The basic architectural assumptions of our sorting model include:
Figure 1: The proposed architecture for p = 9 (the memory modules with their address registers AR and data registers R, the sorting device, and the control unit).
(i) A data memory organized into p independent, constant-port, memory modules M_1, M_2, ..., M_p. Each
word is assumed to have a length of w bits, with w ≥ 2 log p. We assume that the N input elements
are distributed evenly, but arbitrarily, among the p memory modules. The words having the same
address in all memory modules are referred to as a memory row. Each memory module M i is randomly
addressed by an address register AR i , associated with an adder. Register AR i can be loaded with a
word read from memory module M i or by a row address broadcast from the CU (see below).
(ii) A set of data registers, R_i, (1 ≤ i ≤ p), each capable of storing a (w + 1.5 log p)-bit word. We refer to
the word stored in register R i as a composed word, since it consists of three fields:
- an element field of w bits for storing an element,
- a long auxiliary field of log p bits, and
- a short auxiliary field of 0.5 log p bits.
1 In the remainder of this article all logarithms are assumed to be base 2.
These fields are arranged such that the element field is to the left of the long auxiliary field, which is to
the left of the short auxiliary field. Each field of register R i can be loaded independently from memory
module M i , from the i-th output of the sorting device, or by a broadcast from the CU. The output of
register R i is connected to the i-th input of the sorting device, to the CU, and to memory module M i .
We assume that:
- In constant time, the p elements in the data registers can be loaded into the address registers or
can be stored into the p modules addressed by the address registers.
- The bits of any field of register R_i, (1 ≤ i ≤ p), can be set/reset to all 0's in constant time.
- All the fields of data register R_i, (1 ≤ i ≤ p), can be compared with a particular value, and each
of the individual fields can be set to a special value depending on the outcome of the comparison.
Moreover, this parallel compare-and-set operation takes constant time.
(iii) A sorting device of fixed I/O size p, in the form of a p-sorter or of a sorting network of depth O(log² p).
We assume that the sorting device provides data paths of width w + 1.5 log p bits from its input to its
output. The sorting device can be used to sort composed words on any combination of their element or
auxiliary fields. In case a sorting network is used as the sorting device, it is assumed that the sorting
network can operate in pipelined fashion.
(iv) A control unit (CU, for short), consisting of a control processor capable of performing simple arithmetic
and logic operations and of a control memory used to store the control program as well as the control
data. The CU generates control signals for the sorting device, for the registers, and for memory accesses.
The CU can broadcast an address or an element to all memory modules and/or to the data registers,
and can read an element from any data register. We assume that these operations take constant time.
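The composed-word layout of assumption (ii) can be modeled in software as shown below; this sketch is ours, and the concrete widths (w = 32, p = 1024, so log p = 10) are chosen purely for illustration so that a composed word fits in a Java long.

// Software model of a "composed word": an element field of w bits, a long auxiliary
// field of log p bits, and a short auxiliary field of 0.5*log p bits, packed left to
// right in that order.
public class ComposedWord {
    static final int W = 32;            // element field width (illustrative)
    static final int LOGP = 10;         // long auxiliary field width (p = 1024)
    static final int HALF = LOGP / 2;   // short auxiliary field width

    static long pack(long element, long longAux, long shortAux) {
        return (element << (LOGP + HALF)) | (longAux << HALF) | shortAux;
    }

    static long element(long cw)  { return cw >>> (LOGP + HALF); }
    static long longAux(long cw)  { return (cw >>> HALF) & ((1L << LOGP) - 1); }
    static long shortAux(long cw) { return cw & ((1L << HALF) - 1); }

    public static void main(String[] args) {
        long cw = pack(123456789L, 37, 5);
        System.out.println(element(cw) + " " + longAux(cw) + " " + shortAux(cw));
        // Because the element field occupies the most significant bits, comparing two
        // composed words numerically compares their element fields first, which is what
        // allows the sorting device to sort on the element field or on combinations of fields.
    }
}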
Described above are minimum hardware requirements for our architectural model. In case a sorting
network is used as the sorting device, one can use a "half-pipelining" scheme: the input to the network is
provided in groups of D rows. The next group is supplied only after the output of the previous group is
obtained. D is the depth of the sorting network. For the sorting network to operate at full capacity, one may
add an additional set of address (resp. data) registers. One set of address (resp. data) registers is used for
read operations, while the other set is used for write operations; both operations are performed concurrently.
Let us now estimate the VLSI area that our design uses for hardware other than data memory, the sorting
device and the CU under the word model, i.e. assuming that the word length w is a constant. We exclude
the area taken by the CU: this is because in a high-performance computer system, one of the processors can
be assigned the task of controlling the parallel sorting subsystem. Clearly, the extra area is only that used
for the address and the data registers, and this amounts to O(p) - which does not exceed the VLSI area of
any implementation of a p-sorter or of a sorting network of I/O size p.
We do not include the VLSI area for running the data memory address bus, which has a width of log N
bits, and the control signal lines to data memory and to the sorting device, since they are needed for any
architecture involving a data memory and a sorting device. It should be pointed out that for any architecture
that has p memory modules involving a total of N ≥ p² words, the control circuitry itself requires at least
Ω(max{log p, log(N/p)}) area. Since the operations performed by the control processor are
simple, we can assume that it takes constant area. The length of the control memory words is at least log N,
which is the length of data memory addresses. As will become apparent, our algorithms require O(N/p) memory rows
of data memory, and consequently, the control memory words have length O(log N). The control program
is very simple and takes constant memory. However, O(N/p) words are used for control information,
which can be stored in data memory.
3 An Extended Columnsort Algorithm
In this section, we present an extension of the well known Columnsort algorithm [5]. This extended Columnsort
algorithm will be implemented in our architectural model and will be invoked repeatedly when sorting
a large number of elements. There are two known versions of Columnsort [5, 6]: one involves eight steps, the
other seven. We provide an extension of the 8-step Columnsort, because the 7-step version does not map
well to our architecture.
Columnsort was designed to sort, in column-major order, a matrix of r rows and s columns. The
"classic" Columnsort contains 8 steps. The odd-numbered steps involve sorting each of the columns of the
matrix independently. The even-numbered steps permute the elements of the matrix in various ways. The
permutation of Step 2 picks up the elements in column-major order and lays them down in row-major order.
The permutation of Step 4 is just the reverse of that in Step 2. The permutation of Step 6 amounts to a ⌊r/2⌋
shift of the elements in each column. The permutation of Step 8 is the reverse of the permutation in Step 6.
The 8-step Columnsort works under the assumption that r ≥ 2(s − 1)². In [5], Leighton poses as an open
problem to extend the range of applicability of Columnsort without changing the algorithm "drastically". We
provide such an extension. We show that one additional sorting step is necessary and sufficient to complete
the sorting in case r ≥ s(s − 1). Our extension can be seen as trading one additional sorting step for a larger
range of applicability of the algorithm.
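The even-numbered data movements of Steps 2 and 4 are simple index arithmetic; the following sketch (ours, for illustration only) renders them directly on an r × s integer matrix.

// Step 2: pick the elements up in column-major order and lay them down in
// row-major order; Step 4 is its inverse. Element k in column-major order sits at
// M[k mod r][k / r] and is laid down at M[k / s][k mod s].
public class ColumnsortPermutations {
    static int[][] step2(int[][] m) {
        int r = m.length, s = m[0].length;
        int[][] out = new int[r][s];
        for (int k = 0; k < r * s; k++) {
            out[k / s][k % s] = m[k % r][k / r];
        }
        return out;
    }

    static int[][] step4(int[][] m) {            // reverse of step2
        int r = m.length, s = m[0].length;
        int[][] out = new int[r][s];
        for (int k = 0; k < r * s; k++) {
            out[k % r][k / r] = m[k / s][k % s];
        }
        return out;
    }
}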
Figure 2: Step by step application of the extended Columnsort algorithm. The first eight steps correspond to
the classic 8-step Columnsort.
Figure 2 shows a matrix of r rows and s columns for which the condition r ≥ 2(s − 1)²
is not satisfied. The first eight steps of this example correspond to the 8-step Columnsort algorithm which
does not produce a sorted matrix. By adding one more step, Step 9, in which the elements in each column
are sorted, we obtain an extended Columnsort algorithm. We assume a matrix M of r rows and s columns,
numbered from 0 to r − 1 and from 0 to s − 1, respectively. Our arguments rely, in part, on the following
well-known gem of computer science mentioned by Knuth [4].
Proposition 1 Let M be a matrix whose rows are sorted. After sorting the columns, the rows remain sorted.
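Proposition 1 is easy to check experimentally; the following small sketch (ours) generates a matrix with sorted rows, sorts every column, and verifies that the rows remain sorted.

import java.util.Arrays;
import java.util.Random;

// Empirical check of Proposition 1: start from a matrix with sorted rows,
// sort every column, and verify that the rows are still sorted.
public class RowsStaySorted {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int r = 6, s = 4;
        int[][] m = new int[r][s];
        for (int[] row : m) {                       // random rows, then sort each row
            for (int j = 0; j < s; j++) row[j] = rnd.nextInt(100);
            Arrays.sort(row);
        }
        for (int j = 0; j < s; j++) {               // sort every column
            int[] col = new int[r];
            for (int i = 0; i < r; i++) col[i] = m[i][j];
            Arrays.sort(col);
            for (int i = 0; i < r; i++) m[i][j] = col[i];
        }
        for (int[] row : m) {                       // rows must still be sorted
            for (int j = 1; j < s; j++)
                if (row[j - 1] > row[j]) throw new AssertionError("row not sorted");
        }
        System.out.println("rows remained sorted");
    }
}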
The following result was proved in [5].
Lemma 1 If some element x ends up in position M[i, j] at the end of Step 3, then x has rank at least si + j − s(s − 1).
The following result was mentioned without proof in [5].
Lemma 2 If element x ends up in position M[i, j] at the end of Step 3, then its rank is at most si + sj.
Proof. We are interested in determining a lower bound on the number of elements known to be larger
than or equal to x. For this purpose, we note that since at the end of Step 3, element x was in position
are known to be larger than or equal to x. Among these, at most s are
known to be smaller than or equal to s − j elements in their columns at the end of Step 1. The remaining
must be smaller than or equal to s other elements in their column at the end of Step 1.
Consequently, x is known to be smaller than or equal to at least rs − si − sj
elements of M. It follows that the rank of x is at most si + sj, as claimed. 2
For later reference, we now choose r such that r ≥ s(s − 1). (1)
Lemma 3 If some element x ends up in column c at the end of Step 4, then the correct position of x in the
sorted matrix is in one of the pairs of columns (c − 1, c) or (c, c + 1).
Proof. Consider, again, a generic element x that ended up in position M (i; j) at the end of Step 3. The
permutation specific to Step 4 guarantees that x will be moved, in Step 4, to a position that corresponds, in
the sorted matrix, to the element of rank si + j. In general, this is not the correct position of x. However,
as we shall prove, x is "close" to its correct position in the following sense: if x is in column c at the end of
Step 4, then in the sorted matrix x must be in one of the pairs of columns (c − 1, c) or (c, c + 1).
Recall that by virtue of Lemmas 1 and 2, combined, x has rank no smaller than si + j − s(s − 1) and
no larger than si + sj. Moreover, simple algebraic manipulations show that
Now consider the elements y and z of ranks si + j − s(s − 1) and si + sj, respectively. The number N(y, z)
of elements of the matrix M lying between y and z, in sorted order, is:
and so, by (2), we have
Observe that equation (4) implies that y and z must lie in adjacent columns of the sorted matrix. As we
saw, at the end of Step 4, x lies in the position corresponding to the element of rank si + j in the sorted
matrix. This confirms that x lies somewhere between y and z. Assume that x lies in column
c at the end of Step 4. Thus, the correct position of x is in one of the columns c − 1 or c in case z is in the
same column as x, and in one of the columns c or c + 1 in case y is in the same column as x. 2
Lemma 4 The rows of M are sorted at the end of Step 4.
Proof. Consider an arbitrary column k, (0 ≤ k ≤ s − 1), at the end of Step 3. The permutation specified in
Step 4 guarantees that the first r/s elements in column k will appear in positions k, k + s, k + 2s, ... of
column 0; the next group of r/s elements will appear in positions k, k + s, k + 2s, ... of column 1; and so
on. Since the columns were sorted at the end of Step 3, it follows that all the rows k, k + s, k + 2s, ...
of M are sorted at the end of Step 4. Since k was arbitrary, the conclusion follows. 2
Lemma 5 If some element x is in the bottom half of column c at the end of Step 5, then its correct position
in the sorted matrix is in one of the columns c or c + 1.
Proof. By Lemma 3, we know that the correct position of x is in one of the pairs of columns (c − 1, c)
or (c, c + 1). Thus, to prove the claim we only need to show that x cannot be in column c − 1. For this
purpose, we begin by observing that by Proposition 1 and by Lemma 4, combined, the rows and columns
are sorted at the end of Step 5. Now, suppose that element x ends up in row t, t ≥ ⌊r/2⌋, at the end of Step
5. If x belongs to column c − 1 in the sorted matrix, then all the elements of the matrix in columns c − 1
and c belonging to rows 0, ..., t must belong to column c − 1 or below. By Lemma 3, all elements that
are already in columns 0, ..., c − 2 must belong to columns 0, ..., c − 1 in the sorted matrix. Thus, at
least r + 1 additional elements must belong to column c − 1 or below, a contradiction. 2
In a perfectly similar way one can prove the following result.
Lemma 6 If some element is in the top half of column c at the end of Step 5, then its correct position in
the sorted matrix is in one of the columns c \Gamma 1 or c.
Now, suppose that we find ourselves at the end of Step 8 of the 8-step Columnsort.
Lemma 7 Every item x that is in column c at the end of Step 8 must be in column c in the sorted matrix.
Proof. We begin by showing that
no element in column c can be in column c − 1 of the sorted matrix. (5)
We proceed by induction on c. The basis is trivial: no element in column 0 can lie in the column to its left.
Assume that (5) is true for all columns less than c. In other words, no element that ends up in one of the
columns 0, ..., c − 1 at the end of Step 8, can lie in the column to its left. We only need to prove that the
statement also holds for column c. To see that this must be the case, consider first an element u that lies
in the bottom half of column c at the end of Step 8. At the end of Step 5, u must have been either in the
bottom half of column c or in the top half of column c + 1. If u belonged to the bottom half of column c
then, by Lemma 5, it must belong to columns c or c + 1 in the sorted matrix. If u belonged to the top half
of column c + 1, then, by Lemma 6, it must belong to columns c or c + 1 in the sorted matrix. Therefore, in
either case, u cannot belong to column c − 1.
Next, consider an element v that lies in the top half of column c at the end of Step 8. If v belonged to
column c − 1 in the sorted matrix, then all the elements in the bottom half of column c − 1 as well as the elements occurring above
v in column c must belong to column c − 1. By the induction hypothesis, no element that lies in column
c − 1 at the end of Step 8 can lie in column c − 2. By Lemmas 5 and 6 combined, no element that lies in the
top half of column c − 1 can belong to column c. But now, we have reached a contradiction: column
c − 1 must contain more than r elements. Thus, (5) must hold.
What we just proved is that no element in a column can belong to the column to its left. A symmetric
argument shows that no element belongs to the column immediately to its right, completing the proof. 2
By Lemma 7, one more sorting step completes the task. Thus, we have obtained a 9-step Columnsort
that trades an additional sorting step for a larger range of r versus s.
Theorem 1 The extended 9-step Columnsort algorithm correctly sorts an r × s matrix such that r ≥ s(s − 1).
4 The Basic Algorithm
In this section we show how to sort, in row-major order, m, 2 ≤ m ≤ p^{1/2}, memory rows of p elements each using our architectural
model while enforcing conflict-free memory accesses. The resulting algorithm, referred to as the basic
algorithm, will turn out to be the first stepping stone in the design of our time-optimal sorting algorithm.
The basic algorithm is an implementation of the extended Columnsort discussed in Section 3 with r = p and s = m.
Our presentation will focus on the efficient use of a generic sorting device of I/O size p. With this in
mind, we shall keep track of the following two parameters that will become key ingredients in evaluating the
running time of the algorithm:
ffl the number of calls to the sorting device, and
ffl the amount of time required by all the data movement tasks that do not involve sorting.
Assume that we have to sort, in row-major order, the elements in p^{1/2}
memory rows. The case m < p^{1/2} is perfectly similar. We assume, without loss of generality, that the input is placed, in some
order, in memory rows a + 1, a + 2, ..., a + p^{1/2}, for some integer a ≥ 0. The sorted elements will be placed in
memory rows b + 1, b + 2, ..., b + p^{1/2} such that the ranges [a + 1, a + p^{1/2}] and [b + 1, b + p^{1/2}]
do not overlap.
Step 1: Sort all the rows independently.
This step consists of the following loop:
for i := a + 1 to a + p^{1/2} do
    read the i-th memory row and sort it in non-decreasing order using the sorting device;
    let x_1 ≤ x_2 ≤ ... ≤ x_p be the resulting sorted sequence;
    for all j, 1 ≤ j ≤ p, do in parallel
        store x_j in the i-th word of memory module M_j
    endfor
endfor
Clearly, Step 1 requires p^{1/2} calls to the sorting device and O(p^{1/2}) time for data movement not involving sorting.
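A software rendering of Step 1 is sketched below (ours, for illustration); the p-sorter is abstracted as a routine that sorts one memory row of p words per call, and data memory is modeled as a two-dimensional array whose columns play the role of the memory modules.

import java.util.Arrays;

// Step 1 in software: each of the p^{1/2} memory rows involved is read, sorted by
// one call to the (abstracted) p-sorter, and written back to the same row, one word
// per memory module, so there are no memory-access conflicts.
public class StepOne {
    // stand-in for the p-sorter: one call sorts one memory row of p words
    static void pSorter(long[] row) { Arrays.sort(row); }

    static void sortRows(long[][] memory, int firstRow, int numRows) {
        for (int i = firstRow; i < firstRow + numRows; i++) {
            pSorter(memory[i]);   // word j of the result goes back to module M_j
        }
    }
}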
Step 2: Permuting rows.
The permutation specific to Step 2 of Columnsort prescribes picking up the elements in each memory row
and laying them down column by column. For an illustration, consider the case p = 9 with the initial
element distribution featured in the following matrix:
At the end of Step 2, the permuted matrix reads:
A careful examination of the permuted matrix reveals that consecutive elements in the same memory row
will end up in the same memory module (e.g. elements 1, 2, 3 will occur in memory module M 1 ). Therefore,
in order to achieve the desired permutation without memory-access conflicts, one has to devise a different
way of picking up the elements in various memory rows. For this purpose, we find it convenient to view
each element x stored in a memory module as an ordered triple ⟨x, row(x), module(x)⟩ where row(x) and
module(x) stand for the identity of the memory row and of the memory module, respectively, containing
x. Further, we let row(x)|module(x) denote the binary number obtained by concatenating the binary
representations of row(x) and module(x). The details are spelled out in the following procedure.
procedure
begin
for i := 1 to p^{1/2} do
    for all j, 1 ≤ j ≤ p, do in parallel
        read the designated word of memory module M_j
    endfor
    using the sorting device, sort the p elements in non-decreasing order of row(x)|module(x);
    let x_1, x_2, ..., x_p be the resulting sorted sequence;
    for all j, 1 ≤ j ≤ p, do in parallel
        store x_j in the designated word of memory module M_j
    endfor
endfor
Clearly, this procedure involves p^{1/2}
iterations. In each iteration, p words are read, one from each memory
module, sorted, and then written back into memory, one word per module, with no read and write memory
access conflicts. It would seem as though each memory module requires an arithmetic unit to compute
the address of the word to be accessed in each iteration. In fact, as we now point out, such arithmetic
capabilities are not required. Specifically, we can use p^{1/2}
memory rows to store "offsets" used for memory
access operations. For the above example, the offsets are
At the beginning of Step 2 all the address registers contain a + 1. In the first iteration, the entries in the
first row of the offset matrix are added to the contents of the address registers, guaranteeing that the correct
word in each memory module is being accessed. As an illustration, referring to (7), we note that the offsets
in the first row indicate that the words involved in the read operation will be found at address a
memory module M 1 , address a memory
module M 3 , and so on.
The key observation for understanding what happens in all the iterations is that in any column of the
offset matrix (7), once the entry in the first row is available, the subsequent elements in the same column
can be generated by modulo p^{1/2} arithmetic. In our architecture, this computation can be performed by the
adder associated with each address register. In turn, this observation implies that, in fact, the offset matrix
need not be stored at all, as its entries can be generated on the fly.
Yet another important point to note is that each ordered triple hx; row(x); module(x)i is a composed
word with three fields and that the composed words are sorted using the combination of two fields, namely,
row(x) and module(x). Clearly, module(x) has log p bits, but it seems that in order to represent row(x)
we need log N
bits. Actually, we can replace row(x) with the address offset contained in the offset matrix
discussed above. Since the entries in that matrix are integers no larger than p^{1/2}, 0.5 log p bits are sufficient.
Therefore, the concatenation row(x)|module(x) involves 1.5 log p bits.
From the above discussion it is clear that Step 2 requires p^{1/2} calls to the sorting device and that the time
spent on data movement operations not involving sorting is bounded by O(p^{1/2}).
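The way the 1.5 log p-bit sort key is assembled can be mirrored in software as follows; the sketch is ours, with the sorting device again stood in for by a library sort.

import java.util.Arrays;
import java.util.Comparator;

// The sort in Step 2 orders composed words by the concatenation row(x)|module(x).
// Since each address offset is at most p^{1/2}, offset and module index together fit
// in 1.5*log p bits; numerically the key is offset * p + module, so composed words
// are ordered first by destination row (offset) and then by module index.
public class Step2Key {
    static final class Composed {
        long element; int offset; int module;
        Composed(long e, int o, int m) { element = e; offset = o; module = m; }
        long key(int p) { return (long) offset * p + module; }
    }

    // stand-in for one call to the sorting device, sorting p composed words by key
    static void sortByKey(Composed[] words, int p) {
        Arrays.sort(words, Comparator.comparingLong((Composed w) -> w.key(p)));
    }
}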
Step 3: Same as Step 1.
Step 4: The permutation of Step 2 is performed in reverse; the permuted set of words is stored in rows
a + 1, a + 2, ..., a + p^{1/2}.
Step 5: Same as Step 1.
Step 6: Shifting rows.
We shall permute the elements slightly differently from the way specified by Columnsort. However, it is easy
to verify that the elements supposed to end up in a given row, indeed end up in the desired row. Since Step
7 sorts the rows, the order in which the elements are placed in the row in Step 6 is immaterial.
The permutation of Step 6 is best illustrated by considering a particular example. Specifically, the
permutation specified by Step 6 of Columnsort involving the three rows shown in (6) is:
Our permutation is a bit different:
Assume that the p^{1/2} consecutive input rows are stored in memory starting from memory row a + 1. In
addition, we assume that memory row a is available to us. Some of its contents are immaterial and will
be denoted by "?"s. The motivation is anchored in the observation that in Step 7 we do not have to sort
memory rows a and a + p^{1/2}: the elements in these rows will be sorted in Step 9. Consequently, the only rows
that have to be sorted in Step 7 are rows a + 1, a + 2, ..., a + p^{1/2} − 1. The details follow.
procedure ROW SHIFT
begin
for i := a + 1 to a + p^{1/2} do
    for all j, 1 ≤ j ≤ ⌈p/2⌉, do in parallel
        read the i-th word of memory module M_j and
        store it in the (i − 1)-th word of memory module M_j
    endfor
endfor
It is important to note that, in our implementation, Step 6 does not involve sorting. However, O(p^{1/2})
time is spent on data movement operations that do not involve sorting.
Step 7: Same as Step 1.
Step 8: This is simply the reverse of the data movement in Step 6.
Step 9: Same as Step 1.
To summarize, we have proved the following result.
Theorem 2 A set of p^{3/2} elements stored in p^{1/2} memory rows can be sorted, in row-major order, without
memory-access conflicts, in at most 7p^{1/2} calls to a sorting device of I/O size p and in O(p^{1/2}) time for data
movement not involving sorting.
In essentially the same way one can prove the following companion result to Theorem 2.
Theorem 3 The task of sorting, in row-major order, a set of mp elements stored in m, 2 ≤ m ≤ p^{1/2},
memory rows can be performed, without memory-access conflicts, in at most 7m calls to a sorting device of
I/O size p and in O(m) time for data movement operations not involving sorting.
In the remainder of this section we present an important application of the basic algorithm. Suppose
that we wish to merge two sorted sequences A = a_1 ≤ a_2 ≤ ... ≤ a_n and B = b_1 ≤ b_2 ≤ ... ≤ b_n. Our
algorithm for merging A and B relies on the following technical result.
Lemma 8 Assume that a_{⌈n/2⌉} ≤ b_{⌊n/2⌋+1} and let C = c_1 ≤ c_2 ≤ ... ≤ c_n be the sequence obtained by merging
b_1, ..., b_{⌈n/2⌉} and a_{⌈n/2⌉+1}, ..., a_n. Then, no element in the sequence D = a_1, ..., a_{⌈n/2⌉}, c_1, ..., c_{⌊n/2⌋} is
strictly larger than any element in the sequence E = c_{⌊n/2⌋+1}, ..., c_n, b_{⌈n/2⌉+1}, ..., b_n.
Proof. We begin by showing that no a_i, (1 ≤ i ≤ ⌈n/2⌉), is strictly larger than any element in E. The
assumption that a_{⌈n/2⌉} ≤ b_{⌊n/2⌋+1} guarantees that if the claim is false, then some element a_i, (1 ≤ i ≤ ⌈n/2⌉), is
strictly larger than some element c_k, (⌊n/2⌋+1 ≤ k ≤ n).
To evaluate the position of the element c_k in the sorted sequence C, observe that all the ⌊n/2⌋ elements
in C that come from A are known to be larger than or equal to a_i and, therefore, strictly larger than c_k.
Consequently, if n is even, then ⌊n/2⌋ elements in C are larger than c_k, implying that k ≤ ⌊n/2⌋, a contradiction.
On the other hand, if n is odd, then ⌈n/2⌉ = ⌊n/2⌋+1, and, by assumption, b_{⌈n/2⌉} is no smaller than a_i and, therefore,
strictly larger than c_k. In this case, at least ⌈n/2⌉ elements in C are strictly larger than c_k. It follows that
k ≤ ⌊n/2⌋, contradicting that c_k belongs to E.
Next, we claim that no c_i, (1 ≤ i ≤ ⌊n/2⌋), is larger than any element in E. Since C is sorted, if the
statement is false, then c_i > b_k for some k, (k ≥ ⌈n/2⌉+1). Notice that all elements of C that come from B are
smaller than or equal to b_k and, therefore, strictly smaller than c_i. It follows that i ≥ ⌈n/2⌉+1, contradicting
that c_i belongs to D. This completes the proof of the lemma. 2
A mirror argument proves the following companion result to Lemma 8.
Lemma 9 Assume that a_{⌈n/2⌉} ≥ b_{⌊n/2⌋+1} and let C = c_1 ≤ c_2 ≤ ... ≤ c_n be the sequence obtained by merging
a_1, ..., a_{⌈n/2⌉} and b_{⌈n/2⌉+1}, ..., b_n. Then, no element in the sequence D = b_1, ..., b_{⌈n/2⌉}, c_1, ..., c_{⌊n/2⌋} is
strictly larger than any element in the sequence E = c_{⌊n/2⌋+1}, ..., c_n, a_{⌈n/2⌉+1}, ..., a_n.
It is worth noting that Lemmas 8 and 9, combined, show that given two sorted sequences, each of size
n, the task of merging them can be handled as follows: we begin by splitting the two sequences into two
sequences of size n each, such that no element in the first one is strictly larger than any element in the
second one. Once this "separation" is available, all that remains to be done is to sort the two sequences
independently. The noteworthy feature of this approach is that it fits extremely well our architecture.
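As an illustration only (plain Python, ignoring the architecture), the sketch below merges two sorted sequences by the split suggested by Lemmas 8 and 9: build C, form the two halves D and E, and sort them independently. The index conventions follow our reconstruction of Lemma 8 above and should be read as an assumption.

from math import ceil, floor
import random

def merge_by_split(A, B):
    n = len(A)                      # A and B sorted, same length n
    if not (A[ceil(n/2) - 1] <= B[floor(n/2)]):
        A, B = B, A                 # symmetric case (Lemma 9)
    h, l = ceil(n/2), floor(n/2)
    C = sorted(B[:h] + A[h:])       # merge of B's first half and A's second half
    D = A[:h] + C[:l]               # first output half
    E = C[l:] + B[h:]               # second output half
    return sorted(D) + sorted(E)    # the two halves can be sorted independently

for _ in range(100):
    A = sorted(random.randint(0, 9) for _ in range(7))
    B = sorted(random.randint(0, 9) for _ in range(7))
    assert merge_by_split(A, B) == sorted(A + B)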
2 and consider a sorted sequence stored in m memory rows
stored in m memory rows
1. The goal is to merge these two sequences and to store the resulting sorted
sequence in memory rows r A ; r A 1. The details follow.
procedure MERGE TWO GROUPS
begin
if a d mp
use the basic algorithm to sort b 1
non-increasing
order as c mp - c store the result in memory rows r
else
use the basic algorithm to sort a 1 ; a
non-increasing
order as c mp - c store the result in memory rows r
do
copy memory row r row r A
copy memory row r A +m \Gamma i into memory row r B
endfor
if m is odd then
copy the leftmost d pe elements in row r
into the leftmost d pe positions of row r A
copy the rightmost b pc elements in row r A
into the rightmost b pc positions of row r
endif
endif
if m is odd then
copy the leftmost d pe elements in memory row r C
into the leftmost d pe positions of row r
copy the rightmost b pc elements in row r C
into the rightmost b pc positions of row r A
endif
do
copy memory row r C
copy memory row r C
endfor
use the basic algorithm to sort memory rows r A ; r A non-decreasing
use the basic algorithm to sort memory rows r non-decreasing order
It is obvious that procedure MERGE TWO GROUPS can be implemented directly in our architectural
model. One point is worth discussing, however. Specifically, the task of sorting a sequence in non-increasing
order can be performed in our architecture as follows. The signs of all the elements to be sorted are flipped
and the resulting sequence is then sorted in non-decreasing order. Finally, the signs are flipped back to their
original value. The correctness of the procedure follows from Lemmas 8 and 9. Moreover, the procedure
requires three calls to the basic algorithm.
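A small Python illustration of the sign-flipping trick (assuming numeric keys):

def sort_non_increasing(seq, sort_non_decreasing=sorted):
    # Flip signs, sort in non-decreasing order, flip the signs back.
    flipped = [-x for x in seq]
    flipped = sort_non_decreasing(flipped)
    return [-x for x in flipped]

assert sort_non_increasing([3, 1, 4, 1, 5]) == [5, 4, 3, 1, 1]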
Consider the task of sorting a collection of 2mp memory rows, with m as above. Having partitioned the
input into two subgroups of m consecutive memory rows each, we use the basic algorithm to sort each group.
Once this is done, we complete the sorting using procedure MERGE TWO GROUPS. Thus, we have the
following result.
Theorem 4 The task of sorting 2mp, (2 ≤ m ≤ p^{1/2}), elements stored in 2m memory rows can be performed
in five calls to the basic algorithm and O(m) time for data movement operations not involving sorting. 2
5 An Efficient Multiway Merge Algorithm
Consider a collection A = <A_1, A_2, ..., A_m> of m, (2 ≤ m ≤ p^{1/2}), sorted sequences, each of size p^{i/2} for some i ≥ 2.
We assume that A is stored, top-down, in the order A_1, A_2, ..., A_m, in mp^{(i-2)/2} consecutive memory
rows. The multiway merge problem is to sort these sequences in row-major order. The goal of this section is
to propose an efficient algorithm MULTIWAY MERGE for the multiway merge problem, and to show how
it can be implemented on our architecture.
procedure MULTIWAY MERGE(A; m; i);
{Input: a collection A = <A_1, A_2, ..., A_m> of m sorted sequences, each of size p^{i/2};
Output: the resulting sorted sequence stored in row-major order in mp^{(i-2)/2}
contiguous memory rows.}
Step 1. Select a sample S of size mp^{(i-2)/2} from A by retaining every p-th element in each
sequence A_j, (1 ≤ j ≤ m), and move S to its own ⌈mp^{(i-4)/2}⌉ memory rows, as
discussed below;
Step 2. If i = 3 then sort S by one call to the sorting device
else if i = 4 then sort S by one call to the basic algorithm
else
{recursively multiway merge S}
endif
Let s_1 ≤ s_2 ≤ ... ≤ s_{mp^{(i-2)/2}} be the sorted version of S;
Step 3. Partition A into p^{(i-2)/2} buckets B_1, B_2, ..., B_{p^{(i-2)/2}}, each containing at most 2mp elements,
as discussed below, and move the elements of A to their buckets without memory access conflicts;
Step 4. Sort all the buckets individually using the basic algorithm and procedure MERGE TWO GROUPS;
Step 5. Coalesce the sorted buckets into the desired sorted sequence.
2 Notice that if i = 3 the sample S will be stored in one memory row.
The remainder of this section is devoted to a detailed implementation of this procedure on our architecture
5.1 Implementing Step 1 and Step 2
For convenience, we view A as a matrix of size mp^{(i-2)/2} × p, with the t-th element of memory row j being
denoted by A[j, t]. The element A[j, p] is termed the leader of memory row j.
The goal of Step 1 is to extract a sample S of A by retaining the leader s of every memory row in A,
along with the identity k of the subsequence A k to which the leader belongs. In this context, k is referred to
as the sequence index of s. Two disjoint groups of ⌈mp^{(i-4)/2}⌉ consecutive memory rows each are set aside to
store the sample S and the corresponding set I of sequence indices. In the remainder of this subsection, we
view the memory rows allocated to S and I as two matrices of size ⌈mp^{(i-4)/2}⌉ × p. The intention is that at the
end of Step 1, S[x, y] and I[x, y] store the ((x-1)p + y)-th leader of A and its sequence index, respectively.
To see how Step 1 can be implemented without memory access conflicts, notice that in each memory row
the leader to be extracted is stored in memory module M_p. For a generic memory row j, the CU interchanges
temporarily the elements A[j, p] and A[j, d(j)], where d(j) = 1 + (j - 1) mod p. (This interchange will be
undone at the end of Step 1.) Next, ⌈mp^{(i-4)/2}⌉ parallel read operations are performed, each followed by two
parallel write operations. The j-th parallel read operation picks up the ((j-1)p + k)-th word of memory
module M_k, (1 ≤ k ≤ p), and these p elements are stored in the j-th memory row allocated to S. The second
parallel write operation stores the sequence indices of these p elements in the j-th memory row allocated to
I. Thus, Step 1 can be implemented in O(mp^{(i-2)/2}) time for data movement and no calls to the sorting device.
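The sampling itself is easy to picture; the Python sketch below is our illustration (A is represented as a list of rows and seq_of_row, a name of ours, gives the sequence index of each row): it retains the leader A[j][p-1] of every row together with its sequence index and lays both out in rows of width p.

def extract_sample(A, seq_of_row, p):
    """A: list of memory rows (lists of length p), grouped by sequence.
    seq_of_row[j]: index k of the sequence A_k that row j belongs to.
    Returns the sample matrix S and the index matrix I, both with rows of width p."""
    leaders = [row[p - 1] for row in A]          # every p-th element of each sequence
    indices = [seq_of_row[j] for j in range(len(A))]
    S = [leaders[i:i + p] for i in range(0, len(leaders), p)]
    I = [indices[i:i + p] for i in range(0, len(indices), p)]
    return S, I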
The sampling process continues, recursively, until a level is reached where procedure MULTIWAY MERGE
is invoked with either i = 3, in which case the corresponding sample set is stored in one memory row and
will be sorted in one call to the sorting device, or with i = 4, in which case the sample set is stored in m
memory rows, and will be sorted in one call to the basic algorithm. Since the operation of sorting one row
is direct, we only discuss the way the basic algorithm operates in this context.
Conceptually, the process of sorting the samples benefits from being viewed as one of sorting the concatenation
s|k, where s is a sample element and k its sequence index. Recall that, as described in Section
2, our design assumes that the sorting device provides data paths of size w + 1:5 log p from its inputs to
its outputs. This implies that Steps 1, 3, 5, 7, and 9 of the extended Columnsort can be executed directly.
To sort a row r of S and the corresponding row r of I, the CU loads, in two parallel read operations, the
element field and the short auxiliary field of data register R j , (1 - j - p), with S[r; j] and I[r; j], and the
long auxiliary field with 0.
Let s r;j and k r;j be the element and its sequence index stored in register R j and let s r;j jk r;j denote their
concatenation. Next, the contents of the data registers are supplied as input to the sorting device. Let
received by R j after sorting, with s r;j 0 and k r;j 0 stored, respectively in the element and short
auxiliary field of R j . In two parallel write operations, the CU stores the element field and the short auxiliary
field of each register R j , (1 - j - p), into S[r; j] and I[r; j], respectively.
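In plain Python the effect of sorting a row of S together with the corresponding row of I on the concatenation s|k can be imitated as follows (our sketch; the tuple (s, k) plays the role of the concatenated field):

def sort_row_with_indices(s_row, i_row):
    # Sort the pairs (sample element, sequence index) as one key,
    # exactly as if the two fields were concatenated into s|k.
    pairs = sorted(zip(s_row, i_row))
    s_sorted = [s for s, _ in pairs]
    i_sorted = [k for _, k in pairs]
    return s_sorted, i_sorted

assert sort_row_with_indices([5, 2, 5], [1, 3, 0]) == ([2, 5, 5], [3, 0, 1])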
Steps 2, 4, 6, and 8 of the basic algorithm perform permutations. The implementation of Steps 6 and
8 does not involve sorting. In this case, the data movement involving the sample elements and that of the
corresponding sequence indices will be performed in two companion phases. Specifically, viewing the sample
set S and its corresponding sequence index set I as two matrices, the same permutation is performed on S
and I. Steps 2 and 4 of the basic algorithm involve both data movement operations and sorting. The data
movement operations in these steps are similar to those in Steps 6 and 8 and will not be detailed any further.
Recall that the sorting operations in Steps 2 and 4 of the basic algorithm are performed on the concatenation
of the two auxiliary fields storing the relative row number and the column number of the element. Hence, we
perform two companion sorting phases, one for permuting the sample elements and the other for permuting
sequence indices. Clearly, this can be implemented with the same time complexity.
It is easy to confirm that at the end of Step 2 of procedure MULTIWAY MERGE the sample set S is
sorted in row-major order. Furthermore, viewed as matrices, I[x; y] is the sequence index of the sample
element S[x; y]. Let the sorted version of S be
Equation (8) will be used in Step 3 to partition the elements of A into buckets. In order to do so, the leader
of each row in A needs to learn its rank in (8).
Our next goal is to associate with every memory row in A the rank
in S. This task will be carried out in two stages. In the first stage, using the sequence index and the rank
of s in S the CU assigns to s a row number row(s) in A. For every s in S, row(s) is either the exact
row number from which s was extracted in Step 1 or, in case the leaders of several rows are equal, row(s)
achieves a possible reassignment of leaders to rows. The details of the first stage are spelled out in procedure
ASSIGN ROW NUMBERS presented below. For convenience, we use the matrix representation of S and
I. These operations can be easily implemented using the addresses of words corresponding to S[x; y] and
Initially, I contains the sequence indices of samples in S. When the procedure terminates, I[x; y]
contains row(s) corresponding to
procedure ASSIGN ROW NUMBERS
begin
do
r k := the row number of the first memory row storing the sequence A k
endfor
2 e do
for do
endfor
endfor
In the second stage, the CU assigns the rank with the memory row row(s)
contained in I[x; y]. The operations performed on the matrix representations of S and I can be easily implemented
using the addresses of words corresponding to S[x; y] and I[x; y]. Since only read/write operations
are used in the procedure described, the total time spent on these operations is bounded by O(mp i\Gamma2
5.2 Implementing Step 3 and Step 4
Once the rank of each leader in A is known, we are ready to partition A into buckets. Our first objective is
to construct a collection of buckets such that the following conditions are satisfied:
(b1) every element of A belongs to exactly one bucket;
(b2) no bucket contains more than 2mp elements;
(b3) for every i and j, (1 ≤ i < j ≤ p^{(i-2)/2}), no element in B_i is strictly larger than any element in B_j.
Before presenting our bucket partitioning scheme, we need a few definitions. Let s_1 ≤ s_2 ≤ ... ≤
s_{mp^{(i-2)/2}} be as in (8). The memory row with leader s_b is said to be regular with respect to bucket B_j,
(j = ⌈b/m⌉). (9)
Notice that equation (9) guarantees that every memory row in A is regular with respect to exactly one bucket
and that the identity of this bucket can be determined by the CU in constant time. Conversely, with respect
to each bucket there are exactly m regular memory rows.
A memory row r with leader s b in some sequence A k , (1 - k - m), is termed special with respect to
bucket B t if, with s a standing for the leader of the preceding memory row in A k , if any, we have
a
Let the memory rows with leaders s a and s b be regular with respect to buckets respectively,
such that j. It is very important to note that equation (10) implies that the memory row whose leader
is s b is special with respect to all the buckets
Conceptually, our bucket partitioning scheme consists of two stages. In the first stage, by associating all
regular and special rows with respect to a generic bucket
2 ), we obtain a set C j of candidate
elements . In the second stage, we assign the elements of A to buckets in such a way that the
actual elements assigned to bucket B j form a subset of the candidate set C j .
Specifically, an element x of a memory row regular with respect to bucket B j is assigned to B j if one of
the conditions below is satisfied:
s (j \Gamma1)m
or
s (j \Gamma1)m - x - s jm whenever s (j
The elements of A that have been assigned to a bucket by virtue of (11) or (12) are no longer eligible for
being assigned to buckets in the remainder of the assignment process.
Consider, further, an element x that was not assigned to the bucket with respect to which its memory
row is regular. Element x will be assigned to exactly one of the buckets with respect to which the memory
row containing x is special. Assume that the memory row containing x is special with respect to buckets
with be the smallest index, 1 - n - l(x), for which one of
the equations (11) or (12) holds with j n in place of j. Now, x is assigned to bucket B jn . The next result
shows that the buckets we just defined satisfy the conditions (b1)-(b3).
Lemma 10 The buckets defined above satisfy the conditions (b1), (b2), and (b3).
Proof. Clearly, our assignment scheme guarantees that every element of A gets assigned to some bucket
and that no element of A gets assigned to more than one bucket. Thus, condition (b1) is verified.
Further, notice that by (9) and (10), combined, for every j, (1 ≤ j ≤ p^{(i-2)/2}), the candidate set C_j
with respect to bucket B_j contains at most 2m memory rows, and, therefore, at most 2mp elements of A.
Moreover, as indicated, the elements actually assigned to bucket B_j are a subset of C_j, proving that (b2) is
satisfied.
Finally, equations (11) and (12) guarantee that if an element x belongs to some bucket B_j then it cannot
be strictly larger than any element in a bucket B_k with k > j. Thus, condition (b3) holds as well. 2
It is worth noting that the preceding definition of buckets works perfectly well even if all the input elements
are identical. In fact, if all elements are distinct, one can define buckets in a simpler way. Moreover, in the
case of distinct elements, Steps 1-3 of procedure MULTIWAY MERGE can be further simplified.
We now present the implementation details of the assignment of elements to buckets. Write s_0 = -∞,
and denote, for every j, (1 ≤ j ≤ p^{(i-2)/2}), the ordered pair (s_{(j-1)m}, s_{jm}) as the j-th bounding pair. Notice
that equations (11) and (12) amount to testing whether a given element lies between a bounding pair.
By (b2), no bucket contains more than 2mp elements from A. This motivates us to set aside 2m memory
rows for each bucket B j . Out of these, we allocate the first m memory rows to elements assigned to B j coming
from regular memory rows with respect to B j ; we allocate the last m memory rows to elements assigned
to bucket B j that reside in special memory rows with respect to B j . In addition, we find it convenient to
initialize the contents of the 2m memory rows allocated to B j to all +1's.
It is important to note that the regular memory rows with respect to a bucket B j are naturally ordered
from 1 to m by the order of the corresponding leaders in S. To clarify this last point, recall that by (9) the
leaders belonging to bucket B j are
s (j \Gamma1)m+1 ; s (j
Accordingly, the memory row whose leader is s (j \Gamma1)m+1 is the first regular row with respect to B j , the
memory row whose leader is s (j \Gamma1)m+2 is the second regular row with respect to B j , and so on. Similarly,
the fact that each sequence A k is sorted guarantees that it may contain at most one special memory row
with respect to bucket B j . Now, in case such a special row exists it will be termed the k-th special memory
row with respect to B j , to distinguish it from the others.
In order to move the elements to their buckets, the CU scans the memory rows in A one by one. Suppose
that the current memory row being scanned is row r in some sequence A k . We assume that the leader of
row r is s b and that the leader of row r \Gamma 1 is s a . Using equation (9), the CU establishes that row r is
regular with respect to bucket B_j, where j = ⌈b/m⌉; similarly, the CU establishes that the previous memory row is regular
with respect to bucket B_{j'}, where j' = ⌈a/m⌉.
In case row r is the first row of A_k, j' is set to 1.
Next, the elements in memory row r are read into the element fields of the data registers the CU broadcasts
to these registers the bounding pair (s (j \Gamma1)m ; s jm ). Using compare-and-set, each register stores in the short
auxiliary field a 1 if the corresponding element is assigned to bucket B j by virtue of (11) or (12) and a 0
otherwise. We say that an element x in some data register is marked if the value in the short auxiliary field
is otherwise, x is unmarked.
Clearly, every element x that is marked at the end of this first broadcast has been assigned to bucket
. In a parallel write operation, the CU copies all the marked elements to the corresponding words of the
row allocated to bucket B j . Once this is done, using compare-and-set, all
the marked elements in the data registers are set to +1 and the short auxiliary fields are cleared.
Further, the CU broadcasts to the data registers, in increasing order, the bounding pairs of the buckets
us follow the processing specific to bucket B j 0 . Having received the bounding pair
data register determines whether the value x stored in its element field satisfies (11)
or (12) with j 0 in place of j and marks x accordingly. In a parallel write operation, the CU copies all the
marked elements to the corresponding words of the next available memory row allocated to bucket B j . Next,
using compare-and-set all the marked elements in the data registers are set to +1, and the short auxiliary
fields are cleared. The same process is then repeated for all the remaining buckets with respect to which row
r is special.
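The marking step can be pictured as follows (a Python illustration of ours; INF stands for the +∞ filler, and the membership test s_{(j-1)m} < x ≤ s_{jm} is our simplified reading of conditions (11) and (12), ignoring the boundary cases they handle):

INF = float('inf')

def mark_and_extract(row, lo, hi):
    """row: contents of the p data registers; (lo, hi): the bounding pair of bucket B_j.
    Returns the words written to B_j's row (marked elements stay in their columns,
    unmarked positions keep the +INF the bucket rows were initialized with) and the
    registers after every marked element has been replaced by +INF."""
    marked = [lo < x <= hi for x in row]                     # compare-and-set marking
    bucket_row = [x if m else INF for x, m in zip(row, marked)]
    remaining = [INF if m else x for x, m in zip(row, marked)]
    return bucket_row, remaining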
The reader will not fail to note that when the processing of row r is complete, each of its elements has
been moved to the bucket to which it has been assigned. Moreover, by (9) and (10) there are, altogether, at
most mp^{(i-2)/2} regular rows and at most mp^{(i-2)/2} special rows, and so the total time involved in assigning the
elements of A to buckets is bounded by O(mp^{(i-2)/2}), with no calls to the sorting device. In summary, Step 3
can be implemented in O(mp^{(i-2)/2}) time for data movement and no calls to the sorting device.
In Step 4, the buckets are sorted independently. If a bucket has no more than p^{1/2} memory rows, it
can be sorted in one call to the basic algorithm. Otherwise, the bucket is partitioned into two halves,
each sorted in one call to the basic algorithm. Finally, the two sorted halves are merged using procedure
MERGE TWO GROUPS. By Theorem 4, the task of sorting all the buckets individually can be performed
in O(mp^{(i-2)/2}) calls to the sorting device and in O(mp^{(i-2)/2}) time for data movement not involving sorting.
5.3 Implementing Step 5
To motivate the need for the processing specific to Step 5, we note that after sorting each bucket individually
in Step 4, there may be a number of +∞'s in each bucket. We refer to such elements as empty; memory
rows consisting entirely of empty elements will be termed empty rows. A memory row is termed impure if it
is partly empty. It is clear that each bucket may have at most one impure row. A memory row that contains
no empty elements is referred to as pure.
The task of coalescing the non-empty elements in the buckets into mp^{(i-2)/2} consecutive memory rows will
be referred to as compaction. For easy discussion, we assume that all sorted buckets are stored in consecutive
rows. That is, the non-empty rows of B 2 follow the non-empty rows of B 1 , the non-empty rows of B 3 follow
the non-empty rows of B 2 , and so on, assuming that all empty rows have been removed. The compaction
process consists of three phases.
Phase 1: Let C be the row sequence obtained by concatenating the non-empty rows of the B_j's obtained in Step 4
of MULTIWAY MERGE in the increasing order of their indices. We partition sequence C into subsequences
C_1, C_2, ..., C_x such that each C_j contains p^{1/2} consecutive rows of C, except the last subsequence C_x, which
may contain fewer rows. Clearly, x ≤ 2mp^{(i-3)/2}. We use the basic algorithm to sort these subsequences
independently. Let the sorted subsequence corresponding to C_i be C'_i, with empty rows eliminated from future
consideration. Let D be the row sequence obtained by concatenating the rows of the C'_i's in the increasing order
of their indices. We partition sequence D into subsequences D_1, D_2, ..., D_y such that each D_j contains p^{1/2}
consecutive rows of D, except the last subsequence D_y, which may contain fewer rows. We then use the
basic algorithm to sort these subsequences independently. Let the sorted subsequence corresponding to D_i
be D'_i, with empty rows eliminated. Let E be the row sequence obtained by concatenating the rows of the D'_i's in
the increasing order of their indices.
Lemma 11 The preceding row of every impure row, except the last row, of E is a pure row.
Proof. We notice the following fact: except for the last row of D, every row of D either contains at least p^{1/2}
non-empty elements, or, if it contains fewer than p^{1/2} non-empty elements, then its preceding row must be a
pure row. This is because each row of C contains at least one non-empty element. An impure row of
D can be generated under one of two conditions: (a) if C_j contains fewer than p non-empty elements, then
C'_j contains only one row, an impure row, with its non-empty elements coming from the p^{1/2} impure rows of C_j;
and (b) if C_j contains more than p non-empty elements, then C'_j contains only one impure row, and its
preceding row is a pure row. The lemma directly follows from this fact. 2
Phase 2: This phase computes a set of parameters, which will be used in the next phase. Let w be the
total number of (non-empty) rows in E. Assume that the rows of E are located from row 1 through row
w. For every j, (1
stand for the number of non-empty elements in the impure
memory row c j . The first subtask of Phase 2 is to determine
i\Gamma2. Consider a generic impure
row c j . To determine n j the CU reads the entire row c j into the data registers R 1
for every k, (1 - k - p), the c j -th word of memory module M k is read into register R k . The long
auxiliary field of data register R k is set to k. By using the compare-and-set feature, the CU instructs each
register R k to reset this auxiliary field to \Gamma1 if the element it holds is +1 (i.e. empty). Next, the data
registers are loaded into the sorting device and sorted in increasing order of their long auxiliary fields. It
is easy to confirm that, after sorting, the largest such value k j is precisely the position of the rightmost
non-empty element in memory row c j . Therefore, the CU sets . Consequently, the task
of computing all the numbers
calls to the sorting device and O(p i\Gamma2
read/write operations and does not involve sorting. Once the numbers are available, the CU
computes the prefix sums oe
This, of course, involves only additions and can be performed by the CU in O(mp i\Gamma2
call
to the sorting device. Let
e.
mod g. Define
Phase 3: Construct row group g, of consecutive rows as follows: if ff k\Gamma2 ? 0 then row
is the starting row of E k , else row k(p 1
is the starting row of E k ; the ending row of E k ,
is row k(p 1
and the ending row of E g is row w. Note that E k and E k+1 may share at most two
rows. By Lemma 11, for contains is at least (p+1)p2elements, and the last two rows
of contains at least p elements. For each g, perform the following operations: (a) sort
using the basic algorithm; (b) replace the fi smallest elements by +1's; (c) sort using the basic
algorithms; and (d) if ff k ? 0 and k ! g, eliminate the last row. For E g , perform (a), (b) and (c) only. Let
k be the row group obtained from E k , and let F be the row sequence obtained by concatenating rows of
's in the increasing order of their indices. F is the compaction of C.
Setting selected elements in a row to +∞ can be done in O(1) time by a compare-and-set operation.
For example, setting the leftmost s elements of a row to +∞ can be carried out as follows: read the row
into the R_i's, then the CU broadcasts s to all R_i's, and each R_i compares i with s and sets its content to +∞ if i ≤ s;
then the modified row is written back to the memory array.
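For instance (a Python rendering of the compare-and-set just described; ours, not the paper's):

INF = float('inf')

def blank_leftmost(row, s):
    # Register R_i keeps its element only if its position i exceeds s.
    return [INF if i + 1 <= s else x for i, x in enumerate(row)]

assert blank_leftmost([7, 8, 9, 10], 2) == [INF, INF, 9, 10]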
Based on Lemma 10, it is easy to verify that the elements in F are in sorted order after Step 5, which can be
implemented in O(mp^{(i-2)/2}) calls to the sorting device and O(mp^{(i-2)/2}) time for data movement not involving sorting.
5.4 Complexity Analysis
With the correctness of our multiway merge algorithm being obvious, we now turn to the complexity.
Specifically, we are interested in assessing the total amount of data movement, not involving sorting, that
is required by procedure MULTIWAY MERGE. Specifically, let J(mp i
stand for the time spent on data
movement tasks that do not involve the use of the sorting device. If takes O(1) time. In case
takes O(m) time (refer to Theorem 3). Finally, if i ? 4, our previous discussion shows that
each of Step 1, Step 3, Step 4, and Step 5 require at most O(mp i\Gamma2
recursively,
J(mp i\Gamma2
time. Thus, we obtain the following recurrence system:
It is easy to confirm that, for p ≥ 4, the solution of the above recurrence satisfies J(mp^{i/2}) = O(mp^{(i-2)/2}).
A similar analysis, that is not repeated, shows that the total number of calls to a sorting device of I/O size
p performed by procedure MULTIWAY MERGE for merging m, (2 ≤ m ≤ p^{1/2}), sequences, each of
size p^{i/2}, is bounded by O(mp^{(i-2)/2}).
To summarize our discussion we state the following important result.
Theorem 5 Procedure MULTIWAY MERGE performs the task of merging m, (2 ≤ m ≤ p^{1/2}), sorted sequences,
each of size p^{i/2}, in our architecture, using O(mp^{(i-2)/2}) calls to the sorting device of I/O size p, and
O(mp^{(i-2)/2}) time for data movement not involving sorting.
6 The Sorting Algorithm
With the basic algorithm and the multiway merge at our disposal, we are in a position to present the details
of our sorting algorithm using a sorting device of fixed I/O size p. The input is a set Σ of N items stored, as
evenly as possible, in p memory modules. Dummy elements of value +∞ are added, if necessary, to ensure
that all memory modules contain ⌈N/p⌉ elements; these dummy elements will be removed after sorting. Our
goal is to show that using our architecture-algorithm combination the input can be sorted in O((N log N)/log p)
time and O(N) data space. We assume that p ≥ 16, which along with (1) implies that
log
Equation (13) will be important in the analysis of this section, as our discussion will focus on the case where
a sorting network of I/O size p and depth O(log 2 p) is used as the sorting device 3 . A natural candidate for
such a network is Batcher's classic bitonic sorting network [1] that we shall tacitly assume.
Recall that by virtue of (1) we have, for some t; t - 4,
In turn, equation (14) guarantees that
log N
log p 1-
At this point we note that (14) and (15), combined, guarantee that
log
2: (16)
Write
and observe that by (14),
For reasons that will become clear later, we pad \Sigma with an appropriate number of +1 elements in such a
way that, with N 0 standing for the length of the resulting sequence \Sigma 0 , we have
It is important to note that (14), (17), and (19), combined, guarantee that
suggesting that the number of memory rows used by the sorting algorithm is bounded by O( N
will show that this is, indeed, the case.
The sorting algorithm consists of iterations. In order to guarantee an overall running time
of O( N log N
log p ), we ensure that each iteration can be performed in O( N
As we will see shortly, the
sorting network will be used in the following three contexts:
(i) to sort, individually, M memory rows;
3 As it turns out, the same complexity claim holds if the sorting device used is, instead, a p-sorter.
(ii) to sort, individually, M groups, each consisting of m consecutive memory rows, where m ≤ p^{1/2};
(iii) to sort, individually, M groups, each consisting of 2m consecutive memory rows, where m ≤ p^{1/2}.
For an efficient implementation of (i) we use simple pipelining: the M memory rows to sort are input to
the sorting network, one after the other. After an initial overhead of O(log^2 p) time, each subsequent time
unit produces a sorted memory row. Clearly, the total sorting time is bounded by O(log^2 p + M).
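The arithmetic behind this bound is just pipeline latency; the toy model below (ours, not the paper's) makes it explicit.

def pipelined_sort_time(num_rows, depth):
    # A sorting network of depth `depth` accepts one new row per time unit;
    # the first sorted row emerges after `depth` units and one more row
    # emerges every unit thereafter: O(depth + M) overall.
    return depth + max(num_rows - 1, 0)

# Example: M = 1000 rows through a network of depth log^2(p) = 100.
assert pipelined_sort_time(1000, 100) == 1099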
Our efficient implementation of (ii) uses interleaved pipelining. Let G GM be the groups we wish
to sort. In the interleaved pipelining we begin by running Step 1 of the basic algorithm in pipelined fashion
on group G 1 , then on group G 2 , and on so. In other words, Step 1 of the basic algorithm is performed on
all groups using simple pipelining. Then, in a perfectly similar fashion, simple pipelining is used to carry
out Step 2 of the basic algorithm on all the groups G_1, G_2, ..., G_M. The same strategy is used with all the
remaining steps of the basic algorithm that require the use of the sorting device. Consequently, the total
amount of time needed to sort all the groups using interleaved pipelining is bounded by O(log^2 p + Mm).
An efficient implementation of (iii) relies on extended interleaved pipelining. Let G GM be the
groups we want to sort. Recall that Theorem 4 states that sorting a group of 2m consecutive memory rows
requires five calls to the basic algorithm. The extended interleaved pipelining consists of five interleaved
pipelining steps, each corresponding to one of the five calls to the basic algorithm. Thus, the task of sorting
all the groups can be performed in O(log^2 p + Mm) time. We now discuss each of the iterations of our sorting
algorithm in more detail.
Iteration 1
The input is partitioned into N'/p^{3/2} groups, each involving p^{1/2} memory rows. By using interleaved pipelining
with m = p^{1/2}, each such group is sorted individually. As discussed above, the running time of Iteration 1 is
bounded by O(log^2 p + N'/p).
Iteration
1. The input to Iteration k is a collection of N 0
ksorted sequences each of size p
stored in
consecutive memory rows. The output of iteration k is a collection of N 0
+1sorted sequences, each of
size p
stored in p
consecutive memory rows.
Having partitioned these sorted sequences into N 0
consecutive
sequences each, we proceed to sort each group G(k; j) by the call MULTIWAY MERGE(S 1 (k;
We refer to the call MULTIWAY MERGE(S 1 (k;
call of the first level. Observe that, since there are N
there will be altogether N
WAY MERGE calls of the first, one for each group. In Step 1 of a MULTIWAY MERGE call of the first
level we extract a sample S 2 (k; j) of S 1 (k; consisting of p 1
sorted sequences, each of size p
stored
in
consecutive memory rows. In turn, for every j, (1 - j - N
the sample S 2 (k; j) is sorted by
invoking MULTIWAY MERGE(S 2 (k;
which is referred to as a MULTIWAY MERGE call of
the second level. Step 1 of a MULTIWAY MERGE call of the second level extracts a sample S 3 (k; j) of
For every u, 1 -
2 c, a MULTIWAY MERGE call of level u is of the form MULTIWAY MERGE
In Step 2 of the call MULTIWAY MERGE(S u (k;
a MULTIWAY MERGE call of level u+1, which is of the form MULTIWAY MERGE(S u+1 (k;
Let r k;u denote the total number of rows in all samples S u (k; j) of level u. Clearly, we have r
p u . By
(13), r k;u - qp, and r when t is even and 2. The recursive calls to MULTIWAY MERGE
end at level b i k \Gamma1c, the last call being of the form
Note that depending on whether or not i k is odd.
We proceed to demonstrate that for takes O(r k;1 ) time. We will do this by
showing that the total time required by each of the five steps of the MULTIWAY MERGE calls of each level
u is bounded by O(r k;u ).
Consider a particular level u. Step 1 of all MULTIWAY MERGE calls of level u is performed on the
samples S u (k; j), in increasing order of j, so that all the samples S u+1 (k; are extracted one after the other.
Clearly, the total time for these operations is O(r k;u ).
We perform Step 3 of all the MULTIWAY MERGE calls of level u, in increasing order of j, to partition
into buckets each of the samples S u (k; using the corresponding S u+1 (k; j). By Lemma 10, each sample
buckets, and no bucket contains more than 2p 3
elements. As discussed
in Subsection 5.2, the task of moving all the elements of each S u (k; j) to their buckets can be carried out in
O(p
using the sorting device. Thus, the total time for partitioning the samples S u (k;
in all the MULTIWAY MERGE calls of level u is bounded by O( N 0
Step 4 of a MULTIWAY MERGE call of level u sorts the buckets (involving the elements of S u (k; j))
obtained in Step 3. We perform Step 4 of all MULTIWAY MERGE calls of level u in increasing order of
j, and use extended interleaved pipelining with
2 to sort all buckets of each S u (k; j). There are,
altogether, N 0
2u+1buckets in all the S u (k; j)'s. Thus, the total time for sorting all buckets is bounded by
and (d), the total time for sorting the buckets in all
MULTIWAY MERGE calls of level u is O(r k;u ).
Step 5 of a MULTIWAY MERGE call of level u has three phases. As discussed in Subsection 5.3, The
operations of Phase 1 and Phase 3 that involve the sorting device can be carried out using interleaved
pipelining. The operations of Phase 2 that involves the sorting device can be carried out using simple
pipelining. Clearly, the time complexity of Step 5 for all MULTIWAY MERGE calls of level u is bounded
by O(log
We now evaluate the time needed to perform Step 2 of all the MULTIWAY MERGE calls of level u. First,
consider the call of level b i k \Gamma1c, MULTIWAY MERGE(S b
1)). The sample
extracted in Step 1 of this call has p elements
is odd, we use simple pipelining to sort all the samples S b
(k;
we use interleaved pipelining with
2 to sort all the samples S b
(k;
In
either case, the time required is bounded by O( N 0
which is no more than O( N 0
Thus, the total time for Steps 1 through 5 of all the MULTIWAY MERGE calls of level b i k \Gamma1c) is no more
than O(r k;b
). Next, the time to perform Step 2 of all MULTIWAY MERGE calls of level u is inductively
derived as O(r k;u ) using our claim that the total time for Steps 1, 3, 4 and 5 of all MULTIWAY MERGE
calls of level u is no more than O(r k;u ), and hypothesis that Step 2 of all MULTIWAY MERGE calls of level
This, in turn, proves that the total time required for all the MULTIWAY MERGE calls
of level u is bounded by O(r k;u ).
Having shown that the time required for all the MULTIWAY MERGE calls of level u of Iteration k is
O(r k;u ), we conclude that the total time to perform Iteration k is O(r k;1 ), which is O( N
Iteration
2 the N input elements are sorted at the end of iterations. Assume that the algorithm does
not terminate in iterations. The input to Iteration t \Gamma 1 is a collection of
sorted sequences,
2 . Each such sequence is of size p t 2 , stored in p t\Gamma2
consecutive rows. To complete the
sorting, we need to merge these q sequences into the desired sorted sequence. This task is performed by the
call MULTIWAY MERGE(\Sigma t). The detailed implementation of MULTIWAY MERGE(\Sigma using a
sorting network as the sorting device and the analysis involved are almost the same as that of Iteration 2 to
Iteration different parameters are used. If the interleaved pipelining with
2 is used in
a step of MULTIWAY MERGE for iterations 2 to t \Gamma 2, then the corresponding step of MULTIWAY MERGE
for iteration uses the interleaved pipelining with Similarly, if the extended interleaved pipelining
with
2 is used in a step of MULTIWAY MERGE for iterations 2 to t \Gamma 2, then the corresponding
step of MULTIWAY MERGE for iteration uses the extended interleaved pipelining with q. The
MULTIWAY MERGE call of level b t\Gamma1c is MULTIWAY MERGE(S b
If t is odd, then 4. The recursion stops at
the (b t\Gamma1c)-th level. The sample set S b
obtained in Step 1 of the MULTIWAY MERGE call of
level b t\Gamma1c has qp 1
is odd, and it has qp elements if t is even.
Let r t\Gamma1;u be the total number of memory rows in S
. By a simple
induction, we conclude that the MULTIWAY MERGE call of level u, 1 - takes no more than
O(r t\Gamma1;u ) time. The running time of Iteration is the running time of the MULTIWAY MERGE call of
the first level, and it takes O(r t\Gamma1;1
We have shown that each of the iterations of MULTIWAY MERGE can be implemented with time
O( N
we conclude that the running time of our sorting algorithm is O
N log N
log p
. Since a p-sorter
can be abstracted as a sorting network of I/O size p and depth O(1), this time complexity stands if the sorting
device used is a p-sorter. The working data memory for each iteration is O(N ) simply because that the sample
size of an MULTIWAY MERGE call of level u is p times the sample size of an MULTIWAY MERGE call of
level u+1. Since the working data memory of one iteration can be reused by another iteration, the total data
memory required by our sorting algorithm remains to be O(N ). Summarizing all our previous discussions,
we have proved the main result of this work.
Theorem 6 Using our simple architecture, a set of N items stored in N/p memory rows can be sorted in
row-major order, without any memory access conflicts, in O((N log N)/log p) time and O(N) data space, by using
either a p-sorter or a sorting network of I/O size p and depth O(log^2 p) as the sorting device.
--R
Sorting n objects with a k-sorter
An optimal sorting algorithm on reconfigurable mesh
The Art of Computer Programming
Tight bounds on the complexity of parallel sorting
Introduction to Parallel Algorithms and Architectures: Arrays
Sorting in O(1) time on a reconfigurable mesh of size n
Sorting n numbers on n
How to sort N items using a sorting Network of fixed I/O size
--TR
--CTR
Classifying Matrices Separating Rows and Columns, IEEE Transactions on Parallel and Distributed Systems, v.15 n.7, p.654-665, July 2004
Giuseppe Campobello , Marco Russo, A scalable VLSI speed/area tunable sorting network, Journal of Systems Architecture: the EUROMICRO Journal, v.52 n.10, p.589-602, October 2006
Brian Grattan , Greg Stitt , Frank Vahid, Codesign-extended applications, Proceedings of the tenth international symposium on Hardware/software codesign, May 06-08, 2002, Estes Park, Colorado | sorting networks;VLSI;special-purpose architectures;columnsort;hardware-algorithms |
357777 | Tracing the lineage of view data in a warehousing environment. | We consider the view data lineageproblem in a warehousing environment: For a given data item in a materialized warehouse view, we want to identify the set of source data items that produced the view item. We formally define the lineage problem, develop lineage tracing algorithms for relational views with aggregation, and propose mechanisms for performing consistent lineage tracing in a multisource data warehousing environment. Our result can form the basis of a tool that allows analysts to browse warehouse data, select view tuples of interest, and then drill-through to examine the exact source tuples that produced the view tuples of interest. | Introduction
In a data warehousing system, materialized views over source data are defined, computed, and
stored in the warehouse to answer queries about the source data (which may be stored in distributed
and legacy systems) in an integrated and efficient way [Wid95]. Typically, on-line analytical
processing and mining (OLAP and OLAM) systems operate on the data warehouse, allowing
users to perform analysis and predictions [CD97, HCC98]. In many cases, not only are the views
themselves useful for analysis, but knowing the set of source data that produced specific pieces of
view information also can be useful. Given a data item in a materialized view, determining the
source data that produced it and the process by which it was produced is termed the data lineage
problem. Some applications of view data lineage are:
ffl OLAP and OLAM: Effective data analysis and mining needs facilities for data exploration
at different levels. The ability to select a portion of relevant view data and "drill-down"
to its origins can be very useful. In addition, an analyst may want to check the origins of
suspect or anomalous view data to verify the reliability of the sources or even repair the
source data.
ffl Scientific Databases: Scientists apply algorithms to commonly understood and accepted
source data to derive their own views and perform specific studies. As in OLAP, it can be
useful for the scientist to focus on specific view data, then explore how it was derived from
the original raw data.
This work was supported by Rome Laboratories under Air Force Contract F30602-96-1-0312, and by the
Advanced Research and Development Committee of the Community Management Staff as a project in the MDDS
Program.
ffl On-line Network Monitoring and Diagnosis Systems: From anomalous view data computed
by the diagnosis system, the network controller can use data lineage to identify the faulty
data within huge volumes of data dumped from the network monitors.
ffl Cleansed Data Feedback: Information centers download raw data from data sources and
"cleanse" the data by performing various transformations on it. Data lineage helps locate
the origins of data items, allowing the system to send reports about the cleansed data back
to their sources, and even link the cleansed items to the original items.
Materialized View Schema Evolution: In a data warehouse, users may be permitted to
change view definitions (e.g., add a column to a view) under certain circumstances. View
data lineage can help retrofit existing view contents to the new view definition without
recomputing the entire view.
ffl View Update Problem: Not surprisingly, tracing the origins of a given view data item
is related to the well-known view update problem [BS81]. In Section 8.2, we discuss this
relationship, and show how lineage tracing can be used to help translate view updates into
corresponding base data updates.
In general, a view definition provides a mapping from the base data to the view data. Given a
state of the base data, we can compute the corresponding view according to the view definition.
However, determining the inverse mapping-from a view data item back to the base data that
produced it-is not as straightforward. To determine the inverse mapping accurately, we not only
need the view definition, but we also need the base data and some additional information.
The warehousing environment introduces certain challenges to the lineage tracing problem,
such as how to trace lineage when the base data is distributed among multiple sources, and what
to do if the sources are inaccessible or not consistent with the warehouse views. At the same time,
the warehousing environment can help the lineage tracing process by providing facilities to merge
data from multiple sources, and to store auxiliary information in the warehouse in a consistent
fashion.
In this paper, we focus on the lineage problem for relational Select-Project-Join views with
aggregation (ASPJ views) in a data warehousing environment. Our results extend to additional
relational operators as we show in [CWW98]. In summary, we:
ffl Formulate the view data lineage problem. We give a declarative definition of tuple derivations
for relational operators, and inductively define view tuple derivations based on the
tuple derivations for the operators (Section 4).
ffl Develop derivation tracing algorithms, including proofs of their correctness (Sections 5
and 6).
ffl Discuss issues of derivation tracing in a warehousing environment, and show how to trace
tuple derivations for views defined on distributed, legacy sources consistently and efficiently
(Section 7).
We first discuss related work in Section 2. We then motivate the lineage problem using detailed
examples in Section 3. After defining the lineage problem and presenting our solutions as summarized
above, in Section 8 we revisit some related issues (e.g., the view update problem) in detail.
Conclusions and future work are covered in Section 9. All proofs are provided in the Appendix.
Related Work
There has been a great deal of work in view maintenance and related problems in data warehous-
ing, but to the best of our knowledge the lineage problem has not been addressed. Overviews
of research directions and results in data warehousing can be found in [CD97, Wid95, WB97].
specifically covers view maintenance problems in data warehousing. Incremental view
maintenance algorithms have been presented for relational algebra views [QW91], for aggregation
[Qua96], and for recursive views [GMS93]. View "self-maintainability" issues are addressed
in [QGMW96]. Warehouse view consistency is studied in [ZGMW96, ZWGM97], to ensure that
views in the warehouse are consistent with each other and reflect consistent states of the sources.
All of these papers consider computing warehouse views but do not address the reverse problem
of view data lineage.
OLAP systems, usually sitting on top of a data warehouse, allow users to perform analysis
and make predictions based on the warehouse view information [Col96, HCC98]. The data
cube is a popular OLAP structure that facilitates multi-dimensional aggregation over source
data [GBLP96]. Cube "rolling-up" and "drilling-down" enable the user to browse the view data
at any level and any dimension of the aggregation [HRU96, MQM97]. However, data cubes are
based on a restricted form of relational views, and usually only allow drilling down within the
warehouse, not to the original data sources.
Metadata for warehouse views can be maintained to record lineage information about a particular
view column [CM89, HQGW93]. However, this approach only provides schema-level lineage
tracing, while many applications require lineage at a finer (instance-level) granularity. Some
scientific databases use tuple-level annotations to keep track of lineage [HQGW93], which can
introduce high storage overhead in warehousing applications.
introduces "weak inversion" to compute fine-grained data lineage. However, not all
views have an inverse or weak inverse. Also, the system in [WS97] requires users to provide a
view's inverse function in order to compute lineage, which we feel is not always practical. Our
algorithms trace tuple-level lineage automatically for the user and maintain the necessary auxiliary
information to ensure view invertibility.
The problem of reconstructing base data from summary data is studied in [FJS97]. Their
statistical approach estimates the base data using only the summary data and certain constraints;
it does not guarantee accurate lineage tracing. In this paper, we focus on accurate lineage tracing
with the base data available either in remote sources or stored locally in the warehouse.
The view update problem [DB78, Kel86] is to translate updates against a view into updates
against the relevant base tables, so that the updated base tables will derive the updated view.
View data lineage can be used to help solve the view update problem, employing a different
approach from previous techniques; see Section 8.
Finally, Datalog can perform top-down recursive rule-goal unification to provide proofs for a
goal proposition [Ull89]. The provided proofs find the supporting facts for the goal proposition,
and therefore also can be thought of as providing the proposition's lineage. However, we take an
approach that is very different from rule-goal unification; a detailed comparison is presented in
Section 8.
item id | item name | category (rows include binder and pencil in category stationery, shirt and pants in clothing, and pot in kitchenware)
Figure 1: item table
store id | store name | city | state (e.g., 004 | Macy's | New York City | NY)
Figure 2: store table
store id | item id | price | num sold
Figure 3: sales table
Motivating Examples
In this section, we provide examples that motivate the definition of data lineage and how lineage
tracing can be useful. Consider a data warehouse with retail store data over the following base
tables:
item(item id, item name, category)
store(store id, store name, city, state)
sales(store id, item id, price, num sold)
The item and store tables are self-explanatory. The sales table contains sales information,
including the number and price of each product sold by each store. Example table contents are
shown in Figures 1, 2, and 3.
Example 3.1 (Lineage of SPJ View) Suppose a sales department wants to study the selling
patterns of California stores. A materialized view Calif can be defined in the data warehouse for
the analysis. The SQL definition of the view is:
CREATE VIEW Calif AS
SELECT store.store-name, item.item-name, sales.num-sold
FROM store, item, sales
WHERE store.store-id = sales.store-id AND item.item-id = sales.item-id
AND store.state = 'CA'
The view definition also can be expressed using the relational algebra tree in Figure 4. The
materialized view for Calif over our sample data is shown in Figure 5.
The analyst browses the view table and is interested in the second tuple <Target, pencil,
3000>. He would like to see the relevant detailed information and asks question Q1: "Which base
data produced tuple <Target, pencil, 3000> in Calif?" Using the algorithms we present in
Section 5, we obtain the answer in Figure 6. The answer tells us that the Target store in Palo
Alto sold 3000 pencils at a price of 1 dollar each.
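To make the tracing concrete, the following Python sketch (our illustration with abbreviated table contents, not the tracing algorithm of Section 5) recovers the lineage of a Calif tuple by re-evaluating the join and selection conditions and keeping only the base tuples that agree with the view tuple.

def calif_lineage(stores, items, sales, view_tuple):
    """Each table is a list of dicts; view_tuple = (store_name, item_name, num_sold).
    Returns the store, item, and sales tuples that derive view_tuple."""
    s_name, i_name, n_sold = view_tuple
    st_out, it_out, sa_out = [], [], []
    for st in stores:
        for it in items:
            for sa in sales:
                if (st['store_id'] == sa['store_id'] and
                        it['item_id'] == sa['item_id'] and
                        st['state'] == 'CA' and              # the view's selection
                        st['store_name'] == s_name and       # agree with the view tuple
                        it['item_name'] == i_name and
                        sa['num_sold'] == n_sold):
                    st_out.append(st); it_out.append(it); sa_out.append(sa)
    return st_out, it_out, sa_out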
[Relational algebra tree for Calif: project store_name, item_name, num_sold over the selection and join of store, item, and sales]
Figure 4: View definition for Calif
store name item name num sold
Target binder 1000
Target pencil 3000
Target pants 600
Macy's shirt 1500
Macy's pants 600
Figure 5: Calif table
[Derivation tuples: the Target store tuple (Palo Alto, CA), the pencil item tuple (stationery), and the corresponding sales tuple (price 1, num sold 3000)]
Figure 6: Calif lineage for <Target, pencil, 3000>
Example 3.2 (Lineage of Aggregation View) Now let's consider another warehouse view
Clothing, for analyzing the total clothing sales of the large stores (which have sold more than
5000 clothing items).
CREATE VIEW Clothing AS
SELECT sum(num-sold) AS total
FROM item, store, sales
WHERE item.category = 'clothing' AND item.item-id = sales.item-id
AND store.store-id = sales.store-id
GROUP BY store-name
HAVING sum(num-sold) > 5000
The relational algebra definition of the view is shown in Figure 7. We extend relational algebra
with an aggregation operator, denoted α_{G, aggr(B)}, where G is a list of groupby attributes, and
aggr(B) abbreviates a list of aggregate functions over attributes. (Details are given in Section 4.1.)
The materialized view contains one tuple, <5400>, as shown in Figure 8.
The analyst may wish to learn more about the origins of this tuple, and asks question Q2:
"Which base data produced tuple !5400? in Clothing?" Not surprisingly, due to the more
complex view definition, this question is more difficult to answer than Q1. We develop the
appropriate algorithms in Section 6, and Figure 9 presents the answer. It lists all the branches of
Macy's, the clothing items they sell (but not other items), and the sales information. All of this
information is used to derive the tuple !5400? in Clothing.
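Intuitively, the lineage of <5400> consists of every base tuple of the group that produced it; a rough Python rendering (ours, not the algorithm of Section 6) is:

def clothing_lineage(stores, items, sales, total):
    """Returns the base tuples of the store group whose clothing sales sum to `total`."""
    # Group the joined clothing sales by store name.
    groups = {}
    for st in stores:
        for it in items:
            for sa in sales:
                if (st['store_id'] == sa['store_id'] and
                        it['item_id'] == sa['item_id'] and
                        it['category'] == 'clothing'):
                    groups.setdefault(st['store_name'], []).append((st, it, sa))
    lineage = []
    for name, rows in groups.items():
        if sum(sa['num_sold'] for _, _, sa in rows) == total:  # the group behind <total>
            lineage.extend(rows)
    return lineage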
Questions such as Q1 and Q2 ask about the base tuples that derive a view tuple. We call
these base tuples the derivation (or lineage) of the view tuple. In the next section, we formally
define the concept of derivation. Sections 5, 6, and 7 then present algorithms to compute view
tuple derivations.
[Relational algebra tree for Clothing: aggregate store_name, sum(num_sold) as total over the join of item, sales, and store, then select total > 5000 and project total]
Figure 7: View definition for Clothing
total
5400
Figure 8: Clothing table
[Derivation tuples: the Macy's store tuples (e.g., 004 | Macy's | New York City | NY), the clothing item tuples shirt and pants, and the corresponding sales tuples]
Figure 9: Clothing lineage for <5400>
4 View Tuple Derivations
In this section, we define the notion of a tuple derivation, which is the set of base relation tuples
that produce a given view tuple. Section 4.1 first introduces the views on which we focus in
this paper. Tuple derivations for operators and views are then defined in Sections 4.2 and 4.3,
respectively.
We assume that a table (or relation) R with schema R contains a set of tuples
with no duplicates. (Thus, we consider set semantics in this paper. We have adapted our work
to bag semantics as well; please see [CWW98].) A database D contains a list of base tables R_1, ..., R_m. A
view V is a virtual or materialized result of a query over the base tables in D.
The query (or the mapping from the base tables to the view table) is called the view definition,
denoted as v. We say that V = v(D).
4.1 Views
We consider a class of views defined over base relations using the relational algebra operators
selection (σ), projection (π), join (./), and aggregation (α). Our framework applies as well to set
union (∪), set intersection (∩), and set difference (−), however these operators are omitted in
this paper due to space constraints. Please see [CWW98] for details.
We use the standard relational semantics, included here for completeness:
Base case:
ffl Projection: -A
ig.
We consider the multi-way natural join
Thus, the grammar of our view definition language is as follows:
V ::= R | σ_C(V_1) | π_A(V_1) | V_1 ./ ... ./ V_m | α_{G, aggr(B)}(V_1)
where R is a base table, V_1, ..., V_m are views, C is a selection condition (any boolean expression)
on attributes of V_1, A is a projection attribute list from V_1, G is a groupby attribute list from V_1,
and aggr(B) abbreviates a list of aggregation functions to apply to attributes of V_1.
For convenience in formulation, when a view references the same relation more than once, we
consider each relation instance as a separate relation. For example, we treat the self-join R ./ R
as (R as R_1) ./ (R as R_2), and we consider R_1 and R_2 to be two tables in D. This approach
allows view definitions to be expressed using an algebra tree instead of a graph, while not limiting
the views we can handle.
Any view definition in our language can be expressed using a query tree, with base tables as
the leaf nodes and operators as inner nodes. Figures 4 and 7 are examples of query trees.
4.2 Tuple Derivations for Operators
To define the concept of derivation, we assume logically that the view contents are computed by
evaluating the view definition query tree bottom-up. Each operator in the tree generates its result
table based on the results of its children nodes, and passes it upwards. We begin by focusing on
each operator, defining derivations of the operator's result tuples based on its input tuples.
According to relational semantics, each operator generates its result tuple-by-tuple based on
its operand tables. Intuitively, given a tuple t in the result of operator Op, only a subset of the
input tuples produce t. We say that the tuples in this subset contribute to t, and we call the entire
subset the derivation of t. Input tuples not in t's derivation either contribute to nothing, or only
contribute to result tuples other than t.
Figure 10 illustrates the derivation of a view tuple. In the figure, operator Op is applied to
tables T 1 and T 2 , which may be base tables or temporary results from other operators. (In general,
we use R's to denote base tables and T 's to denote tables that may be base or derived.) Table T
is the operation result. Given tuple t in T , only subsets T
2 of T 1 and T 2 contribute to t.
called t's derivation. The formal definition of tuple derivation for an operator is given
next, followed by additional explanation.
Definition 4.1 (Tuple Derivation for an Operator) Let Op be any of our relational operators
(σ, π, ⋈, α) over tables T1, ..., Tm, and let T = Op(T1, ..., Tm) be the table that results
from applying Op to T1, ..., Tm. Given a tuple t ∈ T, we define t's derivation in T1, ..., Tm
according to Op to be Op⁻¹(t) = ⟨T1*, ..., Tm*⟩, where T1*, ..., Tm* are maximal subsets
of T1, ..., Tm such that:
Figure 10: Derivation of tuple t
(a) Op(T1*, ..., Tm*) = {t};
(b) ∀Ti*, ∀t* ∈ Ti*: Op(T1*, ..., {t*}, ..., Tm*) ≠ ∅.
Also, we say that Op⁻¹_{Ti}(t) = Ti* is t's derivation in Ti, and that each tuple t* in Ti* contributes to t,
for i = 1..m. The definition can be extended to represent the derivations of a set of tuples T ⊆ T:
Op⁻¹(T) = ∪_{t∈T} Op⁻¹(t),
where ∪ represents the multi-way union of relation lists.¹
In Definition 4.1, requirement (a) says that the derivation tuple sets (the Ti*'s) derive exactly
t. From relational semantics, we know that for any result tuple t, there must exist such tuple
sets. Requirement (b) says that each tuple in the derivation does in fact contribute something
to t. For example, for a selection view σ_C(R), requirement (b) ensures that base tuples that do not satisfy the
selection condition C, and therefore make no contribution to any view tuple, will not appear in any
view tuple's derivation. By defining the Ti*'s to be the maximal subsets that satisfy requirements
(a) and (b), we make sure that the derivation contains exactly all the tuples that contribute to
t. Thus, the derivation fully explains why a tuple exists in the view. Theorem 4.2 shows that
there is a unique derivation for any given view tuple. Recall that all proofs are provided in the
Appendix.
Theorem 4.2 (Derivation Uniqueness) Given t ∈ Op(T1, ..., Tm), i.e., t is a tuple in the
result of applying operator Op to tables T1, ..., Tm, there exists a unique derivation of t in
T1, ..., Tm according to Op.
Example 4.3 (Tuple Derivation for Aggregation) Given table R in Figure 11(a), and tuple
t = ⟨2, 8⟩ in α_{X, sum(Y)}(R) in Figure 11(b), the derivation of t is α⁻¹(⟨2, 8⟩) = {⟨2, 3⟩, ⟨2, 5⟩, ⟨2, 0⟩},
Figure 11: Tuple derivations for aggregation. (a) R; (b) α_{X, sum(Y)}(R); (c) α⁻¹(⟨2, 8⟩) ⊆ R.
shown in Figure 11(c). Notice that R's subset {⟨2, 3⟩, ⟨2, 5⟩} also satisfies requirements (a) and
(b) in Definition 4.1, but it is not maximal. Intuitively, ⟨2, 0⟩ also contributes to the result tuple,
since 8 is computed by adding the Y attributes of ⟨2, 3⟩, ⟨2, 5⟩, and ⟨2, 0⟩
in R.
From Definition 4.1 and the semantics of the operators in Section 4.1, we now specify the
actual tuple derivations for each of our operators.
Theorem 4.4 (Tuple Derivations for Operators) Let T, T1, ..., Tm be tables and t be a result
tuple. Then the derivations of t are as follows:
• Selection: σ_C⁻¹(t) = {t}; a selection result tuple derives from the identical input tuple.
• Projection: π_A⁻¹(t) = {t* ∈ T | t*.A = t}; all input tuples that project onto t contribute to t.
• Join: ⋈⁻¹(t) = ⟨{t.T1}, ..., {t.Tm}⟩; each operand contributes the restriction of t to its own schema.
• Aggregation: α_{G, aggr(B)}⁻¹(t) = {t* ∈ T | t*.G = t.G}; all input tuples in t's group contribute to t.
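As a concrete illustration (our own, using the list-of-dicts table representation sketched in Section 4.1; the function names are assumptions of the sketch), these per-operator derivations can be computed as follows:

def derive_selection(t, T):
    """sigma: a selection result tuple derives from the identical input tuple."""
    return [u for u in T if u == t]            # equals [t] when t is in sigma_C(T)

def derive_projection(t, T, attrs):
    """pi: every input tuple that projects onto t contributes to t."""
    return [u for u in T if {a: u[a] for a in attrs} == t]

def derive_join(t, tables):
    """join: each operand contributes the restriction of t to its own schema."""
    return [[u for u in Ti if all(u[a] == t[a] for a in u)] for Ti in tables]

def derive_aggregation(t, T, group_by):
    """alpha: every input tuple in t's group contributes to the aggregate tuple t."""
    return [u for u in T if all(u[g] == t[g] for g in group_by)]

# Example 4.3 revisited: the derivation of <2, 8> in alpha_{X, sum(Y)}(R).
R = [{"X": 2, "Y": 3}, {"X": 2, "Y": 5}, {"X": 2, "Y": 0}]
print(derive_aggregation({"X": 2, "sum_Y": 8}, R, ["X"]))
# -> [{'X': 2, 'Y': 3}, {'X': 2, 'Y': 5}, {'X': 2, 'Y': 0}]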
4.3 Tuple Derivations for Views
Now that we have defined tuple derivations for the operators, we proceed to define tuple derivations
for views. As mentioned earlier, a view definition can be expressed as a query tree evaluated
bottom-up. Intuitively, if a base tuple t contributes to a tuple t′ in the intermediate result of a
view evaluation, and t′ further contributes to a view tuple t″, then t contributes to t″. We define
a view tuple's derivation to be the set of all base tuples that contribute to the view tuple. The
specific process through which the view tuple is derived can be illustrated by applying the view
query tree to the derivation tuple sets, and presenting the intermediate results for each operator
in the evaluation.
Definition 4.5 (Tuple Derivation for a View) Let D be a database with base tables R1, ..., Rm,
and let V = v(D) be a view over D. Consider a tuple t ∈ V.
1. If v = R is a base table, then t ∈ R contributes to itself in V.
2. Suppose v = Op(v1(D), ..., vk(D)) is a view definition over D, a tuple t′ ∈ vj(D)
contributes to t according to the operator Op (by Definition 4.1), and t* ∈ Ri contributes to
t′ according to the view vj (by this definition, recursively). Then t* contributes to t according
to v.
Figure 12: Tuple derivation for a view. (a) R; (b) σ_{Y≠0}(R); (c) V = α_{X, sum(Y)}(σ_{Y≠0}(R)); (d) v⁻¹(⟨2, 8⟩) ⊆ R.
We define t's derivation in D according to v to be v⁻¹_D(t) = ⟨R1*, ..., Rm*⟩, where R1*, ..., Rm* are
subsets of R1, ..., Rm such that t* ∈ Ri* if and only if t* ∈ Ri
contributes to t according to v, for i = 1..m.
Also, we call Ri* t's
derivation in Ri according to v, denoted as v⁻¹_{Ri}(t). Finally, the derivation
of a view tuple set T contains all base tuples that contribute to any view tuple in the set T:
v⁻¹_D(T) = ∪_{t∈T} v⁻¹_D(t).
Theorem 4.2 can be applied inductively in the obvious way to show that a view tuple's derivation
is unique.
Example 4.6 (Tuple Derivation for a View) Given base table R in Figure 12(a), view
V = α_{X, sum(Y)}(σ_{Y≠0}(R)) in Figure 12(c), and tuple t = ⟨2, 8⟩ ∈ V, it is easy to see that tuples ⟨2, 3⟩
and ⟨2, 5⟩ in R contribute to ⟨2, 3⟩ and ⟨2, 5⟩ in σ_{Y≠0}(R) in Figure 12(b), and further contribute
to ⟨2, 8⟩ in V. The derivation of t is v⁻¹(⟨2, 8⟩) = {⟨2, 3⟩, ⟨2, 5⟩} ⊆
R, as shown in Figure 12(d).
We now state some properties of tuple derivations to provide the groundwork for our derivation
tracing algorithms.
Theorem 4.7 (Derivation Transitivity) Let D be a database with base tables R1, ..., Rm,
and let V = v(D) be a view over D. Suppose that v can also be represented as
v = v′(v1(D), ..., vk(D)), where each Vj = vj(D) is an intermediate view over D, for j = 1..k, and let ⟨V1*, ..., Vk*⟩ be t's
derivation in V1, ..., Vk according to v′. Then t's derivation in D according to v is the concatenation of
the Vj*'s derivations in D according to vj, j = 1..k:
v⁻¹_D(t) = v1⁻¹_D(V1*) ⊕ ... ⊕ vk⁻¹_D(Vk*),
where ⊕ represents the multi-way concatenation of relation lists.²
Theorem 4.7 is a result of Definition 4.5. It shows that given a view V with a complex definition
tree, we can break down its definition query tree into intermediate views, and compute a tuple's
derivation by recursively tracing the hierarchy of intermediate views.
² The concatenation of two relation lists appends one list to the other; relations
are renamed so that the same relation never appears twice.
Since we define tuple derivations inductively based on the view query tree, an interesting
question arises: Are the derivations of two equivalent views also equivalent? Two view definitions
(or query trees) v1 and v2 are equivalent iff ∀D: v1(D) = v2(D). We prove in Theorem 4.8
that given any two equivalent Select-Project-Join (SPJ) views, their tuple derivations are also
equivalent.
Theorem 4.8 (Derivation Equivalence after SPJ Transformation) Tuple derivations of
equivalent SPJ views are equivalent. In other words, given equivalent SPJ views v1 and v2,
∀D, ∀t ∈ v1(D) = v2(D): v1⁻¹_D(t) = v2⁻¹_D(t).
According to Theorem 4.8, we can transform an SPJ view to a simple canonical form before
tracing tuple derivations. Unfortunately, views with aggregation do not have this nice property,
as shown in the following example.
Example 4.9 (Tuple Derivations for Equivalent Views with Aggregation) Let
v1 = α_{X, sum(Y)}(R) and v2 = α_{X, sum(Y)}(σ_{Y≠0}(R)). The two views are equivalent, since a tuple
with Y = 0 does not change the sum of its group.
Given base table R in Figures 11(a) and 12(a), Figures 11(b) and 12(c) show
that the contents of the two views are the same. However, the derivation of tuple ⟨2, 8⟩
according to v1 (shown in Figure 11(c)) is different from that according to v2 (shown in Figure
12(d)).
Given Definition 4.5, a straightforward way to compute a view tuple derivation is to compute
the intermediate results for all operators in the view definition and store them as temporary
tables, then trace the tuple's derivation in the temporary tables recursively, until reaching the
base tables. Obviously, this approach is impractical due to the computation and storage required
for all the intermediate results. In the next two sections, we separately consider SPJ views and
views with aggregation (ASPJ views). We show in Section 5 that one relational query over the
base tables suffices to compute tuple derivations for SPJ views. A recursive algorithm for ASPJ
view derivation tracing that requires a modest amount of auxiliary information is described in
Section 6.
5 SPJ View Derivation Tracing
Derivations for tuples in SPJ views can be computed using a single relational query over the base
data. In this section, we first define the general concept of a derivation tracing query, which
can be applied directly to the base tables to compute a view tuple's derivation. We then specify
tracing queries for SPJ views, and discuss optimization issues for tracing queries.
5.1 Derivation Tracing Queries
Sometimes, we can write a query for a specific view definition v and view tuple t, such that if we
apply the query to the database D it returns t's derivation in D (based on Definition 4.5). We
call such a query a derivation tracing query (or tracing query) for t and v. More formally:
Definition 5.1 (Derivation Tracing Query) Let D be a database with base tables R1, ..., Rm.
Given a view definition v over R1, ..., Rm and a tuple t ∈ v(D), we say that a query TQ_{t,v} is
a derivation tracing query for t and v iff:
TQ_{t,v}(D) = v⁻¹_D(t), i.e., TQ_{t,v}(D) is t's derivation over D according to v, and TQ_{t,v} is independent of the database
instance D. We can similarly define the tracing query for a view tuple set T, and denote it as
TQ_{T,v}(D).
5.2 Tracing Queries for SPJ Views
All SPJ views can be transformed into the form π_A(σ_C(R1 ⋈ ... ⋈ Rm)) using a sequence of SPJ
algebraic transformations [Ull89]. We call this form the SPJ canonical form. From Theorem 4.8,
we know that SPJ transformations do not affect view tuple derivations. Thus, given an SPJ
view, we first transform it into SPJ canonical form, so that its tuple derivations can be computed
systematically using a single query. We first introduce an additional operator used in tracing
queries for SPJ views.
Definition 5.2 (Split Operator) Let T be a table with schema T. The operator Split breaks
T into a list of tables; each table in the list is a projection of T onto a set of attributes Ai ⊆ T:
Split_{A1,...,Am}(T) = ⟨π_{A1}(T), ..., π_{Am}(T)⟩.
Theorem 5.3 (Derivation Tracing Query for an SPJ View) Let D be a database with
base tables R1, ..., Rm, and let v = π_A(σ_C(R1 ⋈ ... ⋈ Rm)) be an SPJ view over D. Given
a tuple t ∈ v(D), t's derivation in D according to v can be computed by applying the following
query to the base tables: TQ_{t,v}(D) = Split_{R1,...,Rm}(σ_{A=t}(σ_C(R1 ⋈ ... ⋈ Rm))).
Given a tuple set T ⊆ v(D), T's derivation tracing query is: TQ_{T,v}(D) = Split_{R1,...,Rm}(σ_C(R1 ⋈ ... ⋈ Rm) ⋉ T),
where ⋉ is the semi-join operator.
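For concreteness, the following Python sketch (ours, over the list-of-dicts representation of Section 4.1; the function name and parameters are assumptions of the sketch) evaluates this tracing query directly over in-memory tables:

from functools import reduce

def natural_join(t1, t2):
    shared = set(t1[0]) & set(t2[0]) if t1 and t2 else set()
    return [{**a, **b} for a in t1 for b in t2
            if all(a[x] == b[x] for x in shared)]

def trace_spj(t, base_tables, schemas, cond, view_attrs):
    """Derivation of view tuple t for v = pi_A(sigma_C(R1 join ... join Rm)):
    evaluate sigma_C over the join, keep the rows that produce t, then Split
    the result onto the base-table schemas (Definition 5.2)."""
    joined = reduce(natural_join, base_tables)
    producing = [u for u in joined
                 if cond(u) and all(u[a] == t[a] for a in view_attrs)]
    derivation = []
    for attrs in schemas:
        rows = {tuple((a, u[a]) for a in attrs) for u in producing}
        derivation.append([dict(r) for r in rows])
    return derivation

An obvious improvement, discussed in Section 5.3, is to push the selection on t and the condition C into the individual base tables before joining.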
Example 5.4 (Tracing Query for Calif) Recall Q1 over view Calif in Example 3.1, where
we asked about the derivation of tuple ⟨Target, pencil, 3000⟩. Figure 13(a) shows the tracing
query for ⟨Target, pencil, 3000⟩ in Calif according to Theorem 5.3. The reader may verify
that by applying the tracing query to the source tables in Figures 1, 2, and 3, we obtain the
derivation result in Figure 6.
5.3 Tracing Query Optimization
The derivation tracing queries in Section 5.2 clearly can be optimized for better performance.
For example, the simple technique of pushing selection conditions below the join operator is
especially applicable in tracing queries, and can significantly reduce query cost. Figure 13(b)
shows the optimized tracing query for the Calif tuple.
If sufficient key information is present, the tracing query is even simpler:
Figure 13: Derivation tracing query TQ_{⟨Target, pencil, 3000⟩, Calif} over source tables store, item, and sales. (a) Unoptimized; (b) Optimized (selections pushed below the join).
Theorem 5.5 (Derivation Tracing using Key Information) Let Ri be a base table with
key Ki, for i = 1..m. If the view's projection attributes A include the base keys (i.e.,
K1 ∪ ... ∪ Km ⊆ A), then given t ∈ v(D), t's
derivation is ⟨σ_{K1=t.K1}(R1), ..., σ_{Km=t.Km}(Rm)⟩.
According to Theorem 5.5, we can use key information to fetch the derivation of a tuple directly
from the base tables, without performing a join. The query complexity is reduced from O(n^m) to
O(mn), where n is the maximum size of the base tables, and m is the number of base tables on
which v is defined.
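Under the key assumption of Theorem 5.5, tracing degenerates to independent per-table selections; a minimal sketch (ours, same table representation as above):

def trace_with_keys(t, base_tables, keys):
    """If the view retains each base table's key, t's derivation in Ri is just
    sigma_{Ki = t.Ki}(Ri): one selection per table, no join at tracing time."""
    return [[u for u in Ri if all(u[k] == t[k] for k in Ki)]
            for Ri, Ki in zip(base_tables, keys)]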
We have shown that tuple derivations for SPJ views can be traced efficiently. For more
complex views with aggregations we cannot compute tuple derivations by a single query over the
base tables. In the next section, we present a recursive tracing algorithm for these views.
6 Derivation Tracing Algorithm for ASPJ Views
In this section, we consider SPJ views with aggregation (ASPJ views). Although we have shown
that no intermediate results are required for SPJ view derivation tracing, some ASPJ views are
not traceable without storing certain intermediate results. For example, in Q2 in Example 3.2
the user asks for the derivation of tuple t = ⟨5400⟩ in the view Clothing. It is not possible to
compute t's derivation directly from store, item, and sales, because total is the only column
of view Clothing, and it is not contained in the base tables at all. Therefore, we cannot find
t's derivation by knowing only that t:total = 5400. In order to trace the derivation correctly,
we need tuple ⟨Macy's, 5400⟩ in the intermediate aggregation result to serve as a "bridge" that
connects the base tables and the view table.
We introduce a canonical form for ASPJ views in Section 6.1. In Section 6.2, we specify the
derivation tracing query for a simple one-level ASPJ view. We then develop a recursive tracing
algorithm for complex ASPJ views and justify its correctness in Section 6.3. As mentioned above,
intermediate (aggregation) results in the view evaluation are needed for derivation tracing. These
intermediate results can either be recomputed from the base tables when needed, or they can be
stored as materialized auxiliary views in a warehouse; this issue is further discussed in Section 7.1.
In the remainder of this section we simply assume that all intermediate aggregation results are
available.
6.1 ASPJ Canonical Form
Unlike SPJ views, ASPJ views do not have a simple canonical form, because in an ASPJ view definition
some selection, projection, and join operators cannot be pushed above or below the aggregation
operators. View Clothing in Figure 7 is such an example, where the selection total > 5000
cannot be pushed below the aggregation, and the selection category = "clothing" cannot be
pulled above the aggregation. However, by commuting and combining some SPJ operators [Ull89],
it is possible to transform a general ASPJ view query tree into a form composed of α-π-σ-⋈ operator
sequences, which we call ASPJ segments. Each segment in the query tree except the topmost
must include a non-trivial aggregation operator. We call this form the ASPJ canonical form.
Definition 6.1 (ASPJ Canonical Form) Let v be an ASPJ view definition over database D.
1. If v = R, where R is a base table in D, then v is in ASPJ canonical form.
2. v = α_{G, aggr(B)}(π_A(σ_C(v1(D) ⋈ ... ⋈ vk(D)))) is in ASPJ canonical form if each vj is an ASPJ view in
ASPJ canonical form, for j = 1..k.
6.2 Derivation Tracing Queries for One-level ASPJ Views
A view defined by one ASPJ segment is called a one-level ASPJ view. Similar to SPJ views, we
can use one query to compute a tuple derivation for a one-level ASPJ view.
Theorem 6.2 (Derivation Tracing Query for a One-Level ASPJ View) Given a one-level
ASPJ view v = α_{G, aggr(B)}(π_A(σ_C(R1 ⋈ ... ⋈ Rm))) and a tuple t ∈ v(D), t's derivation in D according to v can be computed by applying the following
query to the base tables: TQ_{t,v}(D) = Split_{R1,...,Rm}(σ_{G=t.G}(σ_C(R1 ⋈ ... ⋈ Rm))).
Given a tuple set T ⊆ v(D), T's derivation tracing query is: TQ_{T,v}(D) = Split_{R1,...,Rm}(σ_C(R1 ⋈ ... ⋈ Rm) ⋉ T).
Here too, evaluation of the actual tracing query can be optimized in various ways as discussed in
Section 5.3.
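For concreteness, a minimal Python sketch of this one-level tracing step (ours); it assumes the filtered join σ_C(R1 ⋈ ... ⋈ Rm) has already been evaluated, for instance with the SPJ machinery sketched in Section 5, and that tables are lists of attribute-to-value dictionaries:

def trace_one_level_aspj(t, filtered_join, schemas, group_by):
    """One-level ASPJ tracing (Theorem 6.2): keep the joined rows that fall in
    t's group, then Split them onto the base-table schemas."""
    group = [u for u in filtered_join
             if all(u[g] == t[g] for g in group_by)]
    return [[dict(r) for r in {tuple((a, u[a]) for a in attrs) for u in group}]
            for attrs in schemas]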
6.3 Recursive Derivation Tracing Algorithm for Multi-level ASPJ Views
Given a general ASPJ view definition, we first transform the view into ASPJ canonical form,
divide it into a set of ASPJ segments, and define an intermediate view for each segment.
Example 6.3 (ASPJ Segments and Intermediate Views for Clothing) Recall the view
Clothing in Example 3.2. We can rewrite its definition in ASPJ canonical form with two seg-
ments, and introduce an intermediate view AllClothing as shown in Figure 14.
We then trace a tuple's derivation by recursively tracing through the hierarchy of intermediate
views top-down. At each level, we use the tracing query for a one-level ASPJ view to compute
derivations for the current tracing tuples with respect to the view or base tables at the next level
below. Details follow.
Figure 14: ASPJ segments and intermediate views for Clothing. Segment 1 defines the intermediate view AllClothing = α_{store_name, sum(num_sold) as total}(...) over store, item, and sales; segment 2 defines Clothing by the selection total > 5000 over AllClothing, projecting onto total.
6.3.1 Algorithm
Figure 15 presents our recursive derivation tracing algorithm for a general ASPJ view.
Given a view definition v in ASPJ canonical form, and tuple t ∈ v(D), procedure
TupleDerivation(t, v, D) computes the derivation of tuple t according to v over D. The main
algorithm, procedure TableDerivation(T, v, D), computes the derivation of a tuple set T ⊆ v(D)
according to v over D. As discussed earlier, we assume that v = α_{G, aggr(B)}(π_A(σ_C(v1(D) ⋈ ... ⋈ vk(D)))) is
a one-level ASPJ view, and that each Vj = vj(D) is available as a base table or an intermediate view,
for j = 1..k. The procedure first computes T's derivation ⟨V1*, ..., Vk*⟩ in V1, ..., Vk using the
one-level ASPJ view tracing query TQ_{T,v}(V1, ..., Vk) from Theorem 6.2. It then calls procedure
TableListDerivation(⟨V1*, ..., Vk*⟩, ⟨v1, ..., vk⟩, D), which computes (recursively) the
derivation of each tuple set Vj* according to vj, and concatenates the results to form the
derivation of the entire list of view tuple sets.
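The recursion can also be rendered compactly in code. The following Python sketch is ours; the encoding of canonical-form query-tree nodes (a node is either a base table or a one-level segment whose trace field is a caller-supplied callable implementing Theorem 6.2) is purely illustrative.

def tuple_derivation(t, view_def, db):
    """Derivation of a single view tuple t (procedure TupleDerivation)."""
    return table_derivation([t], view_def, db)

def table_derivation(T, view_def, db):
    """Derivation of a tuple set T according to view_def over database db."""
    if view_def["kind"] == "base":
        return {view_def["name"]: T}            # base tuples derive themselves
    operands = view_def["operands"]             # the vj's of this segment
    tables = [db[v["name"]] for v in operands]  # materialized or recomputed (Section 7.1)
    star = view_def["trace"](T, tables)         # <V1*, ..., Vk*>, per Theorem 6.2
    derivation = {}                             # procedure TableListDerivation
    for Vj_star, vj in zip(star, operands):
        for rel, rows in table_derivation(Vj_star, vj, db).items():
            derivation.setdefault(rel, []).extend(rows)
    return derivation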
Example 6.4 (Recursive Derivation Tracing) We divided the view Clothing into two segments
in Example 6.3. We assume that the contents of the intermediate view AllClothing are
available (shown in Figure 16). According to our algorithm, we first compute the derivation T1
of ⟨5400⟩ in AllClothing to obtain T1 = {⟨Macy's, 5400⟩}, and then trace T1's derivation to the
base tables to obtain the derivation result in Figure 9.
Note that we do not necessarily materialize complete intermediate aggregation views such as
AllClothing. In fact, there are many choices of what (if anything) to store. The issue of storing
versus recomputing the intermediate information needed for derivation tracing is discussed in
Section 7.1.
6.3.2 Correctness
To justify the correctness of our algorithm, we claim the following:
1. Transforming a view into ASPJ canonical form does not affect its derivation.
We can "canonicalize" an ASPJ view by transforming each segment between adjacent aggregation
operators into its SPJ canonical form. The process consists only of SPJ transformations
[Ull89]. Theorem 4.8 shows that derivations are unchanged by SPJ transformations.
procedure TupleDerivation(t, v, D)
begin
  return (TableDerivation({t}, v, D));
end

procedure TableDerivation(T, v, D)
begin
  /* v = α_{G, aggr(B)}(π_A(σ_C(v1 ⋈ ... ⋈ vk))) is a one-level ASPJ view;
     Vj = vj(D) is an intermediate view or a base table, j = 1..k */
  ⟨V1*, ..., Vk*⟩ := TQ_{T,v}(V1, ..., Vk);
  return (TableListDerivation(⟨V1*, ..., Vk*⟩, ⟨v1, ..., vk⟩, D));
end

procedure TableListDerivation(⟨T1, ..., Tk⟩, ⟨v1, ..., vk⟩, D)
begin
  for j := 1 to k do
    Dj* := TableDerivation(Tj, vj, D);
  return (D1* ⊕ ... ⊕ Dk*);
end
Figure 15: Algorithm for ASPJ view tuple derivation tracing
store name total
Target 1400
Macy's 5400
Figure 16: AllClothing table
2. It is correct to trace derivations recursively down the view definition tree.
From Theorem 4.7, we know that derivations are transitive through levels of the view definition
tree. Thus, when tracing tuple derivations for a canonicalized ASPJ view, we can
first divide its definition into one-level ASPJ views, and then compute derivations for the
intermediate views in a top-down manner.
Our recursive derivation tracing algorithm can be used to trace the derivation of any tuple
in any ASPJ view in a conventional database. However, certain additional issues arise when
performing derivation tracing in a multi-source warehousing environment, which we proceed to
discuss in the next section.
7 Derivation Tracing in a Warehousing Environment
In Sections 5 and 6 we presented algorithms to trace view tuple derivations. Our algorithms
assume that all of the base tables as well as the intermediate views are accessible, and that they
are consistent with the view being traced. These assumptions may not hold when we are tracing
derivations for a warehouse view defined on remote distributed sources. The following problems
may arise:
1. Efficiency problem: Querying remote sources and performing selections and joins (in the
tracing queries) over them for each tuple derivation trace can be very inefficient. Also,
recomputing intermediate views for tracing multi-level ASPJ views can be expensive.
2. Consistency problem: The warehouse may not refresh its views in real time, which means
that warehouse views can become out of date. Thus, we cannot always compute the derivation
of tuples in the "old" view from the "new" source base tables. For example, if a base
tuple in the derivation of a view tuple has just been deleted from the source, but the change
has not yet been propagated to the view, then the user sees the view tuple but cannot
correctly trace its derivation since a relevant base tuple is gone.
3. Legacy source problem: Views defined on inaccessible legacy sources are not traceable because
the base tables are not available.
Storing auxiliary views in the warehouse to reduce computation cost and to avoid querying
the sources [QGMW96] is a solution that solves all three of the above problems. The price
is extra storage and view maintenance costs. In Section 7.1, we consider the trade-offs between
materializing and recomputing intermediate results, and propose to store intermediate aggregation
results to improve overall performance. In Section 7.2, we introduce derivation views, which store
information about sources so that view derivations always can be computed without querying the
sources.
7.1 Materializing vs. Recomputing Intermediate Aggregate Results
In Section 6, we saw that intermediate aggregation results are needed for derivation tracing
in the general case. There are two ways to obtain such information. One is to recompute the
intermediate result during the tracing process. This approach requires no permanent extra storage,
but the tracing process takes longer, especially when the recomputation may require querying the
sources. The other way is to maintain materialized auxiliary views containing the intermediate
results. In this case, less computation is required at tracing time, but the auxiliary views must be
stored and kept up-to-date. Due to the characteristics of warehousing environments, we suggest
warehouses maintain the intermediate aggregation results as materialized auxiliary views rather
than recomputing them [QGMW96].
Example 7.1 (Materialized View for AllClothing) To improve the efficiency of the tracing
process in Example 6.4, we materialize auxiliary view AllClothing (in Figure 14) with the contents
shown in Figure 16.
Note that when materializing AllClothing, tuple ⟨Target, 1400⟩ is not used when tracing
tuple derivations for Clothing. In fact it is filtered out by the selection condition σ_{total>5000}
in Clothing's definition. In this case, materializing the result of V′ = σ_{total>5000}(AllClothing)
instead of storing AllClothing seems to be a better choice. However, notice that V′ is not incrementally
maintainable without storing AllClothing. Thus we would either need to recompute V′
for each relevant update, which would incur a high maintenance cost, or we need to store
AllClothing in any case in order to maintain V′. Therefore, materializing V′ is not actually
likely to be an improvement. In general, given a view definition tree where selections are pushed
down as far as possible, all selection conditions above an aggregate must be on the summary at-
tributes, and therefore are not incrementally maintainable without storing the entire aggregation
results.
7.2 Storing Derivation Views in a Multi-Source Warehouse
The problems described earlier (e.g., inefficiency, inconsistency) in warehouse view derivation
tracing arise when we apply our tracing queries to remote sources. We therefore may prefer to store
auxiliary information about the sources in the warehouse, in order to avoid source queries during
each derivation trace. Note that these auxiliary views may be in addition to the intermediate
views for aggregate view tracing discussed in Section 7.1.
There are various strategies for storing such auxiliary views. A simple extreme solution is to
store a copy of each source table in the warehouse. Our algorithm will then query the base views
as if they are the sources. However, this solution can be costly and wasteful if a source is large,
especially if much of its data does not contribute to the view. Also, computing selections and joins
over large base views each time a tuple's derivation is traced can be expensive. Other solutions
can in some cases store much less information and still enable derivation tracing without accessing
the sources, but the maintenance cost is much higher. We propose an intermediate scheme that
achieves low tracing query cost with modest extra storage and maintenance cost.
After adding auxiliary views as described in Section 7.1, the view definition is broken down
into multiple ASPJ segments. Only views defined by the lowest-level segments are directly over
the source tables, and it is these views that are of concern. Let V = v(D) be such a
view. Based on V, we introduce an auxiliary view, called the derivation view for V. It contains
information about the derivation of each tuple in V over R1, ..., Rm, as specified in Definition 7.2
given next. Theorem 7.3 then shows that any V tuple's derivation in R1, ..., Rm can be computed
with a simple selection and split operation over the derivation view.
Definition 7.2 (Derivation View) Let v = α_{G, aggr(B)}(π_A(σ_C(R1 ⋈ ... ⋈
Rm))) be a one-level ASPJ view over the source tables. The derivation view for V, denoted as
DV(v), is DV(v) = σ_C(R1 ⋈ ... ⋈ Rm).
Theorem 7.3 (Derivation Tracing using the Derivation View) Let V = v(D) be a one-level
ASPJ view over base tables: v = α_{G, aggr(B)}(π_A(σ_C(R1 ⋈ ... ⋈ Rm))), and let
DV(v) be v's derivation view as defined in Definition 7.2. Given a tuple t ∈ V, t's derivation can
be computed using DV(v) as follows: v⁻¹_D(t) = Split_{R1,...,Rm}(σ_{G=t.G}(DV(v))).
Given a tuple set T ⊆ V, T's derivation is: v⁻¹_D(T) = Split_{R1,...,Rm}(DV(v) ⋉ T).
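A sketch of tracing over a materialized derivation view (ours, same list-of-dicts representation as earlier sketches); compared with the one-level tracing sketch of Section 6.2, the only difference is that DV(v) is read from the warehouse rather than recomputed from the sources.

def trace_with_derivation_view(t, dv, schemas, group_by):
    """Theorem 7.3: select t's group from DV(v) = sigma_C(R1 join ... join Rm)
    and Split it onto the source schemas -- no source access and no join at
    tracing time."""
    group = [u for u in dv if all(u[g] == t[g] for g in group_by)]
    return [[dict(r) for r in {tuple((a, u[a]) for a in attrs) for u in group}]
            for attrs in schemas]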
Example 7.4 (Derivation View for AllClothing) The view AllClothing in Example 7.1 is
defined on base tables store, item, and sales. Suppose these base tables are located in remote
sources that we cannot or do not wish to access. In order to trace tuple derivations for
AllClothing, we maintain a derivation view DV AllClothing. Figure 17 shows the derivation
view definition, and Figure 18 shows its contents. The derivation tracer needs only to query
DV AllClothing to compute view AllClothing's tuple derivations, as shown in Figure 17.
Using known techniques, the auxiliary intermediate views and derivation views can be maintained
consistently with other views in the warehouse [ZGMW96, ZWGM97]. Note that in cases of
warehousing environments where the sources are inaccessible, the auxiliary views themselves need
to be made self-maintainable. Known techniques can be used here as well [GJM96, QGMW96].
Figure 17: View definition for DV_AllClothing = σ_{category="clothing"}(store ⋈ item ⋈ sales), with AllClothing = α_{store_name, sum(num_sold) as total}(DV_AllClothing).
store id store name city state item id item name category price num sold
001 Target Palo Alto CA 0004 pants clothing
Albany NY 0004 pants clothing 35 800
003 Macy's San Francisco CA 0003 shirt clothing 45 1500
003 Macy's San Francisco CA 0004 pants clothing
004 Macy's New York City NY 0003 shirt clothing 50 2100
004 Macy's New York City NY 0004 pants clothing 70 1200
Figure 18: DV_AllClothing table
7.3 A Warehousing System Supporting Derivation Tracing
Figure 19 illustrates an overall warehouse structure that supports tuple derivation tracing with
materialized intermediate aggregation results and derivation views. Recall question Q2 from Example
3.2. The query tree on the left side of Figure 19 is the original definition of view Clothing.
In order to trace Clothing's tuple derivations, an auxiliary view AllClothing is maintained to
record the intermediate aggregation results (as discussed in Section 7.1). Furthermore, to trace tuples
in AllClothing, derivation view DV AllClothing is maintained (as discussed in Section 7.2).
The final set of materialized views is:
DV_AllClothing = σ_{category="clothing"}(store ⋈ item ⋈ sales),
AllClothing = α_{store_name, sum(num_sold) as total}(DV_AllClothing), and
Clothing = π_total(σ_{total>5000}(AllClothing)).
Each view can be computed and maintained based on the views (or base tables) directly
beneath it using warehouse view maintenance techniques [QGMW96, ZWGM97]. Solid arrows
on the right side of Figure 19 show the query and answer data flow. Ordinary view queries are
sent to the view Clothing, while derivation queries are sent to the Derivation Tracer module.
The tracer takes a request for the derivation of a tuple t in Clothing and queries auxiliary view
AllClothing for t's derivation T1 in AllClothing as specified in Theorem 5.3. The tracer then
queries DV_AllClothing for the derivation T2 of T1 over D as specified in Theorem 7.3. T2 is t's
derivation over D.
There are alternative derivation views to the one we described that trade tracing query cost for
Figure 19: Derivation tracing in the warehouse system
storage or maintenance cost. One simple option is to split the derivation view into separate tables
that contain the base tuples of each source relation that contribute to the view. This scheme may
reduce the storage requirement, but tracing queries must then recompute the join. Of course if
accessing the sources is cheap and reliable, then it may be preferable to query the sources directly.
However, a compensation log [HZ96] may be needed to keep the tracing result consistent with the
warehouse views. Determining whether it is better to materialize the necessary information for
derivation tracing or query the sources and recompute information during tracing time in a given
setting (based on query cost, update cost, storage constraints, source availability, etc.) is an
interesting question left open for future work, and is closely related to results in [Gup97, LQA97].
8 Related Work Revisited
In this section, we revisit some related topics, including top-down Datalog query processing and
the view update problem, and examine the differences between those problems and ours. We also
show how lineage tracing can be applied to the view update problem.
8.1 Top-down Datalog Query Processing
In Datalog, relations are represented as predicates, tuples are atoms (or facts), and queries or
views are represented by logical rules. Each rule contains a head (or goal) and a body with some
subgoals that can (possibly recursively) derive the head [Ull89].
There are two modes of reasoning in Datalog: the bottom-up (or forward-chaining) mode and
the top-down (or backward-chaining) mode. The top-down mode proves a goal by constructing a
rule-goal graph with the goal as the top node, scanning the graph top-down, and recursively applying
rule-goal unification and atom matching until finding an instantiation of all of the subgoals
in the base data. Backtracking is used if a dead-end is met in the searching (proving) process.
Top-down evaluation of a Datalog goal thus provides information about the facts in the base
data that yield the goal; in other words, it provides the lineage of the goal tuple. Our approach
to tracing tuple lineage is obviously different from Datalog top-down processing. Instead of
performing rule-goal unification and atom matching one tuple at a time, we generate a single
query to retrieve all lineage tuples of a given tuple (or tuple set) in an SPJ or one-level ASPJ
view. Our approach is better suited to tracing query optimization (as described in Section 5.3),
and we support lineage tracing for aggregation views, which are not handled in Datalog. We do
not handle recursion in this paper, although we believe our approach can be extended to recursive
views while maintaining efficiency.
8.2 View Update Problem
The well-known view update problem is to transform updates on views into updates against the
base tables, so that the new base tables will continue to derive the updated view. The problem
was first formulated in [DB78].
A view update can be an insertion, deletion, or modification of a view tuple. [Mas84] deduced a
set of view update translation rules for different view update commands against select, project, and
join views. Given a view update command, in some cases more than one set of base table updates
can achieve the same view update effect. Much effort has been focused on finding appropriate
translations for specific cases [BS81, CM89, Kel86, LS91]. In general, extra semantic information
is needed to choose a translation.
View update algorithms cannot be used directly to compute view tuple derivations. First,
none of the algorithms we know of consider aggregation. Second, although the algorithms do
identify a set of base tuples that can affect a view tuple, the base tuples identified may not even
derive the view tuple, and therefore do not satisfy our Definition 4.5 of a view tuple derivation.
In general, the view update approach and the derivation tracing approach answer two different
questions: "Which base tuples can affect a view tuple?" and "Which base tuples exactly derive
a view tuple?", respectively. The two questions are not equivalent, but they are related to each
other. Our derivation tracing algorithms can be used to guide the view update process to find an
appropriate view update translation in some cases. For deletions and modifications, our derivation
tracing algorithms can directly identify an appropriate set of base relation tuples to modify, as
shown in the following example.
Example 8.1 (View Update: Deletion) In Example 4.6 (Figure 12), we illustrated the
derivation for ⟨2, 8⟩ in view V = α_{X, sum(Y)}(σ_{Y≠0}(R)). When the view update command "delete
⟨2, 8⟩ from V" is issued, we can use the tuple derivation to determine that ⟨2, 3⟩ and ⟨2, 5⟩
should be deleted, and these should be the only changes. The updated base table will be
R − {⟨2, 3⟩, ⟨2, 5⟩}, which derives the updated view {⟨1, 6⟩}. Note that without using tuple
derivation tracing, a more naive algorithm might choose also to delete ⟨2, 0⟩, which maintains
"correctness" but deletes more than necessary.
For insertions, the problem is harder, since the view tuple being inserted as well as its derivation
do not currently exist. Our derivation tracing algorithms can be adapted to identify some
components of the possible derivations of a view tuple being inserted, thereby guiding the base
tuple insertions. Any attribute that is not projected into the view must be guessed using extra
semantics, such as user instructions or base table constraints, or left null. Even here, derivation
tracing can help in the "guessing" process in certain cases, as shown in the following example.
Example 8.2 (View Update: Insertion) Suppose view update "insert ⟨3, 2⟩ into V" is issued
to the view in Example 4.6 (Figure 12). Since ⟨3, 2⟩ is not in view V, we cannot ask for its
current derivation. We only know that after we update R, R should produce a new view with ⟨3, 2⟩
in it. According to the tracing query for V (as specified in Theorem 6.2), we can guess that after
the update, the derivation R* of ⟨3, 2⟩ must satisfy the condition: ∀t ∈ R*: t.X = 3 and t.Y ≠ 0, with sum(R*.Y) = 2.
Assuming a constraint that R.Y attributes are positive integers, and considering the requirement
sum(R*.Y) = 2, we can also assert that ∀t ∈ R*: t.Y ≤ 2. By further assuming a constraint that
R has no duplicates, we can assert that R* cannot contain two copies of ⟨3, 1⟩. Putting all these assertions
together, the only potential derivation of ⟨3, 2⟩ is {⟨3, 2⟩}, so an appropriate base table update is to
insert ⟨3, 2⟩ into R.
Notice that the inserted tuple in Example 8.2 was carefully chosen. If ⟨3, 8⟩ were inserted
instead, we would have to randomly pick a translation from the reasonable ones or ask the user
to choose. Even in this case, lineage tracing techniques incorporated with base table constraints
can be very useful in reducing the number of possible translations.
9 Conclusions and Future Work
We formulated the view data lineage problem and presented lineage tracing algorithms for relational
views with aggregation. Our algorithms identify the exact subset of base data that produced
a given view data item. We also presented techniques for efficient and consistent lineage tracing in
a multi-source warehousing system. Our results can form the basis of a tool by which an analyst
can browse warehouse data, then "drill-down" to the source data that produced warehouse data
of interest. Follow-on and future work includes the following.
ffl We have extended the results in this paper to view definitions that use bag (instead of set)
semantics, and to additional relational operators including [ and \Gamma. Please see [CWW98].
We also plan to extend our work to handle recursive views.
ffl Tuple derivations as defined in this paper explain how certain base relation tuples cause
certain view tuples to exist. As such, derivation tracing is a useful technique for investigating
the origins of potentially erroneous view data. However, in some cases a view tuple may
be erroneous not (only) because the base tuples that derive it are erroneous, but because
base relation tuples that should appear in the derivation are missing. For example, a base
tuple may contribute to the wrong group in an aggregate view because its grouping value is
incorrect. We plan to explore how this "missing derivation data" problem can be addressed
in our lineage framework.
ffl In Sections 7.1 and 7.2 we discussed trade-offs associated with materializing versus recomputing
intermediate and derivation views, and we mentioned briefly self-maintainability of
auxiliary views. We are in the process of conducting a comparative performance study of
the various options.
ffl We will apply our derivation tracing techniques to the view schema evolution problem, as
motivated in Section 1.
ffl We will further study how lineage tracing can be incorporated with existing techniques to
help solve the view update problem. As seen in Example 8.2, one interesting problem is to
extend our derivation tracing algorithms to handle tuples not yet in the view.
Most importantly, we are implementing a lineage tracing package within the WHIPS data
warehousing prototype at Stanford [WGL+96]. Once the basic algorithms are completed,
we plan to experiment with appropriate user interface tools through which an analyst can
obtain and browse derivation information. For example, the analyst may wish to see not
only the base derivation data itself, but also a representation of the process by which the
view data item was derived.
In summary, data lineage is a rich problem with many interesting applications. In this paper
we provide an initial practical solution for lineage tracing in data warehouses, and we plan to
extend our work in the many directions outlined above.
Acknowledgements
We are grateful to Sudarshan Chawathe, Himanshu Gupta, Jeff Ullman, Vasilis Vassalos, Yue
Zhuge, and all of our WHIPS group colleagues for helpful and enlightening discussions.
--R
Update semantics of relational views.
An overview of data warehousing and OLAP technology.
Derived data update in semantic databases.
A complete solution for tracing the lineage of relational view data.
On the updatability of relational views.
Recovering information from summary data.
Data cube: A relational aggregation operator generalizing group-by
Data integration using self-maintainable views
Maintenance of materialized views: Problems
Maintaining views incrementally.
Selection of views to materialize in a data warehouse.
Issues for on-line analytical mining of data warehouses
Managing derived data in the Gaea scientific DBMS.
Implementing data cubes effi- ciently
A framework for supporting data integration using the materialized and virtual approaches.
Choosing a view update translator by Dialog at view definition time.
Physical database design for data ware- housing
Updating relational views using knowledge at view definition and view update time.
A relational database view update translation mechanism.
Maintenance of data cubes and summary tables in a warehouse.
Making views self-maintainable for data warehousing
Maintenance expressions for views with aggregation.
Incremental recomputation of active relational expres- sions
Database and Knowledge-base Systems (Vol. 2)
Research issues in data warehousing.
A system prototype for warehouse view maintenance.
Research problems in data warehousing.
Supporting fine-grained data lineage in a database visualization environment
The Strobe algorithms for multi-source warehouse consistency
Multiple view consistency for data warehousing.
--TR
Principles of database and knowledge-base systems, Vol. I
Research problems in data warehousing
A framework for supporting data integration using the materialized and virtual approaches
An overview of data warehousing and OLAP technology
Update semantics of relational views
The Strobe algorithms for multi-source warehouse consistency
Making views self-maintainable for data warehousing
Implementation of integrity constraints and views by query modification
Data Integration using Self-Maintainable Views
Supporting Fine-grained Data Lineage in a Database Visualization Environment
Multiple View Consistency for Data Warehousing
Physical Database Design for Data Warehouses
Data Cube
Concurrency Control Theory for Deferred Materialized Views
Selection of Views to Materialize in a Data Warehouse
Managing Derived Data in the Gaea Scientific DBMS
Aggregate-Query Processing in Data Warehousing Environments
Recovering Information from Summary Data
--CTR
Ling Wang , Elke A. Rundensteiner , Murali Mani , Ming Jiang, HUX: a schemacentric approach for updating XML views, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Hao Fan , Alexandra Poulovassilis, Using AutoMed metadata in data warehousing environments, Proceedings of the 6th ACM international workshop on Data warehousing and OLAP, November 07-07, 2003, New Orleans, Louisiana, USA
Laura Chiticariu , Wang-Chiew Tan, Debugging schema mappings with routes, Proceedings of the 32nd international conference on Very large data bases, September 12-15, 2006, Seoul, Korea
Deepavali Bhagwat , Laura Chiticariu , Wang-Chiew Tan , Gaurav Vijayvargiya, An annotation management system for relational databases, Proceedings of the Thirtieth international conference on Very large data bases, p.900-911, August 31-September 03, 2004, Toronto, Canada
Yingwei Cui , Jennifer Widom, Lineage Tracing for General Data Warehouse Transformations, Proceedings of the 27th International Conference on Very Large Data Bases, p.471-480, September 11-14, 2001
Gao Cong , Wenfei Fan , Floris Geerts, Annotation propagation revisited for key preserving views, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Y. Cui , J. Widom, Lineage tracing for general data warehouse transformations, The VLDB Journal The International Journal on Very Large Data Bases, v.12 n.1, p.41-58, May
Peter Buneman , Wang-Chiew Tan, Provenance in databases, Proceedings of the 2007 ACM SIGMOD international conference on Management of data, June 11-14, 2007, Beijing, China
Todd J. Green , Grigoris Karvounarakis , Val Tannen, Provenance semirings, Proceedings of the twenty-sixth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 11-13, 2007, Beijing, China
James Annis , Yong Zhao , Jens Voeckler , Michael Wilde , Steve Kent , Ian Foster, Applying Chimera virtual data concepts to cluster finding in the Sloan Sky Survey, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-14, November 16, 2002, Baltimore, Maryland
Peter Buneman , Sanjeev Khanna , Wang-Chiew Tan, On propagation of deletions and annotations through views, Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 03-05, 2002, Madison, Wisconsin
Alon Halevy , Michael Franklin , David Maier, Principles of dataspace systems, Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.1-9, June 26-28, 2006, Chicago, IL, USA
Michael Bhlen , Johann Gamper , Christian S. Jensen, An algebraic framework for temporal attribute characteristics, Annals of Mathematics and Artificial Intelligence, v.46 n.3, p.349-374, March 2006
Omar Benjelloun , Anish Das Sarma , Alon Halevy , Jennifer Widom, ULDBs: databases with uncertainty and lineage, Proceedings of the 32nd international conference on Very large data bases, September 12-15, 2006, Seoul, Korea
Rajendra Bose , James Frew, Lineage retrieval for scientific data processing: a survey, ACM Computing Surveys (CSUR), v.37 n.1, p.1-28, March 2005
David T. Liu , Michael J. Franklin, GridDB: a data-centric overlay for scientific grids, Proceedings of the Thirtieth international conference on Very large data bases, p.600-611, August 31-September 03, 2004, Toronto, Canada
S. B. Davidson , J. Crabtree , B. P. Brunk , J. Schug , V. Tannen , G. C. Overton , C. J. Stoeckert, Jr., K2/Kleisli and GUS: experiments in integrated access to genomic data sources, IBM Systems Journal, v.40 n.2, p.512-531, February 2001 | lineage;materialized views;data warehouse;derivation |
358017 | A method for obtaining digital signatures and public-key cryptosystems. | An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. A message can be signed using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in electronic mail and electronic funds transfer systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n. | Introduction
The era of "electronic mail" [10] may soon be upon us; we must ensure that two
important properties of the current "paper mail" system are preserved: (a) messages
are private, and (b) messages can be signed . We demonstrate in this paper how to
build these capabilities into an electronic mail system.
At the heart of our proposal is a new encryption method. This method provides
an implementation of a "public-key cryptosystem," an elegant concept invented by
Diffie and Hellman [1]. Their article motivated our research, since they presented
the concept but not any practical implementation of such a system. Readers familiar
with [1] may wish to skip directly to Section V for a description of our method.
II Public-Key Cryptosystems
In a "public key cryptosystem" each user places in a public file an encryption procedure
That is, the public file is a directory giving the encryption procedure of each
user. The user keeps secret the details of his corresponding decryption procedure D.
These procedures have the following four properties:
(a) Deciphering the enciphered form of a message M yields M . Formally, D(E(M)) = M. (1)
(b) Both E and D are easy to compute.
(c) By publicly revealing E the user does not reveal an easy way to compute D.
This means that in practice only he can decrypt messages encrypted with E, or
compute D efficiently.
(d) If a message M is first deciphered and then enciphered, M is the result. Formally, E(D(M)) = M. (2)
An encryption (or decryption) procedure typically consists of a general method
and an encryption key. The general method, under control of the key, enciphers a
message M to obtain the enciphered form of the message, called the ciphertext C.
Everyone can use the same general method; the security of a given procedure will rest
on the security of the key. Revealing an encryption algorithm then means revealing
the key.
When the user reveals E he reveals a very inefficient method of computing D(C):
testing all possible messages M until one such that E(M) = C is found. If property
(c) is satisfied the number of such messages to test will be so large that this approach
is impractical.
A function E satisfying (a)-(c) is a "trap-door one-way function;" if it also satisfies
(d) it is a "trap-door one-way permutation." Diffie and Hellman [1] introduced the
concept of trap-door one-way functions but did not present any examples. These
functions are called "one-way" because they are easy to compute in one direction but
(apparently) very difficult to compute in the other direction. They are called "trap-
door" functions since the inverse functions are in fact easy to compute once certain
private "trap-door" information is known. A trap-door one-way function which also
satisfies (d) must be a permutation: every message is the cipertext for some other
message and every ciphertext is itself a permissible message. (The mapping is "one-
to-one" and "onto"). Property (d) is needed only to implement "signatures."
The reader is encouraged to read Diffie and Hellman's excellent article [1] for
further background, for elaboration of the concept of a public-key cryptosystem, and
for a discussion of other problems in the area of cryptography. The ways in which
a public-key cryptosystem can ensure privacy and enable "signatures" (described in
Sections III and IV below) are also due to Diffie and Hellman.
For our scenarios we suppose that A and B (also known as Alice and Bob) are
two users of a public-key cryptosystem. We will distinguish their encryption and
decryption procedures with subscripts: EA, DA, EB, DB.
III Privacy
Encryption is the standard means of rendering a communication private. The sender
enciphers each message before transmitting it to the receiver. The receiver (but no
unauthorized person) knows the appropriate deciphering function to apply to the
received message to obtain the original message. An eavesdropper who hears the
transmitted message hears only "garbage" (the ciphertext) which makes no sense to
him since he does not know how to decrypt it.
The large volume of personal and sensitive information currently held in computerized
data banks and transmitted over telephone lines makes encryption increasingly
important. In recognition of the fact that efficient, high-quality encryption techniques
are very much needed but are in short supply, the National Bureau of Standards has
recently adopted a "Data Encryption Standard" [13, 14], developed at IBM. The new
standard does not have property (c), needed to implement a public-key cryptosystem.
All classical encryption methods (including the NBS standard) suffer from the
"key distribution problem." The problem is that before a private communication can
begin, another private transaction is necessary to distribute corresponding encryption
and decryption keys to the sender and receiver, respectively. Typically a private
courier is used to carry a key from the sender to the receiver. Such a practice is not
feasible if an electronic mail system is to be rapid and inexpensive. A public-key
cryptosystem needs no private couriers; the keys can be distributed over the insecure
communications channel.
How can Bob send a private message M to Alice in a public-key cryptosystem?
First, he retrieves EA from the public file. Then he sends her the enciphered message
EA(M). Alice deciphers the message by computing DA(EA(M)) = M. By property
(c) of the public-key cryptosystem only she can decipher EA(M). She can encipher a
private response with EB , also available in the public file.
Observe that no private transactions between Alice and Bob are needed to establish
private communication. The only "setup" required is that each user who wishes
to receive private communications must place his enciphering algorithm in the public
file.
Two users can also establish private communication over an insecure communications
channel without consulting a public file. Each user sends his encryption key
to the other. Afterwards all messages are enciphered with the encryption key of the
recipient, as in the public-key system. An intruder listening in on the channel cannot
decipher any messages, since it is not possible to derive the decryption keys from the
encryption keys. (We assume that the intruder cannot modify or insert messages into
the channel.) Ralph Merkle has developed another solution [5] to this problem.
A public-key cryptosystem can be used to "bootstrap" into a standard encryption
scheme such as the NBS method. Once secure communications have been established,
the first message transmitted can be a key to use in the NBS scheme to encode all
following messages. This may be desirable if encryption with our method is slower
than with the standard scheme. (The NBS scheme is probably somewhat faster if
special-purpose hardware encryption devices are used; our scheme may be faster on
a general-purpose computer since multiprecision arithmetic operations are simpler to
implement than complicated bit manipulations.)
IV Signatures
If electronic mail systems are to replace the existing paper mail system for business
transactions, "signing" an electronic message must be possible. The recipient of a
signed message has proof that the message originated from the sender. This quality
is stronger than mere authentication (where the recipient can verify that the message
came from the sender); the recipient can convince a "judge" that the signer sent the
message. To do so, he must convince the judge that he did not forge the signed
message himself! In an authentication problem the recipient does not worry about
this possibility, since he only wants to satisfy himself that the message came from the
sender.
An electronic signature must be message-dependent, as well as signer-dependent.
Otherwise the recipient could modify the message before showing the message-signature
pair to a judge. Or he could attach the signature to any message whatsoever, since
it is impossible to detect electronic "cutting and pasting."
To implement signatures the public-key cryptosystem must be implemented with
trap-door one-way permutations (i.e. have property (d)), since the decryption algorithm
will be applied to unenciphered messages.
How can user Bob send Alice a "signed" message M in a public-key cryptosystem?
He first computes his "signature" S for the message M using DB :
(Deciphering an unenciphered message "makes sense" by property (d) of a public-key
cryptosystem: each message is the ciphertext for some other message.) He then
encrypts S using EA (for privacy), and sends the result EA (S) to Alice. He need not
send M as well; it can be computed from S.
Alice first decrypts the ciphertext with DA to obtain S. She knows who is the
presumed sender of the signature (in this case, Bob); this can be given if necessary in
plain text attached to S. She then extracts the message with the encryption procedure
of the sender, in this case EB (available on the public file):
She now possesses a message-signature pair (M;S) with properties similar to those
of a signed paper document.
Bob cannot later deny having sent Alice this message, since no one else could have
created S = DB(M). Alice can convince a "judge" that EB(S) = M, so she has proof
that Bob signed the document.
Alice cannot modify M to a different version M′, since then she would
have to create the corresponding signature S′ = DB(M′) as well.
Therefore Alice has received a message "signed" by Bob, which she can "prove"
that he sent, but which she cannot modify. (Nor can she forge his signature for any
other message.)
An electronic checking system could be based on a signature system such as the
above. It is easy to imagine an encryption device in your home terminal allowing
you to sign checks that get sent by electronic mail to the payee. It would only be
necessary to include a unique check number in each check so that even if the payee
copies the check the bank will only honor the first version it sees.
Another possibility arises if encryption devices can be made fast enough: it will
be possible to have a telephone conversation in which every word spoken is signed by
the encryption device before transmission.
When encryption is used for signatures as above, it is important that the encryption
device not be "wired in" between the terminal (or computer) and the communications
channel, since a message may have to be successively enciphered with
several keys. It is perhaps more natural to view the encryption device as a "hardware
subroutine" that can be executed as needed.
We have assumed above that each user can always access the public file reliably.
In a "computer network" this might be difficult; an "intruder" might forge messages
purporting to be from the public file. The user would like to be sure that he actually
obtains the encryption procedure of his desired correspondent and not, say, the encryption
procedure of the intruder. This danger disappears if the public file "signs"
each message it sends to a user. The user can check the signature with the public file's
encryption algorithm E PF . The problem of "looking up" E PF itself in the public file
is avoided by giving each user a description of E PF when he first shows up (in person)
to join the public-key cryptosystem and to deposit his public encryption procedure.
He then stores this description rather than ever looking it up again. The need for a
courier between every pair of users has thus been replaced by the requirement for a
single secure meeting between each user and the public file manager when the user
joins the system. Another solution is to give each user, when he signs up, a book
(like a telephone directory) containing all the encryption keys of users in the system.
Our Encryption and Decryption Methods
To encrypt a message M with our method, using a public encryption key (e; n),
proceed as follows. (Here e and n are a pair of positive integers.)
First, represent the message as an integer between 0 and n − 1. (Break a long
message into a series of blocks, and represent each block as such an integer.) Use any
standard representation. The purpose here is not to encrypt the message but only to
get it into the numeric form necessary for encryption.
Then, encrypt the message by raising it to the eth power modulo n. That is, the
result (the ciphertext C) is the remainder when M e is divided by n.
To decrypt the ciphertext, raise it to another power d, again modulo n. The
encryption and decryption algorithms E and D are thus:
C ≡ E(M) ≡ M^e (mod n), for a message M;
D(C) ≡ C^d (mod n), for a ciphertext C.
Note that encryption does not increase the size of a message; both the message
and the ciphertext are integers in the range 0 to n - 1.
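As an illustration, both operations are a single modular exponentiation; a minimal Python sketch, assuming the message has already been represented as an integer in the range 0 to n - 1 (Python's built-in three-argument pow computes powers modulo n):

def encrypt(M, e, n):
    # C = M^e mod n
    return pow(M, e, n)

def decrypt(C, d, n):
    # M = C^d mod n
    return pow(C, d, n)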
The encryption key is thus the pair of positive integers (e; n). Similarly, the
decryption key is the pair of positive integers (d, n). Each user makes his encryption
key public, and keeps the corresponding decryption key private. (These integers
should properly be subscripted as in n_A, e_A, and d_A, since each user has his own set.
However, we will only consider a typical set, and will omit the subscripts.)
How should you choose your encryption and decryption keys, if you want to use
our method?
You first compute n as the product of two primes p and q:
n = p · q.
These primes are very large, "random" primes. Although you will make n public,
the factors p and q will be effectively hidden from everyone else due to the enormous
difficulty of factoring n. This also hides the way d can be derived from e.
You then pick the integer d to be a large, random integer which is relatively prime
to (p - 1) · (q - 1). That is, check that d satisfies:
gcd(d, (p - 1) · (q - 1)) = 1
("gcd" means "greatest common divisor").
The integer e is finally computed from p, q, and d to be the "multiplicative inverse"
of d, modulo (p - 1) · (q - 1). Thus we have
e · d ≡ 1 (mod (p - 1) · (q - 1)).
We prove in the next section that this guarantees that (1) and (2) hold, i.e. that E
and D are inverse permutations. Section VII shows how each of the above operations
can be done efficiently.
The aforementioned method should not be confused with the "exponentiation"
technique presented by Diffie and Hellman [1] to solve the key distribution problem.
Their technique permits two users to determine a key in common to be used in a
normal cryptographic system. It is not based on a trap-door one-way permutation.
Pohlig and Hellman [8] study a scheme related to ours, where exponentiation is done
modulo a prime number.
VI The Underlying Mathematics
We demonstrate the correctness of the deciphering algorithm using an identity due
to Euler and Fermat [7]: for any integer (message) M which is relatively prime to n,
M^phi(n) ≡ 1 (mod n). (3)
Here phi(n) is the Euler totient function giving the number of positive integers less than n
which are relatively prime to n. For prime numbers p,
phi(p) = p - 1.
In our case, we have by elementary properties of the totient function [7]:
phi(n) = phi(p) · phi(q) = (p - 1) · (q - 1) = n - (p + q) + 1. (4)
Since d is relatively prime to phi(n), it has a multiplicative inverse e in the ring of
integers modulo phi(n):
e · d ≡ 1 (mod phi(n)). (5)
We now prove that equations (1) and (2) hold (that is, that deciphering works
correctly if e and d are chosen as above). Now
D(E(M)) ≡ (E(M))^d ≡ (M^e)^d (mod n) = M^(e·d) (mod n),
E(D(M)) ≡ (D(M))^e ≡ (M^d)^e (mod n) = M^(e·d) (mod n),
and
M^(e·d) ≡ M^(k·phi(n)+1) (mod n), for some integer k.
From (3) we see that for all M such that p does not divide M,
M^(p-1) ≡ 1 (mod p),
and since (p - 1) divides phi(n),
M^(k·phi(n)+1) ≡ M (mod p).
This is trivially true when M ≡ 0 (mod p), so that this equality actually holds for
all M. Arguing similarly for q yields
M^(k·phi(n)+1) ≡ M (mod q).
Together these last two equations imply that for all M,
M^(e·d) ≡ M^(k·phi(n)+1) ≡ M (mod n).
This implies (1) and (2) for all M, 0 ≤ M < n. Therefore E and D are inverse
permutations. (We thank Rich Schroeppel for suggesting the above improved version
of the authors' previous proof.)
VII Algorithms
To show that our method is practical, we describe an efficient algorithm for each
required operation.
A How to Encrypt and Decrypt Efficiently
Computing M^e (mod n) requires at most 2 · log2(e) multiplications and 2 · log2(e)
divisions using the following procedure (decryption can be performed similarly using
d instead of e):
Step 1. Let e_k e_(k-1) ... e_1 e_0 be the binary representation of e.
Step 2. Set the variable C to 1.
Step 3. Repeat steps 3a and 3b for i = k, k-1, ..., 1, 0:
Step 3a. Set C to the remainder of C^2 when divided by n.
Step 3b. If e_i = 1, then set C to the remainder of C · M when divided by n.
Step 4. Halt. Now C is the encrypted form of M .
This procedure is called "exponentiation by repeated squaring and multiplication."
This procedure is half as good as the best; more efficient procedures are known.
Knuth [3] studies this problem in detail.
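A minimal Python sketch of Steps 1-4 above (left-to-right binary exponentiation), assuming e > 0; the final assertion uses the values of the small example in Section VIII:

def power_mod(M, e, n):
    bits = bin(e)[2:]          # Step 1: binary representation of e, most significant bit first
    C = 1                      # Step 2
    for bit in bits:           # Step 3
        C = (C * C) % n        # Step 3a
        if bit == '1':
            C = (C * M) % n    # Step 3b
    return C                   # Step 4: C is now M^e mod n

assert power_mod(920, 17, 2773) == 948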
The fact that the enciphering and deciphering are identical leads to a simple
implementation. (The whole operation can be implemented on a few special-purpose
integrated circuit chips.)
A high-speed computer can encrypt a 200-digit message M in a few seconds;
special-purpose hardware would be much faster. The encryption time per block increases
no faster than the cube of the number of digits in n.
B How to Find Large Prime Numbers
Each user must (privately) choose two large random numbers p and q to create his
own encryption and decryption keys. These numbers must be large so that it is not
computationally feasible for anyone to factor n = p · q. (Remember that n, but not p
or q, will be in the public file.) We recommend using 100-digit (decimal) prime
numbers p and q, so that n has 200 digits.
To find a 100-digit "random" prime number, generate (odd) 100-digit random
numbers until a prime number is found. By the prime number theorem [7], about
(ln 10^100)/2 = 115 numbers will be tested before a prime is found.
To test a large number b for primality we recommend the elegant "probabilistic"
algorithm due to Solovay and Strassen [12]. It picks a random number a from a
uniform distribution on {1, ..., b - 1}, and tests whether
gcd(a, b) = 1 and J(a, b) ≡ a^((b-1)/2) (mod b), (6)
where J(a, b) is the Jacobi symbol [7]. If b is prime, (6) is always true. If b is composite,
(6) will be false with probability at least 1/2. If (6) holds for 100 randomly
chosen values of a, then b is almost certainly prime; there is a (negligible) chance of
one in 2^100 that b is composite. Even if a composite were accidentally used in our
system, the receiver would probably detect this by noticing that decryption didn't
work correctly. When b is odd, a ≤ b, and gcd(a, b) = 1, the Jacobi symbol J(a, b)
has a value in {-1, 1} and can be efficiently computed by the program:
J(a, b) = if a = 1 then 1
else if a is even then J(a/2, b) · (-1)^((b^2 - 1)/8)
else J(b (mod a), a) · (-1)^((a-1)·(b-1)/4)
(The computations of J(a; b) and gcd(a; b) can be nicely combined, too.) Note that
this algorithm does not test a number for primality by trying to factor it. Other
efficient procedures for testing a large number for primality are given in [6,9,11].
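A direct transcription of the recursive program above, combined into the Solovay-Strassen test, might look as follows; this is only a sketch, assuming b is an odd number greater than 2, and the 100 rounds match the discussion above:

import random
from math import gcd

def jacobi(a, b):
    # Assumes b odd, 0 < a <= b, gcd(a, b) = 1.
    if a == 1:
        return 1
    if a % 2 == 0:
        return jacobi(a // 2, b) * (-1) ** ((b * b - 1) // 8)
    return jacobi(b % a, a) * (-1) ** ((a - 1) * (b - 1) // 4)

def is_probably_prime(b, rounds=100):
    # Condition (6) must hold for every randomly chosen a.
    for _ in range(rounds):
        a = random.randint(1, b - 1)
        if gcd(a, b) != 1:
            return False
        if jacobi(a, b) % b != pow(a, (b - 1) // 2, b):
            return False
    return True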
To gain additional protection against sophisticated factoring algorithms, p and q
should differ in length by a few digits, both (p - 1) and (q - 1) should contain large
prime factors, and gcd(p - 1, q - 1) should be small. The latter condition is easily
checked.
To find a prime number p such that (p - 1) has a large prime factor, generate a
large random prime number u, then let p be the first prime in the sequence i · u + 1,
for i = 2, 4, 6, ... . Additional security is provided by
ensuring that (u - 1) also has a large prime factor.
A high-speed computer can determine in several seconds whether a 100-digit number
is prime, and can find the first prime after a given point in a minute or two.
Another approach to finding large prime numbers is to take a number of known
factorization, add one to it, and test the result for primality. If a prime p is found
it is possible to prove that it really is prime by using the factorization of p \Gamma 1. We
omit a discussion of this since the probabilistic method is adequate.
C How to Choose d
It is very easy to choose a number d which is relatively prime to phi(n). For example,
any prime number greater than max(p, q) will do. It is important that d should be
chosen from a large enough set so that a cryptanalyst cannot find it by direct search.
D How to Compute e from d and phi(n)
To compute e, use the following variation of Euclid's algorithm for computing the
greatest common divisor of phi(n) and d. (See exercise 4.5.2.15 in [3].) Calculate
gcd(phi(n), d) by computing a series x_0, x_1, x_2, ..., where x_0 = phi(n), x_1 = d, and
x_(i+1) = x_(i-1) (mod x_i), until an x_k equal to 0 is found. Then gcd(x_0, x_1) = x_(k-1). Compute
for each x_i numbers a_i and b_i such that x_i = a_i · x_0 + b_i · x_1. If x_(k-1) = 1, then b_(k-1)
is the multiplicative inverse of x_1 (mod x_0). Since k will be less than 2 · log2(n), this
computation is very rapid.
If e turns out to be less than log2(n), start over by choosing another value of d.
This guarantees that every encrypted message (except M = 0 or M = 1) undergoes
some "wrap-around" (reduction modulo n).
VIII A Small Example
Consider the case p = 47, q = 59, n = p · q = 2773, and d = 157. Then phi(n) = 46 · 58 = 2668,
and e can be computed as follows:
x_0 = 2668, x_1 = 157, x_2 = 2668 - 16 · 157 = 156, x_3 = 157 - 156 = 1 = 17 · 157 - 2668.
Therefore the multiplicative inverse (mod 2668) of d = 157 is e = 17.
With n = 2773 we can encode two letters per block, substituting a two-digit number
for each letter: blank = 00, A = 01, B = 02, ..., Z = 26. Thus the message
ITS ALL GREEK TO ME
(Julius Caesar, I, ii, 288, paraphrased) is encoded:
0920 1900 0112 1200 0718 0505 1100 2015 0013 0500.
Since e = 10001 in binary, the first block (M = 920) is enciphered:
M^17 ≡ 948 (mod 2773).
The whole message is enciphered as:
0948 2342 1084 1444 2663 2390 0778 0774 0219 1655.
The reader can check that deciphering works: 948^157 ≡ 920 (mod 2773), etc.
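The arithmetic of this example is small enough to check mechanically; a sketch (the block values are those listed above):

p, q, d = 47, 59, 157
n, phi = p * q, (p - 1) * (q - 1)
e = 17
assert (e * d) % phi == 1
blocks = [920, 1900, 112, 1200, 718, 505, 1100, 2015, 13, 500]
cipher = [pow(M, e, n) for M in blocks]
assert cipher[0] == 948                            # first block, as above
assert [pow(C, d, n) for C in cipher] == blocks    # deciphering recovers the message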
IX Security of the Method: Cryptanalytic Ap-
proaches
Since no techniques exist to prove that an encryption scheme is secure, the only test
available is to see whether anyone can think of a way to break it. The NBS standard
was "certified" this way; seventeen man-years at IBM were spent fruitlessly trying to
break that scheme. Once a method has successfully resisted such a concerted attack it
may for practical purposes be considered secure. (Actually there is some controversy
concerning the security of the NBS method [2].)
We show in the next sections that all the obvious approaches for breaking our
system are at least as difficult as factoring n. While factoring large numbers is not
provably difficult, it is a well-known problem that has been worked on for the last three
hundred years by many famous mathematicians. Fermat (1601?-1665) and Legendre
developed factoring algorithms; some of today's more efficient algorithms
are based on the work of Legendre. As we shall see in the next section, however, no
one has yet found an algorithm which can factor a 200-digit number in a reasonable
amount of time. We conclude that our system has already been partially "certified"
by these previous efforts to find efficient factoring algorithms.
In the following sections we consider ways a cryptanalyst might try to determine
the secret decryption key from the publicly revealed encryption key. We do not
consider ways of protecting the decryption key from theft; the usual physical security
methods should suffice. (For example, the encryption device could be a separate
device which could also be used to generate the encryption and decryption keys, such
that the decryption key is never printed out (even for its owner) but only used to
decrypt messages. The device could erase the decryption key if it was tampered with.)
A Factoring n
Factoring n would enable an enemy cryptanalyst to "break" our method. The factors
of n enable him to compute OE(n) and thus d. Fortunately, factoring a number seems
to be much more difficult than determining whether it is prime or composite.
A large number of factoring algorithms exist. Knuth [3, Section 4.5.4] gives an
excellent presentation of many of them. Pollard [9] presents an algorithm which
factors a number n in time O(n^(1/4)).
The fastest factoring algorithm known to the authors is due to Richard Schroeppel
(unpublished); it can factor n in approximately
exp(sqrt(ln(n) · ln(ln(n))))
steps (here ln denotes the natural logarithm function). Table 1 gives the number of
operations needed to factor n with Schroeppel's method, and the time required if
each operation uses one microsecond, for various lengths of the number n (in decimal
digits).
Table 1
Digits    Number of operations    Time
We recommend that n be about 200 digits long. Longer or shorter lengths can
be used depending on the relative importance of encryption speed and security in
the application at hand. An 80-digit n provides moderate security against an attack
using current technology; using 200 digits provides a margin of safety against future
developments. This flexibility to choose a key-length (and thus a level of security) to
suit a particular application is a feature not found in many of the previous encryption
schemes (such as the NBS scheme).
B Computing phi(n) Without Factoring n
If a cryptanalyst could compute phi(n) then he could break the system by computing d
as the multiplicative inverse of e modulo phi(n) (using the procedure of Section VII D).
We argue that this approach is no easier than factoring n since it enables the
cryptanalyst to easily factor n using phi(n). This approach to factoring n has not
turned out to be practical.
How can n be factored using phi(n)? First, (p + q) is obtained from n and phi(n) as
p + q = n - phi(n) + 1. Then (p - q) is the square root of (p + q)^2 - 4n. Finally, q is half the
difference of (p + q) and (p - q).
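For instance, the recovery of p and q from n and phi(n) amounts to solving a quadratic; a sketch (isqrt is the integer square root):

from math import isqrt

def factor_from_phi(n, phi):
    s = n - phi + 1                 # p + q
    t = isqrt(s * s - 4 * n)        # p - q
    p, q = (s + t) // 2, (s - t) // 2
    assert p * q == n
    return p, q

assert factor_from_phi(2773, 2668) == (59, 47)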
Therefore breaking our system by computing phi(n) is no easier than breaking our
system by factoring n. (This is why n must be composite; phi(n) is trivial to compute
if n is prime.)
C Determining d Without Factoring n or Computing phi(n)
Of course, d should be chosen from a large enough set so that a direct search for it is
unfeasible.
We argue that computing d is no easier for a cryptanalyst than factoring n, since
once d is known n could be factored easily. This approach to factoring has also not
turned out to be fruitful.
A knowledge of d enables n to be factored as follows. Once a cryptanalyst knows d
he can calculate e · d - 1, which is a multiple of phi(n). Miller [6] has shown that n can
be factored using any multiple of phi(n). Therefore if n is large a cryptanalyst should
not be able to determine d any more easily than he can factor n.
A cryptanalyst may hope to find a d' which is equivalent to the d secretly held by
a user of the public-key cryptosystem. If such values d' were common then a brute-force
search could break the system. However, all such d' differ by the least common
multiple of (p - 1) and (q - 1), and finding one enables n to be factored. (In (3) and
(5), phi(n) can be replaced by lcm(p - 1, q - 1).) Finding any such d' is therefore as
difficult as factoring n.
D Computing D in Some Other Way
Although this problem of "computing e-th roots modulo n without factoring n" is
not a well-known difficult problem like factoring, we feel reasonably confident that it
is computationally intractable. It may be possible to prove that any general method
of breaking our scheme yields an efficient factoring algorithm. This would establish
that any way of breaking our scheme must be as difficult as factoring. We have not
been able to prove this conjecture, however.
Our method should be certified by having the above conjecture of intractability
withstand a concerted attempt to disprove it. The reader is challenged to find a way
to "break" our method.
Avoiding "Reblocking" When Encrypting A Signed
Message
A signed message may have to be "reblocked" for encryption since the signature n may
be larger than the encryption n (every user has his own n). This can be avoided as
follows. A threshold value h is chosen (say h = 10^199) for the public-key cryptosystem.
Every user maintains two public (e; n) pairs, one for enciphering and one for signature-
verification, where every signature n is less than h, and every enciphering n is greater
than h. Reblocking to encipher a signed message is then unnecessary; the message is
blocked according to the transmitter's signature n.
Another solution uses a technique given in [4]. Each user has a single (e; n) pair
where n is between h and 2h, where h is a threshold as above. A message is encoded
as a number less than h and enciphered as before, except that if the ciphertext is
greater than h, it is repeatedly re-enciphered until it is less than h. Similarly for
decryption the ciphertext is repeatedly deciphered to obtain a value less than h. If n
is near h re-enciphering will be infrequent. (Infinite looping is not possible, since at
worst a message is enciphered as itself.)
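A sketch of this variant, where encipher and decipher stand for the user's raw RSA operations (hypothetical callables here) and h is the threshold:

def encipher_below(M, encipher, h):
    # Re-encipher until the result drops below the threshold h.
    C = encipher(M)
    while C >= h:
        C = encipher(C)
    return C

def decipher_below(C, decipher, h):
    # Invert: decipher repeatedly until a value below h is reached.
    M = decipher(C)
    while M >= h:
        M = decipher(M)
    return M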
XI Conclusions
We have proposed a method for implementing a public-key cryptosystem whose security
rests in part on the difficulty of factoring large numbers. If the security of our
method proves to be adequate, it permits secure communications to be established
without the use of couriers to carry keys, and it also permits one to "sign" digitized
documents.
The security of this system needs to be examined in more detail. In particular,
the difficulty of factoring large numbers should be examined very closely. The reader
is urged to find a way to "break" the system. Once the method has withstood all
attacks for a sufficient length of time it may be used with a reasonable amount of
confidence.
Our encryption function is the only candidate for a "trap-door one-way permuta-
tion" known to the authors. It might be desirable to find other examples, to provide
alternative implementations should the security of our system turn out someday to be
inadequate. There are surely also many new applications to be discovered for these
functions.
Acknowledgments
We thank Martin Hellman, Richard Schroeppel, Abraham
Lempel, and Roger Needham for helpful discussions, and Wendy Glasser for her
assistance in preparing the initial manuscript. Xerox PARC provided support and
some marvelous text-editing facilities for preparing the final manuscript.
Received April 4, 1977; revised September 1, 1977.
--R
New directions in cryptography.
Exhaustive cryptanalysis of the NBS data encryption standard.
The Art of Computer Programming
Some cryptographic applications of permutation polynomials.
Secure communications over an insecure channel.
Riemann's hypothesis and tests for primality.
An Introduction to the Theory of Numbers.
An improved algorithm for computing logarithms over GF (p) and its cryptographic significance.
Theorems on factorization and primality testing.
Electronic mail.
Probabilistic algorithms.
A Fast Monte-Carlo test for primality
Federal Register
Federal Register
--TR
The art of computer programming, volume 2 (3rd ed.)
Secure communications over insecure channels
Riemann''s Hypothesis and tests for primality
--CTR
W. Kchlin, Public key encryption, ACM SIGSAM Bulletin, v.21 n.3, p.69-73, Aug. 1987
Reena Bhaindarkar , Sandhya Suman , Reema Raghava , T. J. Mathew, Design of RSRM protocol and analysis of the computational complexity of RLR algorithm for communication over the internet, Proceedings of the 15th international conference on Computer communication, p.1050-1060, August 12-14, 2002, Mumbai, Maharashtra, India
G. C. Meletiou , D. K. Tasoulis , M. N. Vrahatis, Transformations of two cryptographic problems in terms of matrices, ACM SIGSAM Bulletin, v.39 n.4, December 2005
Mihir Bellare , Silvio Micali, How to sign given any trapdoor permutation, Journal of the ACM (JACM), v.39 n.1, p.214-233, Jan. 1992
Ray Bird , Inder Gopal , Amir Herzberg , Phil Janson , Shay Kutten , Refik Molva , Moti Yung, The KryptoKnight family of light-weight protocols for authentication and key distribution, IEEE/ACM Transactions on Networking (TON), v.3 n.1, p.31-41, Feb. 1995
Amitanand S. Aiyer , Lorenzo Alvisi , Allen Clement , Mike Dahlin , Jean-Philippe Martin , Carl Porth, BAR fault tolerance for cooperative services, ACM SIGOPS Operating Systems Review, v.39 n.5, December 2005 | electronic funds transfer;privacy;digital signatures;public-key cryptosystems;cryptography;electronic mail;factorization;authentication;prime number;security;message-passing |
358684 | Efficient Address Generation for Affine Subscripts in Data-Parallel Programs. | Address generation for compiling programs, written in HPF, to executable SPMD code is an important and necessary phase in a parallelizing compiler. This paper presents an efficient compilation technique to generate the local memory access sequences for block-cyclically distributed array references with affine subscripts in data-parallel programs. For the memory accesses of an array reference with affine subscript within a two-nested loop, there exist repetitive patterns both at the outer and inner loops. We use tables to record the memory accesses of repetitive patterns. According to these tables, a new start-computation algorithm is proposed to compute the starting elements on a processor for each outer loop iteration. The complexities of the table constructions are O(k+s2), where k is the distribution block size and s2 is the access stride for the inner loop. After tables are constructed, generating each starting element for each outer loop iteration can run in O(1) time. Moreover, we also show that the repetitive iterations for outer loop are Pk/gcd(Pk,s1), where P is the number of processors and s1 is the access stride for the outer loop. Therefore, the total complexity to generate the local memory access sequences for a block-cyclically distributed array with affine subscript in a two-nested loop is O(Pk/gcd(Pk,s1)+k+s2). | Introduction
Generally speaking, data-parallel languages support
three regular data distributions: block, cyclic, and
block-cyclic data distributions. The block distribution
is to distribute contiguous array elements evenly
onto processors. The cyclic distribution is to distribute
each array element onto processors one at a
time and in a round-robin fashion. The distribution
that blocks of size k are distributed onto processors
in a round-robin fashion is the block-cyclic
distribution and is denoted as cyclic(k). The block-cyclic
distribution is known to be the most general
data distribution. The block and cyclic distributions
can be represented by the block-cyclic distribution as
cyclic(⌈NA/P⌉) and cyclic(1), respectively, where NA is
the number of array elements and P is the number
of processors.
The address generation problems for compiling
array references with block or cyclic distributions have
been studied thoroughly [5, 12, 13, 21]. The more
general problems for compiling array references with
block-cyclic distribution also have been studied extensively
[3, 7, 9, 11, 11, 14, 16, 19, 20, 22]. A finite
state machine (FSM) approach is proposed to traverse
the local memory access sequence of each processor
[3]. The method is a table-based approach.
The table construction needs to solve k linear Diophantine
equations and incurs a sorting operation.
The work improving the FSM approach [3] is proposed
in [10, 11, 20]. Efficient FSM table generation
is proposed. The improved work enumerates the local
memory access sequences by viewing the accessed
elements an integer lattice. The sorting step in [3]
is avoided in the improved work. In [7], the authors
use the virtual processors to generate communication
sets for each processor. From different viewpoint
of a block-cyclic distribution, the virtual processor
approach actually contains two approaches, one is
termed the virtual block approach and the other is
termed the virtual cyclic approach. The virtual block
approach views a block-cyclic distribution as a block
distribution on a set of virtual processors, which are
then cyclically mapped to processors. On the con-
trary, the virtual cyclic approach views a block-cyclic
distribution as a cyclic distribution on a set of virtual
processors, which are then block-wise mapped
to processors. The other approaches are similar to
either FSM approach or virtual processor approach
except some modifications. However, most of them
consider the simple array subscript. That is, the array
subscripts contain only one induction variable.
Recently, several efforts on compiling array references
with affine array subscripts are proposed [1,
10, 11, 15, 17, 22]. Affine array subscript means the
array subscript is a linear combination of multiple
induction variables (MIVs). In [1], the authors use
a linear algebra framework to generate communication
sets for affine array subscripts. Complex loop
bounds and local array subscripts of the generated
code will incur significant overhead. A table-based
approach is proposed in [22]. The authors classify
all blocks into classes and use a class table to record
the memory accesses of the first repetitive pattern.
By using the class table, they derived the communication
sets for each processor. Both [1] and [22] are
addressing the compilation of array references with
affine subscripts within a multi-nested loop. How-
ever, the proposed methods are not efficient enough
for dealing with some special case.
For compiling array references with affine sub-
scripts, some researchers pay their attention on the
array reference enclosed within a two-nested loop
to find a better result [10, 11, 17]. Based on FSM
approach [3], Kennedy et al. proposed another approach
to solving the compilation of array references
with affine subscripts within a two-nested loop [10,
11]. For the memory accesses of an array reference
with affine subscript within a two-nested loop, there
exist repetitive patterns both at the outer and inner
loops. Moreover, to fix each iteration of the
outer loop, the affine subscript is reduced to a simple
subscript. Therefore, for each iteration in the outer
repetitive pattern, they use FSM approach to generate
the local memory access sequences for the inner
loop. To use FSM approach at the inner loop, start-
computations to decide the initial state of FSM for
each iteration of the outer loop are necessary. They
proposed an O(Pk) algorithm for a start-computation,
where P is the number of processors and k is the
distribution block size. For the outer loop, they
found that the repetitive iterations are Pk iterations.
Hence, the total complexity to generate the local
memory access sequence for an array reference with
affine subscript within a two-nested loop is O(P^2 k^2).
Ramanujam et al. proposed an improved work to
find the local starting elements on each processor
[17]. They first find a factor of basis vectors to jump
from a global start to a processor's space and then
traverse the lattice until hitting the starting element.
DO i1 = 0, n1
DO i2 = 0, n2
... A(s1*i1 + s2*i2 + o) ...
ENDDO
ENDDO
Fig. 1: HPF-like program model considered in the
paper.
Since a traverse step is incurred, the complexity of
their start-computation algorithm is O(k). Thus the
total complexity of Ramanujam's algorithm turns out to be O(Pk^2).
In this paper, we also propose another new start-
computation algorithm. A preprocessing step is required
before we compute the starting elements. The
complexity of the preprocessing step is O(k + s2),
where s2 is the access stride for the inner loop. After
preprocessing is done, the time complexity to generate
each starting element on a processor is O(1).
In addition, we also discover that the outer loop
repetitive iterations are Pk/gcd(Pk, s1) instead of
Pk, where s1 is the access stride for the outer loop.
Therefore, the total complexity of our proposed approach
is O(Pk/gcd(Pk, s1) + k + s2), which is asymptotically
O(Pk). The proposed approach is not only
correct but also efficient.
The paper is organized as follows. Section 2 formulates
the problem and describes the traditionally
techniques to generate local memory access sequences
for compiling the array references with affine subscripts
within a two-nested loop. An efficient approach
to finding the starting elements from a global
start is proposed in Section 3. The performance
analyses and comparisons with the related work are
demonstrated in Section 4. Section 5 concludes the
paper.
Address Generation for Affine
Subscripts
2.1 Problem Formulation
Specifically, Fig. 1 illustrates the program model considered
in the paper. Array A is distributed onto
processors with cyclic(k) distribution. The array
reference contains two induction variables i 1 and i 2 .
The access strides of the array reference with respect
to i 1 and i 2 are s 1 and s 2 , respectively. The access
offset of the array reference is o. Fig. 2 is an example
of the program model shown in Fig. 1, where P = 4, k = 4, s1 = 37, s2 = 2, and o = 0.
Fig. 2(a) is the
layout of array elements on processors. The colored
elements are the array elements accessed by the array
reference in the two-nested loop. Fig. 2(b) shows
the global addresses of array elements accessed by
every processor. However, data distribution transfers
the global addressing space to processors' local
spaces. Therefore, what we care is to generate the
local addresses on a processor for the accessed ele-
ments. Thus the MIV address generation problem
is to generate the local addresses of array elements
accessed by each processor, just like Fig. 2(c) shows.
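For reference, the mapping from a global array index to a (processor, local address) pair under a cyclic(k) distribution, and a brute-force enumeration of one processor's local access sequence, can be sketched as follows; this is the naive enumeration that the table-driven methods discussed below avoid, and the names are chosen only for illustration:

def owner_and_local(g, P, k):
    # Block-cyclic layout: owning processor and local address of global index g.
    block = g // k
    return block % P, (block // P) * k + g % k

def local_accesses(q, P, k, s1, s2, o, n1, n2):
    # Local addresses on processor q touched by A(s1*i1 + s2*i2 + o).
    seq = []
    for i1 in range(n1 + 1):
        for i2 in range(n2 + 1):
            proc, local = owner_and_local(s1 * i1 + s2 * i2 + o, P, k)
            if proc == q:
                seq.append(local)
    return seq

With the Fig. 2 parameters (P = 4, k = 4, s1 = 37, s2 = 2, o = 0), processor p0 yields the sequence 0, 2, 4, 6, ... for i1 = 0, as in Fig. 2(c).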
2.2
Table
-Based Address Generation
for Affine Subscripts
For the case of the array reference containing single
induction variable (SIV), as well-known, the memory
accesses on processors have a repetitive pattern. In
[3], a finite state machine (FSM) is built to orderly
iterate the local memory access sequence on a pro-
cessor. Similarly, for the array reference containing
multiple induction variables (MIVs), we can extend
the technique used for SIV to orderly enumerate the
local memory access sequences on a processor.
Consider the program model shown in Fig. 1.
For each outer iteration, the MIV address generation
problem can be reduced to an SIV problem. Thus we
can utilize the FSM approach to generate the local
memory access sequence for an SIV problem. Generating
the local memory access sequence for an MIV
problem can, therefore, be easily solved by enumerating
the sequence for each outer loop iteration until
reaching the outer loop bound. For example, consider
the example illustrated in Fig. 2. Suppose the
outer loop iteration i1 = 0. The two-nested loop is
reduced to a single nested loop and the array reference
turns out to be A(2i2). Thus a finite state
machine (FSM) can be built to enumerate the local
memory access sequences for the SIV problem. Fig. 3
illustrates the FSM to generate the local memory access
sequences when the array reference contains a
single induction variable and the access stride is 2.
Fig. 3(a) shows the FSM table, where Next records
the next transition state and ΔM records the local
memory gaps of successive array elements from current
state transmitting to the next state. Fig. 3(b)
is the transition diagram of the FSM.
The initial state of the FSM depends on the position
of the starting array element in a block on the
processor. For instance, when i1 = 0, the starting
element on processor p0 is 0 and its position in a
Fig. 2: An MIV address generation example. (a) Layout of array elements on processors. (b) Global
addresses of array elements accessed by every processor. (c) Local addresses of array elements accessed
by every processor.
Fig. 3: A finite state machine (FSM) to generate the
local memory access sequences for an SIV problem
with access stride 2. (a) The FSM table, with columns State, Next, and ΔM. (b) The transition diagram.
block is 0, thus the initial state of the FSM for the
case when i1 = 0 is at state 0. In addition to the
initial state of the FSM, we also need to know the
local address of the starting element since FSM only
records the local memory gaps between successive array
elements allocated on the processor. FSM has no
enough information to show where to start in terms
of local address. In other words, although we have
the FSM and its initial state, we still do not know
the starting local address in this case. For exam-
ple, besides the initial state of the FSM being state
0 that we should know, we have to figure out the local
address of the starting element. Obviously, when i1 = 0,
the local address of the starting element 0 on
processor p0 is 0. Therefore, when i1 = 0, the local
memory access sequence for processor p0 is 0, 2, 4,
and so on.
Likewise, when i1 = 1, the two-nested loop is reduced
to a single nested loop and the array reference is
simplified to A(2i2 + 37). The finite state machine
built for the case of i1 = 0 can still be reused since
the memory access stride is still the same (2, for this
example). When i1 = 1, the starting element on
processor p0 is 49 and its position in a block is 1.
Thus the initial state of the FSM for the case of i1 = 1
is at state 1. However, where to start in terms
of local address is the key point now. When i1 = 1,
the starting element on processor p0 is 49 and
its local address on processor p0 is 13. Therefore,
the local memory access sequence for processor p0
is 13, 15, and so on. Similarly, it is done likewise
when the outer loop iteration i1 = 2, 3, and so on.
Accordingly, we can obtain the local memory access
sequences on processors as Fig. 2(c) shows.
Actually, there is no need to iterate all of the
outer loop iterations from 0 to n1. We have found out
that iterating Pk/gcd(Pk, s1) outer loop iterations
is enough because there is a repetitive pattern
in the outer loop. Having this discovery can save a
lot of time due to the avoidance of recomputation for
repetitive patterns. The following theorem demonstrates
that the repetitive period of the outer loop
is Pk/gcd(Pk, s1) iterations. Since the space is limited,
the proof of the theorem is omitted in the paper.
One can refer to [18] for more details.
Theorem 1 For the program model shown in Fig. 1,
the memory accesses of the array reference have a
repetitive pattern and the repetitive period with respect
to the outer loop iteration is Pk/gcd(Pk, s1) iterations.
According to the above description, evidently, determining
the local address of the starting element
for each outer loop iteration is the primary step to
solve the MIV address generation problem. The problem
to find the local address of starting element for
each outer loop iteration will be described in the next
section. A new approach to generating the local addresses
of the starting elements will be presented in
the next section as well.
Generating Starting Elements
The findings of the starting elements for outer loop
iterations are important for solving the MIV address
generation problem. It is obvious that for a given
outer loop iteration the memory accesses just depend
on the inner loop access stride s 2 . Therefore, in this
section, we use s to indicate the inner loop access
stride s 2 except otherwise notified. The method to
find the starting elements in case of s ≤ k can be
found in [10, 17]. Both of them are O(1) in com-
plexity. However, their methods to find the starting
elements in case of s > k are O(Pk) and O(k), re-
spectively. We propose a new approach to find the
starting elements in case of s > k and the time complexity
of the algorithm is O(1). The problem and
its solution are described as follows.
3.1 Problem Description
We have given an overall description of the MIV address
generation problem in Section 2.2. Finding the
starting element on a processor from a given outer
loop iteration plays an important role in dealing with
the MIV address generation problem. The following
we formally describe the induced problem. Let the
accessed element for some fixed outer loop iteration
Fig. 4: Starting elements on every processor for the
example shown in Fig. 2. (a). Global addresses of
starting elements on every processor. (b). Local addresses
of starting elements on every processor.
be a global start and G be the local address of the
global start. Specifically, given a global start G, the
processor p on which G is allocated and the processor
q which we would like to find its starting element,
the problem is to figure out S q , the local address of
the starting element, for processor q. For example,
consider the example shown in Fig. 2(a). The gray
colored elements are the elements accessed by the
array reference, in which the deep-colored shaded elements
are the global starts corresponding to every
outer loop iterations and the light-colored shaded elements
on each processor are the starting elements
corresponding to every global start. Suppose a given
global start is 37, whose local address is 9 on processor
p1. The starting elements on processors p0, p2,
and p3 are 49, 41, and 45, respectively, in terms of
global addresses. The problem is to figure out the
local addresses of these starting elements. That is,
13, 9, and 9, respectively. The starting elements for
every outer loop iteration are shown in Fig. 4. The
global and local addresses of the starting elements
on every processor are listed in Figs. 4(a) and 4(b),
respectively. The goal of the induced problem is to
obtain a table containing the local addresses of the
starting elements on some required processor for every
global start, as Fig. 4(b) shows for that required
processor.
3.2 Preprocessing
Given a global start G, we propose a new approach
to find the local address of the starting element S q
for processor q in case of s > k. The proposed approach
is a table-based approach. In our approach,
it is necessary to pre-compute a few tables in order
to evaluate the starting elements for a given global
start. In this section, we only describe the character-
Fig. 5: A one-level mapping example to illustrate
the ideas of the tables used in the paper, where it
assumes that array elements are distributed over 4
processors with cyclic(4) distribution and the access
stride is 5.
istics of these tables and how it works in the proposed
approach. The complexities in time and space will
be analyzed in Section 4. For the sake of space limi-
tation, the constructions of the tables are omitted in
the paper. For further details, please refer to [18].
3.2.1 C2P, P2C, and Offset Tables
As well-known, the accessed elements on blocks have
a repetitive pattern. By [22], all blocks can be classified
into s/gcd(s, k) classes according to the positions of
the accessed elements on a block. Note that blocks
of the same class have the same format. All blocks
can be numbered in terms of class according to the
rule: b mod (s/gcd(s, k)), where b is the block number
of that block. A repetitive pattern contains blocks
from class 0 to class s/gcd(s, k) - 1, which is termed
a class cycle in [22]. In addition, since s > k, there is
at most one accessed element on a block. Therefore,
we can use a table to record the position of the only
accessed element for every class. Different from [22],
we assume that the accessed element on the block of
class 0 is at the first position, that is, at position 0.
The blocks with no accessed element are recorded by
"-". With the table we can easily and efficiently get
the position of an accessed element on a block if the
class number of the block is given. Therefore, we can
easily deduce S q from G since all blocks have been
classified into classes. We denote the table recording
the position of the accessed element on every class
as C2P table.
As an example, let us suppose that array elements
are distributed over 4 processors with cyclic(4) distribution
and the access stride is 5. The layout of array
elements on processors is illustrated in Fig. 5. In
this figure, the accessed elements are those elements
with a white text on a black background. Obviously,
all blocks are classified into 5 classes. Each class is
colored by the gradations of gray color. The class
number of a block is labeled at the bottom of the
block. The blocks bounded by a dashed line indicate
a repetitive pattern. The accessed elements on classes
0, 1, 2, and 3 are at positions 0, 1, 2, and 3, respectively.
Therefore, the values of C2P(0), C2P(1), C2P(2), and
C2P(3) are 0, 1, 2, and 3, respectively. Moreover, there
is no accessed element in class 4. So, C2P(4)="-
". Thus we can obtain the C2P table. Fig. 7(a)
illustrates the C2P table for the example shown in
Fig. 5.
We can get the position of an accessed element
on a block according to the class number of a block
by using C2P table. In contrast to C2P table, a
P2C table is to record the class number according
to the position of accessed element on a block. Thus
we can obtain the class number of a block according
to the position of the accessed element on the block.
Generally speaking, we can obtain the class number
of a block according to the position of the accesses
element on the block by using C2P table. However,
it requires a search operation and, in some cases, we
can not recognize the class number of a block according
to the C2P table. For instance, when the number
of classes is smaller than the block size, it is possible
that, for some position, there is no class that its
accessed element is at that position. It would cause
confusion to recognize the class number by that posi-
tion. Therefore, P2C table is necessary. P2C table
can be constructed by C2P table. For example, considering
C2P table shown in Fig. 7(a), one can scan
the table to obtain P2C table. P2C table is illustrated
in Fig. 7(b). Note that, for the above case
that the number of classes is smaller than the block
size, for the position that has no class number to
correspond to, we assume the class number of that
position to be the class number of the previous po-
sition. For example, suppose that the distribution
block size is 4 and the access stride is 6. All blocks
can be classified into 3 classes. The accessed elements
on class 0, 1, and 2 are at the positions of
0, 2, and -, respectively. As a result, positions at 0
and 2 are corresponding to the classes 0 and 1, re-
spectively. We have P2C=(0, -, 1, -). Obviously,
positions at 1 and 3 have no suitable class numbers
to correspond to. With the above assumption, the
class number corresponded by position 1 is 0, the
same as that of the previous position. Similarly, the
class number corresponded by position 3 is 1. Thus,
we have P2C=(0, 0, 1, 1). Basically, C2P and P2C
tables are in some sense like a hash table.
The reason to make the above assumption for
P2C table construction is to solve the offset prob-
lem. The offset problem can be solved by the assumption
in conjunction with another table Offset.
Generally speaking, a global start G can be at any
position in a block. However, as constructing C2P
table, we have assumed that the accessed element on
the block of class 0 should be at position 0, Further-
more, as we construct P2C table, for the position
that has no class number to correspond to, we assign
the class number of the previous value to the current
value. Nevertheless, according to C2P table,
the class number has its real position to correspond
to. Consequently, there is a difference between the
real position and the assumed position if we make
such an assumption. In order to make use of C2P
table in every case, we use another table to record
the difference in order to make up the shortcomings
of C2P table. The table is denoted asOffset table
in the paper.
It has been discussed that when the number of
classes is larger than the block size, each position has
its suitable class number to correspond to. In such a
case, Offset table is of no use. Fig. 5 is an example
under such a condition and the Offset table is
shown in Fig. 7(c). On the other hand, if the number
of classes is smaller than the block size, Follow
the example used in the explanation of P2C table, in
which it assumes that the distribution block size is 4
and the access stride is 6. The C2P and P2C tables
are (0, 2, -) and (0, 0, 1, 1), respectively. Since position
1 has no suitable class number to correspond to,
we assign the class number corresponded by position
0 to position 1. Although, position 1 correspond to
class 0, the real position of accessed element on the
block of class 0 is at position 0 according to C2P ta-
ble. Thus there is a 1 difference between the assumed
value and the real value. As a result, Offset(1)=1.
Similarly, Offset(3)=1. There is no problem on
positions 0 and 2 since they have their suitable class
numbers to correspond to. Consequently, Offset
table is (0, 1, 0, 1).
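The constructions of the tables are deferred to [18]; a possible construction, following only the definitions given in the text (None stands for "-"), reproduces the values quoted above, e.g. C2P = (0, 1, 2, 3, -) for k = 4, s = 5 and C2P = (0, 2, -), P2C = (0, 0, 1, 1), Offset = (0, 1, 0, 1) for k = 4, s = 6:

from math import gcd

def build_c2p(k, s):
    C = s // gcd(s, k)
    c2p = []
    for c in range(C):
        m = ((c * k + s - 1) // s) * s      # first multiple of s inside block c
        c2p.append(m - c * k if m < (c + 1) * k else None)
    return c2p

def build_p2c_offset(c2p, k):
    p2c, offset, prev = [], [], 0
    for pos in range(k):
        hits = [c for c, v in enumerate(c2p) if v == pos]
        prev = hits[0] if hits else prev     # reuse the previous class if none fits
        p2c.append(prev)
        offset.append(pos - c2p[prev])       # difference to that class's real position
    return p2c, offset

assert build_c2p(4, 5) == [0, 1, 2, 3, None]
assert build_c2p(4, 6) == [0, 2, None]
assert build_p2c_offset(build_c2p(4, 6), 4) == ([0, 0, 1, 1], [0, 1, 0, 1])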
3.2.2 NextActive and Jump Tables
As previously described, a block contains at most
one accessed element when the access stride is larger
than the block size. Of course, a block may contain
no accessed element at all in such a case. Thus, we
name a block that has elements to be accessed as an
active block; otherwise, it is termed an empty block.
The tables NextAct and Jump that we would like
to introduced are used for jumping over the empty
Fig. 6: (a) A one-group ordered sequence. (b) A
multiple-groups ordered sequence.
blocks to an active block. One important observation
here is that, from processor's viewpoint, blocks
on processors have a repetitive pattern in terms of
classes. To explain concretely, let us take a look at
the example shown in Fig. 5. The blocks on processor
p0 are in classes 0, 4, 3, 2, 1, and then repeat
again from class 0. A similar situation also happens
on processors p1, p2, and p3. The sequence of class
numbers on p1 is 1, 0, 4, 3, 2, that for p2 is 2, 1,
0, 4, 3, and that for p3 is 3, 2, 1, 0, 4. It is interesting
that the sequence of class numbers on each processor
is the same except for the initial class number on each
processor. That is, the sequence of class numbers on
each processor can be viewed as the sequence 0, 4,
3, 2, 1, and the initial class numbers for p0, p1, p2,
and p3 are 0, 1, 2, and 3, respectively. We use the
notation (0; 4; 3; 2; 1) to denote the ordered sequence.
Clearly, all class numbers have appeared in the ordered
sequence. Thus, we say that there is only one
group in the ordered sequence. Fig. 6(a) illustrates
the one group ordered sequence for this example.
It should be addressed that it is possible that
the sequence of class numbers on each processor may
be different and there may be more than one group
in an ordered sequence. However, groups are mutually
disjoint and a processor can belong to one and
only one group. We give an example to illustrate
the phenomenon. Suppose that array elements are
distributed over 2 processors with cyclic(3) distribution
and the access stride is 12. There are 4 classes
for this example. The sequence of class numbers on
p0 is 0, 2, and that on p1 is 1, 3. The ordered sequence
can be represented as (0; 2)(1; 3). Obviously,
the ordered sequence contains two groups. One is
(0; 2) and another is (1; 3). (0; 2) and (1; are mutually
disjoint. Processor p0 belongs to the group (0, 2),
and p1 belongs to the group (1, 3). Fig. 6(b)
illustrates the multiple groups ordered sequence for
Fig. 7: Tables used for finding the starting elements: (a) C2P, (b) P2C, (c) Offset, (d) NextAct, (e) Jump.
this example.
It is important to have such a discovery since we
can obtain the class number of the next block on a
processor from current block if the class number of
the current block is known. Based on the discovery,
we use one table to record the class number of the
next active block from current block on a processor
and another to record how many empty blocks we
need to skip over. They are named NextAct and
Jump, respectively. The constructions of the two
tables are based on the ordered sequence and C2P
table. If current block is not an empty block, we need
not to jump any block. Thus, the value in NextAct
table for that block is recorded by its class number
and that in Jump table is recorded by 0. Otherwise,
we can traverse the ordered sequence to find an active
block. Then the value in NextAct table for
that block is recorded by the class number of the active
block and that in Jump table is recorded by the
number of blocks that we have traversed. If we can
not find an active block, both the values in Nex-
tAct and Jump tables are recorded by "-". Such
as the example where array elements are distributed
over 4 processors with cyclic(4) distribution and the
access stride is 8, the NextAct and Jump tables are
(0, -) and (0, -), respectively. Although a processor
can belong to one and only one group, NextAct
table is suitable for all processors since the construction
of the table is based on the class number, not
on group. The NextAct and Jump tables for the
example shown in Fig. 5 are illustrated in Fig. 7(d)
and (e), respectively.
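A possible construction of the NextAct and Jump tables, again following the description in the text rather than [18]: it walks the ordered sequence (adding P to the class number modulo C) for at most one group length. With the Fig. 5 parameters it yields NextAct = (0, 1, 2, 3, 3) and Jump = (0, 0, 0, 0, 1), and for k = 4, s = 8, P = 4 it yields the (0, -), (0, -) pair mentioned above:

from math import gcd

def build_nextact_jump(c2p, P):
    C = len(c2p)
    group_len = C // gcd(C, P)         # period of the ordered sequence on one processor
    nextact, jump = [], []
    for c in range(C):
        found = None
        for j in range(group_len):
            if c2p[(c + j * P) % C] is not None:
                found = j
                break
        nextact.append((c + found * P) % C if found is not None else None)
        jump.append(found)
    return nextact, jump

assert build_nextact_jump([0, 1, 2, 3, None], 4) == ([0, 1, 2, 3, 3], [0, 0, 0, 0, 1])
assert build_nextact_jump([0, None], 4) == ([0, None], [0, None])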
3.3 The Algorithm
With these tables we can evaluate the starting element
from a given global start in O(1) time complex-
ity. Fig. 8 illustrates the algorithm to evaluate the
starting element from a given global start. We term
the algorithm Start Computation algorithm. The
basic concept of the Start Computation algorithm
Algorithm: Start Computation algorithm for the
case of s > k.
Input: G, a global start,
p, the processor where the global start is
allocated,
q, the processor that we would like to find
its starting element, where q ≠ p,
k, the distribution block size,
P , the number of processors,
s, the access stride,
C, the number of classes, where C = s/gcd(s, k),
C2P, P2C, Offset, NextAct, and
Jump tables.
Output: S_q, the starting element on processor q.
Steps:
1. pos_g = G mod k
2. dist = (q - p + P) mod P
3. c = (P2C(pos_g) + dist) mod C
4. pos_s = C2P(c)
5. if pos_s = "-" then
6. if NextAct(c) = "-" then
7. return no starting element on q
8. else pos_s = C2P(NextAct(c)) + Jump(c) · k
9. endif
10. endif
11. offset = pos_s - pos_g + Offset(pos_g)
12. if q < p then
13. offset = offset + k
14. endif
15. S_q = G + offset
16. return S_q
Fig. 8: Start Computation algorithm for the case of s > k.
is that, from the viewpoint of the global start, we
try to figure out the distance between the starting
element and the global start. With the distance we
can, therefore, get the local address of the starting
element by adding the distance to the local address
of the global start. The details of the algorithm is
described as follows.
Given G, the local address of a global start, and
where G is allocated, Step 1 is to calculate the
position on a block for the global start. The obtained
value is stored in pos g . Step 2 is to measure the
distance between processors p and q, which is then
stored in dist. In Step 3, P2C(pos g ) can obtain the
class number of the block which the global start is
on. Thus, Step 3 can get the class number of the
block on processor q, which may contain the starting
element. The class number obtained in Step 3 is
represented by c. According to C2P table, C2P(c)
can get the position of the accessed element on the
block of class c, if ever. Therefore, Step 4 can obtain
the position on a block for the starting element on
processor q. The obtained position is represented by
pos s . If pos s does not equal "-", it means that there
is an accessed element on the block. Of course, the
accessed element is the starting element. We can go
direct to Step 11 to evaluate the distance between
the starting element and the global start. If q ! p,
we still need to add a size of a block to the distance
since the starting element must be at one more course
than the global start. That is what Steps 12-14 has
done. As a result, the local address of the starting
element can be obtained, just as Step 15 shows.
On the other hand, if pos_s = "-", it means that
there is no accessed element on the block. We can
use NextAct table to obtain the class number of the
next active block. If NextAct(c) = "-", it implies
that there exists no active block on the processor.
Certainly, there is no starting element on the pro-
cessor. Otherwise, which means that we can find an
active block on the processor, we can get the number
of blocks required to jump from the current block
to the next active block and the position of the accessed
element on the active block from Jump and
NextAct tables, respectively. Since pos s in Step
4 represents the position of the starting element on
the block with the same course as the global start,
hence, the distance caused by the number of blocks
required to jump to an active block should be added
to pos s in such a case. Thus, we have Step 8. Steps
after 11 are the same as explained in the previous
paragraph.
Let us take Fig. 9 as an example, where it assumes
that P = 4, k = 4, and the inner access stride is 5.
Given a global start 37, whose local address is 9 on
processor p1, we first find the starting element for
processor p2. The input of the Start Computation
algorithm is G = 9, p = 1, q = 2, k = 4, P = 4, s = 5, and C = 5. The
tables used for the example are the same as shown
in Fig. 7. Following the Steps from 1 to 4 in the
algorithm we can obtain that pos_g = 1, dist = 1, c = 2,
and pos_s = 2. Since pos_s does not equal "-",
we go directly to Step 11 and we obtain that offset =
1. Since the condition in Step 12 does not hold, we go
directly to Step 15 and we have S_2 = 10,
which corresponds to the array element 42 in terms
of global address.
On the same input except q = 0, we take the
finding of the starting element on processor p0
as
Fig. 9: Layout of array elements on processors for the
case of s2 > k, another MIV example, where P = 4, k = 4, and s2 = 5.
another example. After executing Step 4, we
have pos_g = 1, dist = 3, c = 4, and pos_s = "-".
Since pos_s equals "-", which means that the block
contains no accessed element, we go to Step 6. According
to the NextAct and Jump tables, there is an
active block at one block after the empty block on
processor p0. By Step 8, we have pos_s = 7. After
Step 11, we have offset = 6. As q < p, offset
needs to add 4, the size of a block. It turns out that
S_0 = 19, which corresponds to
the array element 67 in terms of global address.
Evidently, the time complexity of Start Computation
algorithm is O(1). The complexity analyses
of the tables used in the algorithm and the performance
comparisons against the existing methods will
be discussed in Section 4.
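A Python transcription of the algorithm, run on the example above with the Fig. 7 tables written out by hand (None stands for "-"); the exact placement of the Offset correction is an assumption consistent with the examples, whose Offset entries are all zero:

def start_computation(G, p, q, k, P, C, c2p, p2c, off, nextact, jump):
    pos_g = G % k                             # Step 1
    dist = (q - p) % P                        # Step 2
    c = (p2c[pos_g] + dist) % C               # Step 3
    pos_s = c2p[c]                            # Step 4
    if pos_s is None:                         # Steps 5-10
        if nextact[c] is None:
            return None                       # no starting element on q
        pos_s = c2p[nextact[c]] + jump[c] * k
    offset = pos_s - pos_g + off[pos_g]       # Step 11 (use of Offset assumed)
    if q < p:                                 # Steps 12-14
        offset += k
    return G + offset                         # Steps 15-16

c2p, p2c, off = [0, 1, 2, 3, None], [0, 1, 2, 3], [0, 0, 0, 0]
nextact, jump = [0, 1, 2, 3, 3], [0, 0, 0, 0, 1]
assert start_computation(9, 1, 2, 4, 4, 5, c2p, p2c, off, nextact, jump) == 10   # element 42
assert start_computation(9, 1, 0, 4, 4, 5, c2p, p2c, off, nextact, jump) == 19   # element 67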
Performance Analyses and
Comparisons
Performance analyses of the tables used in Start Computation
algorithm are shown in Table 1. Performance
comparisons of our method against the existing
methods are shown in Table 2. For the sake of
space limitation, please refer to [18] for more details.
Conclusions
In this paper, we have presented an efficient approach
to the evaluation of the starting element for some
processor from a given global start, which is a key
step to solve the MIV address generation problem
Table 1: Performance Analyses.
Table      Time    Space
Offset     O(k)    k
NextAct    O(C)    C
Table 2: Performance Comparisons.
                              [10]'s       [17]'s      Ours
s ≤ k: start comp.            O(1)         O(1)        O(1)
s > k: preprocess             O(1)         O(1)        O(s2 + k)
s > k: start comp.            O(Pk)        O(k)        O(1)
outer loop repetitive iter.   Pk           Pk          Pk/gcd(Pk, s1)
Total                         O(P^2 k^2)   O(Pk^2)     O(Pk/gcd(Pk, s1) + k + s2)
in data-parallel programs, assuming array is block-
cyclically distributed and its subscript is affine. The
approach is a table-based approach. The constructions
of these tables require O(s2 + k) complexity,
where s2 is the access stride of the inner loop.
With these tables, the Start Computation algorithm
can run in O(1) time. In addition, we have discovered
that there exists a repetitive pattern for every
Pk/gcd(Pk, s1) outer loop iterations. Therefore, the
MIV address generation problem can be solved in
O(Pk/gcd(Pk, s1) + k + s2) time, where s1 is the
access stride of the outer loop.
In the future, we would like to apply the address
generation approach to evaluate communication sets.
Furthermore, the address generation and communication
sets evaluation for general affine subscripts are
also under investigation.
--R
A linear algebra framework for static HPF code distribution.
Programming in Vienna Fortran.
Generating local addresses and communication sets for data parallel programs.
Automatic Parallelization for Distributed-Memory Multiprocessing Systems
Concrete Mathematics.
On compiling array expressions for efficient execution on distributed-memory machines
High Performance Fortran Forum.
Efficient address generation for block-cyclic distri- butions
A linear-time algorithm for computing the memory access sequence in data-parallel programs
Compiling global name-space parallel loops for distributed execu- tion
Local iteration set computation for block-cyclic distributions
Computing the local iteration set of a block-cyclically distributed reference with affine subscripts
Optimizing the representation of local iteration sets and access sequences for block-cyclic distributions
Code generation for complex subscripts in data-parallel programs
Generating communication for array state- ments: Design
Efficient computation of address sequences in data parallel programs using closed forms for basis vectors.
An Optimizing Fortran D Compiler for MIMD Distributed-Memory Machines
Compiling array references with affine functions for data-parallel programs
--TR
Concrete mathematics: a foundation for computer science
Compile-time generation of regular communications patterns
Vienna FortranMYAMPERSANDmdash;a Fortran language extension for distributed memory multiprocessors
The high performance Fortran handbook
Generating communication for array statements
Compilation techniques for block-cyclic distributions
An optimizing Fortran D compiler for MIMD distributed-memory machines
Generating local addresses and communication sets for data-parallel programs
A linear-time algorithm for computing the memory access sequence in data-parallel programs
Efficient address generation for block-cyclic distributions
Compiling array expressions for efficient execution on distributed-memory machines
Efficient computation of address sequences in data parallel programs using closed forms for basis vectors
An Empirical Study of Fortran Programs for Parallelizing Compilers
Compiling Global Name-Space Parallel Loops for Distributed Execution
Code Generation for Complex Subscripts in Data-Parallel Programs | affine subscripts;data-parallel languages;data distribution;address generation;multiple induction variables MIVs;distributed-memory multicomputers;single program multiple data SPMD |
358700 | Solving Fundamental Problems on Sparse-Meshes. | AbstractA sparse-mesh, which has PUs on the diagonal of a two-dimensional grid only, is a cost effective distributed memory machine. Variants of this machine have been considered before, but none are as simple and pure as a sparse-mesh. Various fundamental problems (routing, sorting, list ranking) are analyzed, proving that sparse-meshes have great potential. It is shown that on a two-dimensional $n \times n$ sparse-mesh, which has $n$ PUs, for h-relations can be routed in $(h steps. The results are extended for higher dimensional sparse-meshes. On a $d$-dimensional $n \times \cdots \times n$ sparse-mesh, with h-relations are routed in $(6 \cdot (d - 1) / \epsilon - steps. | Introduction
On ordinary two-dimensional meshes we must accept that, due to their small
bisection width, for most problems the maximum achievable speed-up with n^2
processing units (PUs) is only \Theta(n). On the other hand, networks such as hypercubes
impose increasing conditions on the interconnection modules with increasing
network sizes. Cube-connected-cycles do not have this problem, but are
harder to program due to their irregularity. Anyway, because of a basic theorem
from VLSI lay-out [18], all planar architectures have an area that is quadratic
in their bisection-width. But this in turn means that, except for several special
problems with much locality (the Warshall algorithm most notably), we must
accept that the hardware cost is quadratic in the speed-up that we may hope to
obtain (later we will also deal with three-dimensional lay-outs). Once we have
accepted this, we should go for the simplest and cheapest architecture achieving
this, and a good candidate is the sparse-mesh considered in this paper.
Network. A sparse-mesh with n PUs consists of a two-dimensional n \times n grid
of buses, with PUs connected to them at the diagonal. The horizontal buses are
called row-buses, the vertical buses are called column-buses. PU_i, 0 \leq i < n, can
send data along the i-th row-bus and receive data from the i-th column-bus. In
one step, PU_i can send one packet of standard length to an arbitrary PU_j. The
packet first travels along the i-th row-bus to position (i, j) and then turns into
the j-th column-bus. Bus conflicts are forbidden: the algorithm must guarantee
that in every step a bus is used for the transfer of only one packet. See Figure 1
for an illustration.
In comparison to a mesh, we have reduced the number of PUs from n^2 to n,
saving substantially on the hardware cost. In comparison with other
Figure 1: A sparse-mesh with 8 PUs and an 8 \times 8 network of buses. The large circles indicate the PUs with their indices, the smaller circles the connections between the buses.
networks, we have a very simple interconnection network that can be produced
easily and is scalable without problem. The sparse-mesh is very similar to the
Parallel Alternating Direction Machine, PADAM, considered in [2, 3] and the
coated-mesh considered in [10]. Though similar in inspiration, the sparse-mesh
is simpler. In Section 6 the network is generalized for higher dimensions.
Problems. Parallel computation is possible only provided that the PUs can
exchange data. Among the many communication patterns, patterns in which
each PU sends and receives at most h packets, h-relations, have attracted most
attention. An h-relation is balanced if every PU sends h/n packets to each PU.
Lemma 1 On a sparse-mesh with n PUs, balanced h-relations can be performed
in h steps.
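To make Lemma 1 concrete, the following small C program (an illustration with made-up sizes, not code from the paper) prints one conflict-free schedule: in round t, 0 <= t < n, every PU_i spends h/n steps sending its packets for PU_{(i+t) mod n}, so in any single step PU_i occupies row-bus i and column-bus (i+t) mod n and no bus carries more than one packet.

/* Illustration only: a conflict-free schedule realizing Lemma 1. */
#include <stdio.h>

int main(void) {
    int n = 4, h = 8;                        /* example sizes, h divisible by n */
    int step = 0;
    for (int t = 0; t < n; t++)              /* round t: send to offset t       */
        for (int k = 0; k < h / n; k++, step++)
            for (int i = 0; i < n; i++)      /* all PUs act in parallel         */
                printf("step %2d: PU%d -> PU%d (packet %d)\n",
                       step, i, (i + t) % n, k);
    return 0;                                /* n * (h/n) = h steps in total    */
}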
Unfortunately, not all h-relations are balanced, and thus there is a need for
algorithms that route arbitrary h-relations efficiently. Also for cases that the PUs
have to route less than n packets, the above algorithm does not work. Offline,
all h-relations can be routed in h steps. The algorithm proceeds as follows:
Algorithm offline route
1. Construct a bipartite graph with n vertices on both sides. Add an edge
from Node i on the left to Node j on the right for each packet going from PU i
to PU j .
2. Color this h-regular bipartite graph with h colors.
3. Perform h routing steps. Route all packets whose edges got Color t, 0 \leq t < h, in Step t.
This coloring idea is standard since [1]. Its feasibility is guaranteed by Hall's
theorem. As clearly at least h steps are required for routing an h-relation, the
offline algorithm is optimal. This shows that the h-relation routing problem is
equivalent to the problem of constructing a conflict-free bus-allocation schedule.
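The construction behind offline route can be illustrated by the sequential sketch below (an assumption-laden illustration, not the authors' code). The h-relation is represented as an n x n matrix of packet counts whose row and column sums all equal h; peeling off one perfect matching per step is equivalent to the edge coloring of Step 2, and each matching is a set of packets that can share the buses without conflict.

/* Sketch: decompose an h-regular bipartite multigraph (cnt[i][j] = number of
   packets from PU i to PU j, all row and column sums equal to h) into h
   perfect matchings; matching t tells which packets move in step t. */
#include <stdio.h>

#define N 4
#define H 3

static int cnt[N][N];
static int match_to[N];                        /* right node j -> left node i  */
static int seen[N];

static int augment(int i) {                    /* augmenting-path search       */
    for (int j = 0; j < N; j++)
        if (cnt[i][j] > 0 && !seen[j]) {
            seen[j] = 1;
            if (match_to[j] < 0 || augment(match_to[j])) { match_to[j] = i; return 1; }
        }
    return 0;
}

int main(void) {
    int example[N][N] = { {1,1,1,0}, {0,1,1,1}, {1,0,1,1}, {1,1,0,1} };  /* a 3-relation */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) cnt[i][j] = example[i][j];

    for (int t = 0; t < H; t++) {              /* one perfect matching per step */
        for (int j = 0; j < N; j++) match_to[j] = -1;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) seen[j] = 0;
            augment(i);                        /* Hall's theorem guarantees success */
        }
        printf("step %d:", t);
        for (int j = 0; j < N; j++) {
            printf("  PU%d->PU%d", match_to[j], j);
            cnt[match_to[j]][j]--;             /* remove the scheduled packet   */
        }
        printf("\n");
    }
    return 0;
}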
In [3] routing h-relations is considered for the PADAM. This network is so
similar to the sparse-mesh, that most results carry over. It was shown that
for h \geq n, h-relations can be routed in O(h) time. Further it was shown that
1-optimality can be achieved for h = \Omega(n \cdot \log\log n). Here and in the
remainder we call a routing algorithm c-optimal if it routes all h-relations in at
most c \cdot h + o(h) steps.
Other fundamental problems that should be solved explicitly on any network,
are sorting and list ranking. With h keys/nodes per PU, their communication
complexity is comparable to that of an h-relation. In [3], the list-ranking problem
was solved asymptotically optimally for log log n).
New Results. In this paper we address the problems of routing, sorting and
list ranking. Applying randomization, it is rather easy to route n^\epsilon-relations
(2/\epsilon)-optimally. An intricate deterministic algorithm is even (1/\epsilon)-optimal.
This might be the best achievable. We present a deterministic sampling
technique, by which all routing algorithms can be turned into sorting algorithms
with essentially the same time consumption. In addition to this, we give an algorithm
for ranking randomized lists that runs in 6/\epsilon \cdot h + o(h) steps for lists
with h nodes per PU, provided h is sufficiently large. Our generalization of the sparse-mesh to higher
dimensions is based on the generalized diagonals that were introduced in [6]. At
least for three dimensions this makes practical sense: n^2 PUs are interconnected
by a cubic amount of hardware, which means an asymptotically better ratio than
for two-dimensional sparse-meshes. On higher dimensional sparse-meshes everything
becomes much harder, because it is no longer true that all one-relations
can be routed in a single step. We consider our deterministic routing algorithm
for higher dimensional sparse-meshes to be the most interesting of the paper.
All results constitute considerable improvements over those in [3]. The results
of this paper demonstrate that networks like the sparse-mesh are versatile, not
only in theory, but even with a great practical potential: we show that for realistic
sizes of the network, problems with limited parallel slackness can be solved
efficiently. All proofs are omitted due to a lack of space.
2 Randomized Routing
Using the idea from [19] it is easy to obtain a randomized 2-optimal algorithm
for large h. The algorithm consists of two randomizations, routings in which the
destinations are randomly distributed. In Round 1, all packets are routed to
randomly chosen intermediate destinations; in Round 2, all packets are routed
to their actual destinations.
Lemma 2 On a sparse-mesh with n PUs, h-relations can be routed in 2 \Delta h+o(h)
steps, for all log n), with high probability.
We show how to route randomizations for
n). First the PUs
are divided into
subsets of PUs, S then we do the following:
Algorithm random route
1. Perform
In Superstep t,
its packets with destination in S (j+t) mod
p n to PU i
in S (j+t) mod
2. Perform
In Superstep t,
n, sends all its packets with destination in PU (i+t) mod
n in S j to
their destinations.
This approach can easily be generalized to smaller h: the algorithm then
consists of 1/\epsilon routing rounds. In Round r, 1 \leq r \leq 1/\epsilon, packets are routed to
the subsets, consisting of n^{1 - \epsilon \cdot r} PUs, in which their destinations lie.
Theorem 1 On a sparse-mesh with n PUs, random route routes randomizations
with log n) in (h probability. Arbitrary
h-relations can be routed in twice as many steps.
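The effect of the first, randomizing round can be checked with a small simulation (an illustration with assumed parameters, not the paper's code): when every PU scatters its h packets over uniformly chosen intermediate PUs, the maximum load of any PU stays close to h, so both rounds behave almost like balanced relations.

/* Illustration only: load balance produced by routing to random intermediates. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 64, h = 4096;                       /* assumed sizes, h well above n log n */
    int *load = calloc(n, sizeof(int));
    srand(1);
    for (int i = 0; i < n; i++)                 /* every PU scatters h packets  */
        for (int k = 0; k < h; k++)
            load[rand() % n]++;
    int max = 0;
    for (int i = 0; i < n; i++)
        if (load[i] > max) max = load[i];
    printf("h = %d, maximum intermediate load = %d (overshoot %.1f%%)\n",
           h, max, 100.0 * (max - h) / h);
    free(load);
    return 0;
}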
3 List Ranking
List ranking is the problem of determining the rank of every node of a set of
lists, where the rank of a node is its distance to the final node of its list. h denotes the number of nodes per PU.
The edges of a regular bipartite graph of degree m can be colored by splitting
the graph in two subgraphs log m times, each with half the previous degree [4].
Lev, Pippenger and Valiant [11] have shown that each halving step can be performed
by solving two problems that are very similar to list ranking (determining
the distance to the node with minimal index on a circle). Thus, if list-ranking
is solved in time T , then bipartite graphs of degree m can be colored in time
In [3] list ranking on PADAMs is performed by simulating a work-
optimal PRAM algorithm. For coloring the eges of a bipartite regular graph, the
above idea was used:
Lemma 3 [3] On a sparse-mesh with n PUs, list ranking can be solved in O(h)
steps, for all h \geq n \cdot \log\log n. A bipartite regular graph with h \cdot n edges can be
colored in O(\log(h \cdot n) \cdot h) steps.
The algorithms are asymptotically optimal, but the hidden constants are quite
bad, and the range of applicability is limited to large h. In the following we
present a really good and versatile randomized list-ranking algorithm. It is based
on the repeated-halving algorithm from [14]. This algorithm has the unique
property, that not only the number of participating nodes is reduced in every
round, but that the size of the processor network is reduced as well. Generally,
the value of this property is limited, but on the sparse-mesh, where we need a
certain minimal h for efficient routing, this is very nice.
3.1 Repeated-Halving
It is assumed that the nodes of the lists are randomly distributed over the PUs.
If this is not the case, they should be randomized first. The algorithm consists of
log n reduction steps. In each step, first the set of PUs is divided in two halves,
. The nodes in S 0 with current successor in S 1 and vice-versa, are
called masters, the other nodes are non-masters. The current successor of a node
is stored in its crs field. Each PU holds nodes.
Algorithm reduce
1. Each non-master p follows the links until a master or a final node is reached
and sets crs(p) to this node.
2. Each master p asks
3. Each master p asks
In a full algorithm one must also keep track of the distances. After Step 2, each
master in S i , points to the subsequent master in S (i+1) mod 2 . Thus,
after Step 3, each master in S i , points to the subsequent master in S i itself. Now
recursion can be applied on the masters. This does not involve communication
between S_0 and S_1 anymore. Details are provided in [14]. The expected number
of masters in each subset equals h \cdot n/4. Thus, after \log n rounds, we have n
subproblems of expected size h/n, that can be solved internally. Hereafter, the
reduction must be reversed:
Algorithm expand
1. Each non-master p asks
For Step 1 of reduce we repeat pointer-jumping rounds as long as necessary.
In every such round, each node p that has not yet reached a master or a final
node asks crs(p) for crs(crs(p)) and the distance thereto. Normally pointer-jumping
is inefficient, but in this case, because the expected length of the lists
is extremely short, the number of nodes that participates decreases rapidly with
each performed round [16].
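A sequential stand-in for these rounds (an illustration, not the parallel implementation) is the plain pointer-jumping loop below; succ and dist play the roles of crs and the accumulated distance, and a node stops as soon as its successor is a master or a final node.

/* Pointer jumping towards the next master/final node, with distances. */
#include <stdio.h>

#define M 8

int main(void) {
    int succ[M], dist[M], is_stop[M];          /* one list 0 -> 1 -> ... -> 7   */
    for (int p = 0; p < M; p++) { succ[p] = p + 1; dist[p] = 1; is_stop[p] = 0; }
    succ[M-1] = M - 1; dist[M-1] = 0; is_stop[M-1] = 1;    /* final node         */
    is_stop[3] = 1;                                        /* assume node 3 is a master */

    int active = 1;
    while (active) {                           /* one round of pointer jumping  */
        active = 0;
        for (int p = 0; p < M; p++) {
            if (is_stop[p] || is_stop[succ[p]]) continue;
            dist[p] += dist[succ[p]];          /* ask the successor for its     */
            succ[p]  = succ[succ[p]];          /* successor and distance        */
            active   = 1;
        }
    }
    for (int p = 0; p < M; p++)
        printf("node %d: next stop %d at distance %d\n", p, succ[p], dist[p]);
    return 0;
}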
Theorem 2 On a sparse-mesh with n PUs, list ranking can be solved in 14 \Delta h+
o(h) steps, for all log n), with high probability.
For smaller h, the average number of participating nodes per PU decreases to
less than 1. This is not really a problem: for such a case, the maximum number
of participating nodes in any PU can easily be estimated by O(log n), and the
routing operations can be performed in O(log^2 n) steps. In comparison to the
total routing time, the cost of these later reduction rounds is negligible. Using
Theorem 1, this gives
Corollary 1 On a sparse-mesh with n PUs, list ranking can be solved in 14=ffl \Delta
log n), with high probability.
3.2 Sparse-Ruling-Sets
The performance of the previous algorithm can be boosted by first applying the
highly efficient sparse-ruling-sets algorithm from [13] to reduce the number of
nodes by a factor of \omega(1). We summarize the main ideas.
Algorithm sparse ruling sets
1. In each PU randomly select h 0 nodes as rulers.
2. The rulers initiate waves that run along their list until they reach the next
ruler or a final element. If a node p is reached by a wave from a ruler p 0 , then
3. The rulers send their index to the ruler whose wave reached them.
By a wave, we mean a series of sending operations in which a packet that is
destined for a node p is forwarded to crs(p). Hereafter, we can apply the previous
list-ranking algorithm for ranking the rulers. Finally each non-ruler p asks mst(p)
for the index of the last element of its list and the distance thereof. Details can
be found in [13], particularly the initial nodes must be handled carefully.
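The following sequential sketch mirrors these steps under simplifying assumptions (a single list, random ruler selection, no special treatment of the nodes before the first ruler): every ruler's wave walks forward, stamping the nodes it passes with the ruler's identity and the distance travelled, until it reaches the next ruler or the final node.

/* Sequential illustration of sparse ruling sets. */
#include <stdio.h>
#include <stdlib.h>

#define M 16

int main(void) {
    int succ[M], is_ruler[M], mst[M], dist_to[M];
    for (int p = 0; p < M; p++) { succ[p] = p + 1; mst[p] = -1; dist_to[p] = 0; }
    succ[M-1] = -1;                                /* -1 marks the final node   */
    srand(7);
    for (int p = 0; p < M; p++) is_ruler[p] = (rand() % 4 == 0);  /* about M/4 rulers */

    for (int r = 0; r < M; r++) {                  /* every ruler launches a wave */
        if (!is_ruler[r]) continue;
        int p = succ[r], d = 1;
        while (p != -1 && !is_ruler[p]) {          /* stop at next ruler / final  */
            mst[p] = r;                            /* p was reached by r's wave   */
            dist_to[p] = d++;
            p = succ[p];
        }
        if (p != -1) { mst[p] = r; dist_to[p] = d; }   /* the next ruler learns who reached it */
    }
    for (int p = 0; p < M; p++)
        printf("node %2d: ruler=%d reached by %2d at distance %d\n",
               p, is_ruler[p], mst[p], dist_to[p]);
    return 0;
}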
Theorem 3 On a sparse-mesh with n PUs, list ranking can be solved in 6=ffl \Delta
log n), with high probability. Under the
same conditions, a bipartite regular graph with h \Delta n edges, can be colored in
O(log(h steps.
4 Faster Routing
Our goal is to construct a deterministic O(1=ffl)-optimal algorithm for routing
-relations. As in [3], we apply the idea from [17] for turning offline routing
algorithms into online algorithms by solving Step 2 of offline route after
sufficient reduction of the graph online. However, here we obtain much more
interesting results.
log n), and define f by log n) 1=2 . As in random
route, we are going to perform 1/\epsilon rounds. In Round r, 1 \leq r \leq 1/\epsilon, all
packets are routed to the subsets, consisting of n^{1 - \epsilon \cdot r} PUs each, in which their
destinations lie. We describe Round 1. First the PUs are divided into n^\epsilon subsets,
each consisting of n^{1-\epsilon} PUs. Then, we perform the following
steps:
Algorithm determine destination
1. Each PU_i, 0 \leq i < n, sorts its packets on the indices of their destination
subsets. The packets going to the same S_j are filled into
superpackets of size f \cdot \log n, leaving one partially filled or empty superpacket
for each j. The superpackets p going to the same S_j are numbered consecutively,
starting from 0, with numbers a_p. \alpha_{ij} denotes the total number of
superpackets going from S_i to S_j.
2. For each j, 0 \leq j < n^\epsilon, the PUs perform a parallel prefix on the \alpha_{ij}, to
make the numbers A_{ij} = \sum_{i' < i} \alpha_{i'j}
available in PU_i, 0 \leq i < n.
3. For each superpacket p in PU i ,
dest
For each superpacket p with destination in S_j, dest_p gives the index of the PU
in S_j to which p is going to be routed first. Because the PUs in S_j are the
destination of h \cdot n^{1-\epsilon}/(f \cdot \log n) + n superpackets, the bipartite graph with n nodes
on both sides and one edge for every superpacket has degree h/(f \cdot \log n) + n^\epsilon.
Its edges can be colored in o(h) time. If the edge corresponding to a superpacket p
gets Color t, 1 \leq t < h/(f \cdot \log n) + n^\epsilon, then p is going to be routed in superstep t.
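The bookkeeping of Steps 1-3 amounts to the sketch below (made-up counts; the exact first-hop formula for dest_p is not reproduced here): a column-wise exclusive prefix sum over the \alpha_{ij} yields the offsets A_{ij}, and superpacket a of S_i destined for S_j receives the global number A_{ij} + a among all superpackets for S_j.

/* Numbering the superpackets destined for each subset S_j. */
#include <stdio.h>

#define SUBSETS 4

int main(void) {
    int alpha[SUBSETS][SUBSETS] = { {2,1,0,3}, {1,2,2,0}, {0,3,1,1}, {2,0,2,1} };
    int A[SUBSETS][SUBSETS];

    for (int j = 0; j < SUBSETS; j++) {        /* exclusive prefix sum per column */
        int running = 0;
        for (int i = 0; i < SUBSETS; i++) {
            A[i][j] = running;                 /* superpackets from S_0..S_{i-1}  */
            running += alpha[i][j];
        }
    }
    for (int i = 0; i < SUBSETS; i++)
        for (int j = 0; j < SUBSETS; j++)
            for (int a = 0; a < alpha[i][j]; a++)
                printf("superpacket %d of S_%d -> S_%d gets global number %d\n",
                       a, i, j, A[i][j] + a);
    return 0;
}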
Lemma 4 On a sparse-mesh with n PUs, and log n) all packets can
be routed to their destination subsets of size n 1\Gammaffl in h steps.
Repeating the above steps 1=ffl times, the packets eventually reach their destinations
Theorem 4 On a sparse-mesh with n PUs, h-relations with
can be routed in (h steps.
Our algorithm is not entirely deterministic: the underlying list-ranking algorithm
is randomized. It is likely that deterministic list-ranking for a problem
with nodes per PU can be performed in O(h=ffl) time. But, for our main
claim, that n ffl -relations can be routed (1=ffl)-optimally, we do not need this: if
n) the size of the superpackets can be taken f \Delta log 2 n, and we can
simply apply pointer-jumping for the list-ranking.
5 Faster Sorting
On meshes the best deterministic routing algorithm is more or less a sorting
algorithm [9, 8]. For sparse-meshes the situation is different: the routing algorithm
presented in Section 4 is in no way a sorting algorithm. However, it can
be enhanced to sort in essentially the same time.
We first consider log n). This case also gives the final round in the
sorting algorithm for smaller h hereafter. Define
log(n). In [15] it is shown how to deterministically select a
high-quality sample. The basic steps are:
Algorithm refined sampling
1. Each PU internally sorts all its keys and selects those with ranks j \cdot h/m,
0 \leq j < m, to its sample.
2. Perform \log n merge and reduce rounds: in Round r, 0 \leq r < \log n, the
samples, each of size m, in two subsets of 2^r PUs are merged, and only the
keys with even ranks are retained.
3. Broadcast the selected sample M of size m to all PUs.
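One merge-and-reduce round boils down to the routine below (a sketch with example keys): two sorted samples of size m are merged and only the keys with even ranks are kept, so the sample size stays m.

/* One merge-and-reduce round of refined sampling. */
#include <stdio.h>

#define SM 8

static void merge_reduce(const int a[SM], const int b[SM], int out[SM]) {
    int ia = 0, ib = 0;
    for (int r = 0; r < 2 * SM; r++) {               /* r = rank in merged order */
        int take_a = (ib == SM) || (ia < SM && a[ia] <= b[ib]);
        int v = take_a ? a[ia++] : b[ib++];
        if (r % 2 == 0) out[r / 2] = v;              /* keep even ranks only     */
    }
}

int main(void) {
    int s1[SM] = { 1, 4, 9, 12, 17, 23, 30, 41 };
    int s2[SM] = { 2, 3, 8, 15, 19, 25, 33, 40 };
    int out[SM];
    merge_reduce(s1, s2, out);
    for (int i = 0; i < SM; i++) printf("%d ", out[i]);
    printf("\n");
    return 0;
}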
Lemma 5 On a sparse-mesh with n PUs, refined sampling selects a sample
of size m in o(h) steps. The global rank, Rank p , of an element p, with rank rank p
among the elements of M satisfies
Thus, by merging the sample and its packets, a PU can estimate for each of its
packets p its global rank Rank p with an error bounded by O(h
This implies that if we set
dest
for each packet p, where rank p is defined as above, that then at most h
packets have the same dest-value. Thus all packets can be routed to the PU given
by their dest-values in h steps. Furthermore, after each PU has sorted all
the packets it received, the packets can be rearranged in o(h) steps so that each
PU holds exactly h packets.
Lemma 6 On a sparse-mesh with n PUs, the sorting algorithm based on refined
sampling sorts an h-relation in h
For smaller h, we stick close to the pattern of the routing algorithm
from Section 4: we perform 1/\epsilon rounds of bucket-sort with buckets of decreasing
sizes. For each round of the bucket sorting, we
run refined sampling; this sample selection
takes o(h) steps. In Round r, 1 \leq r \leq 1/\epsilon, the sample is used to guess
in which subset of size n 1\Gammar=ffl each packet belongs. Then they are routed to these
subsets with the algorithm of Section 4.
Theorem 5 On a sparse-mesh with n PUs, the sorting algorithm based on refined
sampling sorts an h-relation in (h+o(h))=ffl steps, for all
The remark at the end of Section 4 can be taken over here: n) is
required for making the algorithm deterministic, but this has no impact on the
claim that (1=ffl)-optimality can be achieved for
6 Higher Dimensional Sparse-Meshes
One of the shortcomings of the PADAM [2, 3] is that it has no natural generalization
for higher dimensions. The sparse-mesh can be generalized easily.
A subset of a d-dimensional n \times \cdots \times n grid is a diagonal if the
projection of all its points on any of the (d - 1)-dimensional coordinate planes is
a bijection.
For this we need the following definition (a simplification of the definition in [6]):
Diag
Lemma 7 [6] In a d-dimensional n \times \cdots \times n grid, Diag_d is a diagonal.
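The formula for Diag_d is not reproduced above; one natural choice that satisfies the definition of a diagonal, used here purely as an assumption for illustration, is the set of positions with x_{d-1} = (x_0 + ... + x_{d-2}) mod n. The sketch below builds this set for d = 3 and n = 4 and verifies that all three projections are bijections.

/* A candidate diagonal for d = 3 (an assumption, not necessarily the Diag_d of [6]). */
#include <stdio.h>
#include <string.h>

#define NN 4

int main(void) {
    int seen01[NN][NN], seen02[NN][NN], seen12[NN][NN];
    memset(seen01, 0, sizeof seen01);
    memset(seen02, 0, sizeof seen02);
    memset(seen12, 0, sizeof seen12);

    for (int x0 = 0; x0 < NN; x0++)
        for (int x1 = 0; x1 < NN; x1++) {
            int x2 = (x0 + x1) % NN;            /* candidate diagonal position  */
            seen01[x0][x1]++; seen02[x0][x2]++; seen12[x1][x2]++;
        }

    int ok = 1;                                 /* each projection must hit     */
    for (int a = 0; a < NN; a++)                /* every pair exactly once      */
        for (int b = 0; b < NN; b++)
            if (seen01[a][b] != 1 || seen02[a][b] != 1 || seen12[a][b] != 1) ok = 0;
    printf("projections are bijections: %s\n", ok ? "yes" : "no");
    return 0;
}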
Figure 2: A three-dimensional 4 \times 4 \times 4 grid with the PUs placed such that, if they were occupied by towers in a three-dimensional chess game, none of them could capture another one.
The d-dimensional sparse-mesh consists of the n^{d-1} PUs on Diag_d, interconnected
by d \cdot n^{d-1} buses of length n. The PU at position (x_0, \ldots, x_{d-1}) is
denoted PU_{x_0, \ldots, x_{d-1}}. The routing is dimension-ordered. Thus, after t substeps,
1 \leq t \leq d, a packet traveling from PU_{x_0, \ldots, x_{d-1}}
to PU_{x'_0, \ldots, x'_{d-1}}
has reached position (x'_0, \ldots, x'_{t-1}, x_t, \ldots, x_{d-1}).
The scheduling should exclude
bus conflicts. Diag_3 is illustrated in Figure 2.
6.1 Basic Results
Now it is even harder to find a conflict-free allocation of the buses. Different from
before, it is no longer automatically true, that every one-relation can be routed
offline in one step (but see Lemma 10!). For example, in a four-dimensional
sparse-mesh, under the permutation
at the positions (0; 0; x pass through position
(0; 0; 0; 0). However, for the most common routing pattern, balanced h-relations,
this is no problem, because it can be written as the composition of n shifts.
A shift is a routing pattern under which, for all x the packets in
have to be routed to PU (x0+s0 ) mod n;:::;(x
given s
Lemma 8 A shift in which each PU contributes one packet can be routed in
one step.
Lemma 8 implies that the randomized routing algorithm from Section 2 carries
on with minor modifications. One should first solve the case of large
log n) and then extend the algorithms to smaller
Theorem 6 On a d-dimensional sparse-mesh with n d\Gamma1 PUs, the generalization
of random route routes randomizations with log n) in (d \Gamma 1)
steps, with high probability. Arbitrary h-relations can be routed in twice
as many steps.
The algorithm for ranking randomized lists of Section 3 has the complexity of
a few routing operations, and this remains true. The same holds for our technique
of Section 5 for enhancing a routing algorithm into a sorting algorithm with the
same complexity. The only algorithm that does not generalize is the deterministic
algorithm of Section 4, because it is not based on balanced h-relations.
6.2 Deterministic Routing
In this section we analyze the possibilities of deterministic routing on higher-dimensional
sparse-meshes. Actually, for the sake of a simple notation, we describe
our algorithms for two-dimensional sparse-meshes, but we will take care
that they consist of a composition of shifts.
For log n), we can match the randomized result. The algorithm
is a deterministic version of the randomized one: it consists of two phases. In
the first phase the packets are smoothed-out, in the second they are routed to
their destinations. Smoothing-out means rearranging the packets so that every
PU holds approximately the same number of packets with destinations in each
of the PUs. Let
Algorithm deterministic route
1. Each PU_i, 0 \leq i < n, sorts its packets on the indices of their
destination PU. The packets going to the same PU_j, 0 \leq j < n, are filled into
superpackets of size f \cdot \log n, leaving one partially filled or empty superpacket
for each j.
2. Construct a bipartite graph with n nodes on both sides, and an edge from
Node i on the left to Node j on the right, for every superpacket going from
PU i to PU j . Color this graph with (f colors. A superpacket p with
Color dest
3. Perform shifts. In Shift t, routes the
dest to PU (i+t) mod n .
4. Perform shifts. In Shift t, routes the
with destination in PU (i+t) mod n to this PU.
The algorithm works, because the coloring assures that for each source and destination
PU there are exactly f +1 packets with the same dest-value. This implies
that both routings are balanced.
Theorem 7 On a d-dimensional sparse-mesh with n d\Gamma1 PUs deterministic
route routes h-relations with steps.
For smaller h, ideas from deterministic route are combined with ideas
from random route: the algorithm consists of rounds in which the packets
are smoothed-out and then routed to the subsets in which their destinations lie.
The fact that this second phase must be balanced, implies that the smoothing
must be almost perfect. Another essential observation is that if packets are
redistributed among subsets of PUs, and if this routing has to be balanced, that
then the number of subsets should not exceed h. Let log n), and set
. The PUs are divided in n ffl subsets S i of n 1\Gammaffl PUs each. Then all
packets are routed to the subset in which their destinations lie, by performing
Algorithm deterministic route
1. Each PU i , its packets on the indices of their
destination PU. The packets going to the same S j , are filled into
superpackets of size f \Delta log n, leaving one partially filled or empty superpacket
for each j.
2. Construct a bipartite graph with n ffl nodes on both sides, and an edge from
Node i on the left to Node j on the right, for every superpacket going from
S i to S j . Color this graph with (f colors. For a superpacket p with
Color
3. In each S i , the rank, rank p , of
each packet p with dest among the packets p 0 with dest
4. Rearrange the packets within the S i , , such that thereafter a
packet p with rank stands in PU j mod n 1\Gammaffl of S i .
5. Perform shifts. In Shift t,
routes the f dest to
PU i in S (j+t) mod n ffl .
6. In each S i , the rank, rank p ,
of each packet p with destination in S j among the packets with destination in
this subset.
7. Rearrange the packets within the S i , , such that thereafter a
packet p with rank stands in PU j mod n 1\Gammaffl of S i .
8. Perform shifts. In Shift t,
routes the f with destination in S (j+t) mod n ffl
to PU i in this subset.
The ranks can be computed as in Section 4 in O(n ffl \Delta log steps. The
coloring guarantees that
Lemma 9 After Step 4, for each j, 0 every PU holds exactly f
superpackets p with dest
holds exactly f with destination in S j .
The rearrangements within the subsets are routings with a large h as described
by Theorem 7 (the superpackets do not need to be filled into supersuperpackets!),
and thus they can be performed without further recursion.
Theorem 8 On a d-dimensional sparse-mesh with n d\Gamma1 PUs deterministic
route routes h-relations with log n) in (6
steps.
From an algorithmic point of view deterministic route is the climax of the
paper: all developed techniques are combined in a non-trivial way, to obtain a
fairly strong result. As in Section 4, n) is required for also making
the coloring deterministic.
6.3 Three-Dimensional Sparse-Meshes
Three-dimensional sparse-meshes are practically the most interesting generaliza-
tion. It is a lucky circumstance that for them even the results from Section 4
carry on, because of the following
Lemma 10 On a three-dimensional sparse-mesh, each one-relation can be routed
in a single step.
Proof: The permutation is a composition of \phi_0, \phi_1 and \phi_2, where \phi_i corrects the
i-th coordinate. \phi_0 maps a position (x_0, x_1, x_2) of Diag_3 to a position (x'_0, x_1, x_2).
The injectivity of the projection on the plane x_0 = 0 carries over to \phi_0. Thus,
after \phi_0, no two packets stand in the same position. Analogously, it follows that
the inverse of \phi_2 is an injection. Thus, not even at the beginning of \phi_2, that is,
after \phi_1, two packets may have been sharing a position.
Lemma 10 leads to the following analogue of Theorem 4:
Theorem 9 On a three-dimensional n \Theta n \Theta n sparse-mesh, h-relations with
log n) can be routed in (2 steps.
The coated-mesh [10] can be trivially generalized for higher dimensions. How-
ever, whereas routing on a two-dimensional coated-mesh is almost as easy as on
a sparse-mesh, there is no analogue of Lemma 10 for three-dimensional coated-
meshes.
7 Conclusion
Our analysis reveals that the sparse-mesh has a very different character than the
mesh. Whereas on a mesh several approaches essentially give the same result, we
see that on a sparse-mesh, there is a great performance difference. For
column-sort performs poorly, because the operations in subnetworks are costly,
while on a mesh they are for free. On a mesh the randomized algorithm
inspired by [19] also performs optimally [12, 7], because there the worst-case time consumption
is determined by the time the packets need to cross the bisection. For
the sparse-mesh this argument does not apply, and our deterministic routing
algorithm is twice as fast.
--R
'A Unified Framework for Off-Line Permutation Routing in Parallel Networks,' Mathematical Systems Theory
'On Edge Coloring Bipartite Graphs,' SIAM Journal on Computing
An Introduction to Parallel Algorithms
'Randomized Multipacket Routing and Sorting on Meshes,' Algorithmica
'Work-Optimal Simulation of PRAM Models on Meshes,' Nordic Journal of Computing
'A Fast Parallel Algorithm for Routing in Permutation Networks,' IEEE Transactions on Computers
'k-k Routing, k-k Sorting, and Cut-Through Routing on the Mesh,' Journal of Algorithms
'From Parallel to External List Ranking,' Techn.
Permutation Routing and Sorting on Meshes with Row and Column Buses
--TR
--CTR
Martti Forsell, A parallel computer as a NOC region, Networks on chip, Kluwer Academic Publishers, Hingham, MA, | routing;theory of parallel computation;algorithms;meshes;sorting;networks;list-ranking |
358939 | Data prefetch mechanisms. | The expanding gap between microprocessor and DRAM performance has necessitated the use of increasingly aggressive techniques designed to reduce or hide the latency of main memory access. Although large cache hierarchies have proven to be effective in reducing this latency for the most frequently used data, it is still not uncommon for many programs to spend more than half their run times stalled on memory requests. Data prefetching has been proposed as a technique for hiding the access latency of data referencing patterns that defeat caching strategies. Rather than waiting for a cache miss to initiate a memory fetch, data prefetching anticipates such misses and issues a fetch to the memory system in advance of the actual memory reference. To be effective, prefetching must be implemented in such a way that prefetches are timely, useful, and introduce little overhead. Secondary effects such as cache pollution and increased memory bandwidth requirements must also be taken into consideration. Despite these obstacles, prefetching has the potential to significantly improve overall program execution time by overlapping computation with memory accesses. Prefetching strategies are diverse, and no single strategy has yet been proposed that provides optimal performance. The following survey examines several alternative approaches, and discusses the design tradeoffs involved when implementing a data prefetch strategy. | Introduction
By any metric, microprocessor performance has increased at a dramatic rate over the past decade.
This trend has been sustained by continued architectural innovations and advances in
microprocessor fabrication technology. In contrast, main memory dynamic RAM (DRAM)
performance has increased at a much more leisurely rate, as shown in Figure 1. This expanding
gap between microprocessor and DRAM performance has necessitated the use of increasingly
aggressive techniques designed to reduce or hide the large latency of memory accesses [16].
Chief among the latency reducing techniques is the use of cache memory hierarchies [34]. The
static RAM (SRAM) memories used in caches have managed to keep pace with processor memory
request rates but continue to be too expensive for a main store technology. Although the use of
large cache hierarchies has proven to be effective in reducing the average memory access penalty
for programs that show a high degree of locality in their addressing patterns, it is still not
uncommon for scientific and other data-intensive programs to spend more than half their run times
stalled on memory requests [25]. The large, dense matrix operations that form the basis of many
such applications typically exhibit little locality and therefore can defeat caching strategies.
The poor cache utilization of these applications is partially a result of the "on demand" memory
fetch policy of most caches. This policy fetches data into the cache from main memory only after
the processor has requested a word and found it absent from the cache. The situation is illustrated
in Figure 2a where computation, including memory references satisfied within the cache hierarchy,
are represented by the upper time line while main memory access time is represented by the lower
time line. In this figure, the data blocks associated with memory references r1, r2, and r3 are not
found in the cache hierarchy and must therefore be fetched from main memory. Assuming the
referenced data word is needed immediately, the processor will be stalled while it waits for the
corresponding cache block to be fetched. Once the data returns from main memory it is cached and
forwarded to the processor where computation may again proceed.
Figure 1. System and DRAM performance since 1988. System performance is measured by SPECfp92 and DRAM performance by row access times. All values are normalized to their 1988 equivalents (source: Internet SPECtable, ftp://ftp.cs.toronto.edu/pub/jdd/spectable).
Note that this fetch policy will always result in a cache miss for the first access to a cache block
since only previously accessed data are stored in the cache. Such cache misses are known as cold
start or compulsory misses. Also, if the referenced data is part of a large array operation, it is
likely that the data will be replaced after its use to make room for new array elements being
streamed into the cache. When the same data block is needed later, the processor must again bring
it in from main memory incurring the full main memory access latency. This is called a capacity
miss.
Many of these cache misses can be avoided if we augment the demand fetch policy of the cache
with the addition of a data prefetch operation. Rather than waiting for a cache miss to perform a
memory fetch, data prefetching anticipates such misses and issues a fetch to the memory system in
advance of the actual memory reference. This prefetch proceeds in parallel with processor
computation, allowing the memory system time to transfer the desired data from main memory to
the cache. Ideally, the prefetch will complete just in time for the processor to access the needed
data in the cache without stalling the processor.
An increasingly common mechanism for initiating a data prefetch is an explicit fetch instruction
issued by the processor. At a minimum, a fetch specifies the address of a data word to be
brought into cache space. When the fetch instruction is executed, this address is simply passed
on to the memory system without forcing the processor to wait for a response. The cache responds
to the fetch in a manner similar to an ordinary load instruction with the exception that the
Figure 2. Execution diagram assuming a) no prefetching, b) perfect prefetching and c) degraded prefetching. (Legend: computation, memory access, cache hit, cache miss, prefetch.)
referenced word is not forwarded to the processor after it has been cached. Figure 2b shows how
prefetching can be used to improve the execution time of the demand fetch case given in Figure 2a.
Here, the latency of main memory accesses is hidden by overlapping computation with memory
accesses resulting in a reduction in overall run time. This figure represents the ideal case when
prefetched data arrives just as it is requested by the processor.
A less optimistic situation is depicted in Figure 2c. In this figure, the prefetches for references r1
and r2 are issued too late to avoid processor stalls although the data for r2 is fetched early enough
to realize some benefit. Note that the data for r3 arrives early enough to hide all of the memory
latency but must be held in the processor cache for some period of time before it is used by the
processor. During this time, the prefetched data are exposed to the cache replacement policy and
may be evicted from the cache before use. When this occurs, the prefetch is said to be useless
because no performance benefit is derived from fetching the block early.
A prematurely prefetched block may also displace data in the cache that is currently in use by the
processor, resulting in what is known as cache pollution. Note that this effect should be
distinguished from normal cache replacement misses. A prefetch that causes a miss in the cache
that would not have occurred if prefetching was not in use is defined as cache pollution. If,
however, a prefetched block displaces a cache block which is referenced after the prefetched block
has been used, this is an ordinary replacement miss since the resulting cache miss would have
occurred with or without prefetching.
A more subtle side effect of prefetching occurs in the memory system. Note that in Figure 2a the
three memory requests occur within the first 31 time units of program startup whereas in Figure
2b, these requests are compressed into a period of 19 time units. By removing processor stall
cycles, prefetching effectively increases the frequency of memory requests issued by the processor.
Memory systems must be designed to match this higher bandwidth to avoid becoming saturated
and nullifying the benefits of prefetching. This can be particularly true for multiprocessors where
bus utilization is typically higher than single processor systems.
It is also interesting to note that software prefetching can achieve a reduction in run time despite
adding instructions into the execution stream. In Figure 3, the memory effects from Figure 2 are
ignored and only the computational components of the run time are shown. Here, it can be seen
that the three prefetch instructions actually increase the amount of work done by the processor.
Several hardware-based prefetching techniques have also been proposed which do not require the
use of explicit fetch instructions. These techniques employ special hardware which monitors the
processor in an attempt to infer prefetching opportunities. Although hardware prefetching incurs
no instruction overhead, it often generates more unnecessary prefetches than software prefetching.
Unnecessary prefetches are more common in hardware schemes because they speculate on future
memory accesses without the benefit of compile-time information. If this speculation is incorrect,
cache blocks that are not actually needed will be brought into the cache. Although unnecessary
prefetches do not affect correct program behavior, they can result in cache pollution and will
Figure 3. Software prefetching overhead.
consume memory bandwidth.
To be effective, data prefetching must be implemented in such a way that prefetches are timely,
useful, and introduce little overhead. Secondary effects in the memory system must also be taken
into consideration when designing a system that employs a prefetch strategy. Despite these
obstacles, data prefetching has the potential to significantly improve overall program execution
time by overlapping computation with memory accesses. Prefetching strategies are diverse and no
single strategy has yet been proposed which provides optimal performance. In the following
sections, alternative approaches to prefetching will be examined by comparing their relative
strengths and weaknesses.
2. Background
Prefetching, in some form, has existed since the mid-sixties. Early studies [1] of cache design
recognized the benefits of fetching multiple words from main memory into the cache. In effect,
such block memory transfers prefetch the words surrounding the current reference in hope of
taking advantage of the spatial locality of memory references. Hardware prefetching of separate
cache blocks was later implemented in the IBM 370/168 and Amdahl 470V [33]. Software
techniques are more recent. Smith first alluded to this idea in his survey of cache memories [34]
but at that time doubted its usefulness. Later, Porterfield [29] proposed the idea of a "cache load
instruction" with several RISC implementations following shortly thereafter.
Prefetching is not restricted to fetching data from main memory into a processor cache. Rather, it
is a generally applicable technique for moving memory objects up in the memory hierarchy before
they are actually needed by the processor. Prefetching mechanisms for instructions and file systems
are commonly used to prevent processor stalls, for example [38,28]. For the sake of brevity, only
techniques that apply to data objects residing in memory will be considered here.
Non-blocking load instructions share many similarities with data prefetching. Like prefetches,
these instructions are issued in advance of the data's actual use to take advantage of the parallelism
between the processor and memory subsystem. Rather than loading data into the cache, however,
the specified word is placed directly into a processor register. Non-blocking loads are an example
of a binding prefetch, so named because the value of the prefetched variable is bound to a named
location (a processor register, in this case) at the time the prefetch is issued. Although non-blocking
loads will not be discussed further here, other forms of binding prefetches will be
examined.
Data prefetching has received considerable attention in the literature as a potential means of
boosting performance in multiprocessor systems. This interest stems from a desire to reduce the
particularly high memory latencies often found in such systems. Memory delays tend to be high in
multiprocessors due to added contention for shared resources such as a shared bus and memory
modules in a symmetric multiprocessor. Memory delays are even more pronounced in distributed-memory
multiprocessors where memory requests may need to be satisfied across an interconnection
network. By masking some or all of these significant memory latencies, prefetching can be an
effective means of speeding up multiprocessor applications.
Due to this emphasis on prefetching in multiprocessor systems, many of the prefetching
mechanisms discussed below have been studied either largely or exclusively in this context.
Because several of these mechanisms may also be effective in single processor systems,
multiprocessor prefetching is treated as a separate topic only when the prefetch mechanism is
inherent to such systems.
3. Software Data Prefetching
Most contemporary microprocessors support some form of fetch instruction which can be used
to implement prefetching [3,31,37]. The implementation of a fetch can be as simple as a load
into a processor register that has been hardwired to zero. Slightly more sophisticated
implementations provide hints to the memory system as to how the prefetched block will be used.
Such information may be useful in multiprocessors where data can be prefetched in different
sharing states, for example.
Although particular implementations will vary, all fetch instructions share some common
characteristics. Fetches are non-blocking memory operations and therefore require a lockup-free
cache [21] that allows prefetches to bypass other outstanding memory operations in the cache.
Prefetches are typically implemented in such a way that fetch instructions cannot cause
exceptions. Exceptions are suppressed for prefetches to insure that they remain an optional
optimization feature that does not affect program correctness or initiate large and potentially
unnecessary overhead, such as page faults or other memory exceptions.
The hardware required to implement software prefetching is modest compared to other prefetching
strategies. Most of the complexity of this approach lies in the judicious placement of fetch
instructions within the target application. The task of choosing where in the program to place a
fetch instruction relative to the matching load or store instruction is known as prefetch
scheduling.
In practice, it is not possible to precisely predict when to schedule a prefetch so that data arrives in
the cache at the moment it will be requested by the processor, as was the case in Figure 2b. The
execution time between the prefetch and the matching memory reference may vary, as will memory
latencies. These uncertainties are not predictable at compile time and therefore require careful
consideration when scheduling prefetch instructions in a program.
Fetch instructions may be added by the programmer or by the compiler during an optimization
pass. Unlike many optimizations which occur too frequently in a program or are too tedious to
implement by hand, prefetch scheduling can often be done effectively by the programmer. Studies
have indicated that adding just a few prefetch directives to a program can substantially improve
performance [24]. However, if programming effort is to be kept at a minimum, or if the program
contains many prefetching opportunities, compiler support may be required.
Whether hand-coded or automated by a compiler, prefetching is most often used within loops
responsible for large array calculations. Such loops provide excellent prefetching opportunities
because they are common in scientific codes, exhibit poor cache utilization and often have
predictable array referencing patterns. By establishing these patterns at compile-time, fetch
instructions can be placed inside loop bodies so that data for a future loop iteration can be
prefetched during the current iteration.
As an example of how loop-based prefetching may be used, consider the code segment shown in
Figure 4a. This loop calculates the inner product of two vectors, a and b, in a manner similar to
the innermost loop of a matrix multiplication calculation. If we assume a four-word cache block,
this code segment will cause a cache miss every fourth iteration. We can attempt to avoid these
cache misses by adding the prefetch directives shown in Figure 4b. Note that this figure is a source
code representation of the assembly code that would be generated by the compiler.
This simple approach to prefetching suffers from several problems. First, we need not prefetch
every iteration of this loop since each fetch actually brings four words (one cache block) into the
cache. Although the extra prefetch operations are not illegal, they are unnecessary and will
degrade performance. Assuming a and b are cache block aligned, prefetching should be done only
on every fourth iteration. One solution to this problem is to surround the fetch directives with
an if condition that tests when i modulo true. The overhead of such an explicit
prefetch predicate, however, would likely offset the benefits of prefetching and therefore should be
avoided. A better solution is to unroll the loop by a factor of r where r is equal to the number of
words to be prefetched per cache block. As shown in Figure 4c, unrolling a loop involves
replicating the loop body r times and increasing the loop stride to r. Note that the fetch
for
for
for
for
(a)
(b)
(c)
(d)
Figure
4. Inner product calculation using a) no prefetching, b) simple prefetching, c)
prefetching with loop unrolling and d) software pipelining.
directives are not replicated and the index value used to calculate the prefetch address is changed
from i+1 to i+r.
The code segment given in Figure 4c removes most cache misses and unnecessary prefetches but
further improvements are possible. Note that cache misses will occur during the first iteration of
the loop since prefetches are never issued for the initial iteration. Unnecessary prefetches will occur
in the last iteration of the unrolled loop where the fetch commands attempt to access data past
the loop index boundary. Both of the above problems can be remedied by using software pipelining
techniques as shown in Figure 4d. In this figure, we have extracted select code segments out of the
loop body and placed them on either side of the original loop. Fetch statements have been
prepended to the main loop to prefetch data for the first iteration of the main loop, including ip.
This segment of code is referred to as the loop prolog. An epilog is added to the end of the main
loop to execute the final inner product computations without initiating any unnecessary prefetch
instructions.
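Since the code panels of Figure 4 are not reproduced above, the following is a hedged reconstruction of the four variants as described in the text. fetch() is mapped to GCC's __builtin_prefetch purely as an example of a non-blocking prefetch instruction, and four array elements per cache block are assumed.

/* Hedged reconstruction of Figure 4 (a)-(d); not the original code. */
#include <stdio.h>
#define fetch(addr) __builtin_prefetch(addr)   /* stand-in for a fetch instruction */

double ip_a(const double *a, const double *b, int N) {   /* (a) no prefetching */
    double ip = 0;
    for (int i = 0; i < N; i++)
        ip = ip + a[i] * b[i];
    return ip;
}

double ip_b(const double *a, const double *b, int N) {   /* (b) simple prefetching */
    double ip = 0;
    for (int i = 0; i < N; i++) {
        fetch(&a[i + 1]);
        fetch(&b[i + 1]);
        ip = ip + a[i] * b[i];
    }
    return ip;
}

double ip_c(const double *a, const double *b, int N) {   /* (c) unrolled by the block size r = 4 */
    double ip = 0;
    for (int i = 0; i < N; i += 4) {           /* still misses on the first iteration and   */
        fetch(&a[i + 4]);                      /* prefetches past the end; variant (d) fixes this */
        fetch(&b[i + 4]);
        ip = ip + a[i]     * b[i];
        ip = ip + a[i + 1] * b[i + 1];
        ip = ip + a[i + 2] * b[i + 2];
        ip = ip + a[i + 3] * b[i + 3];
    }
    return ip;
}

double ip_d(const double *a, const double *b, int N) {   /* (d) software pipelining */
    double ip = 0;
    fetch(&ip);                                /* prolog: prefetch data for the  */
    fetch(&a[0]);                              /* first iteration, including ip  */
    fetch(&b[0]);
    int i;
    for (i = 0; i < N - 4; i += 4) {           /* main loop: prefetch + compute  */
        fetch(&a[i + 4]);
        fetch(&b[i + 4]);
        ip = ip + a[i]     * b[i];
        ip = ip + a[i + 1] * b[i + 1];
        ip = ip + a[i + 2] * b[i + 2];
        ip = ip + a[i + 3] * b[i + 3];
    }
    for (; i < N; i++)                         /* epilog: compute only           */
        ip = ip + a[i] * b[i];
    return ip;
}

int main(void) {
    static double a[1024], b[1024];
    for (int i = 0; i < 1024; i++) { a[i] = i; b[i] = 1.0; }
    printf("%.0f %.0f %.0f %.0f\n", ip_a(a, b, 1024), ip_b(a, b, 1024),
           ip_c(a, b, 1024), ip_d(a, b, 1024));
    return 0;
}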
The code given in Figure 4 is said to cover all loop references because each reference is preceded
by a matching prefetch. However, one final refinement may be necessary to make these prefetches
effective. The examples in Figure 4 have been written with the implicit assumption that prefetching
one iteration ahead of the data's actual use is sufficient to hide the latency of main memory
accesses. This may not be the case. Although early studies [4] were based on this assumption,
Klaiber and Levy [20] recognized that this was not a sufficiently general solution. When loops
contain small computational bodies, it may be necessary to initiate prefetches d iterations before
the data is referenced. Here, d is known as the prefetch distance and is expressed in units of loop
iterations. Mowry et al. [25] later simplified the computation of d to d = ⌈l / s⌉,
where l is the average memory latency, measured in processor cycles, and s is the estimated cycle
time of the shortest possible execution path through one loop iteration, including the prefetch
overhead. By choosing the shortest execution path through one loop iteration and using the ceiling
operator, this calculation is designed to err on the conservative side and thus increase the likelihood
that prefetched data will be cached before it is requested by the processor.
Returning to the main loop in Figure 4d, let us assume an average miss latency of 100 processor
cycles and a loop iteration time of 45 cycles so that d = 3. Figure 5 shows the final version of the
inner product loop which has been altered to handle a prefetch distance of three. Note that the
prolog has been expanded to include a loop which prefetches several cache blocks for the initial
three iterations of the main loop. Also, the main loop has been shortened to stop prefetching three
iterations before the end of the computation. No changes are necessary for the epilog which carries
out the remaining loop iterations with no prefetching.
The loop transformations outlined above are fairly mechanical and, with some refinements, can be
applied recursively to nested loops. Sophisticated compiler algorithms based on this approach
have been developed to automatically add fetch instructions during an optimization pass of a
compiler [25], with varying degrees of success. Bernstein, et al. [3] measured the run-times of
twelve scientific benchmarks both with and without the use of prefetching on a PowerPC 601-
based system. Prefetching typically improved run-times by less than 12% although one benchmark
ran 22% faster and three others actually ran slightly slower due to prefetch instruction overhead.
Santhanam, et al. [31] found that six of the ten SPECfp95 benchmark programs ran between 26%
and 98% faster on a PA8000-based system when prefetching was enabled. Three of the four
remaining SPECfp95 programs showed less than a 7% improvement in run-time and one program
was slowed down by 12%.
Because a compiler must be able to reliably predict memory access patterns, prefetching is
normally restricted to loops containing array accesses whose indices are linear functions of the loop
indices. Such loops are relatively common in scientific codes but far less so in general applications.
Attempts at establishing similar software prefetching strategies for these applications are hampered
by their irregular referencing patterns [9,22,23]. Given the complex control structures typical of
general applications, there is often a limited window in which to reliably predict when a particular
datum will be accessed. Moreover, once a cache block has been accessed, there is less of a chance
that several successive cache blocks will also be requested when data structures such as graphs and
linked lists are used. Finally, the comparatively high temporal locality of many general
applications often result in high cache utilization thereby diminishing the benefit of prefetching.
Even when restricted to well-conformed looping structures, the use of explicit fetch instructions
exacts a performance penalty that must be considered when using software prefetching. Fetch
instructions add processor overhead not only because they require extra execution cycles but also
because the fetch source addresses must be calculated and stored in the processor. Ideally, this
prefetch address should be retained so that it need not be recalculated for the matching load or
store instruction. By allocating and retaining register space for the prefetch addresses, however,
the compiler will have less register space to allocate to other active variables. The addition of
fetch instructions is therefore said to increase register pressure which, in turn, may result in
additional spill code to manage variables "spilled" out to main memory due to insufficient register
space. The problem is exacerbated when the prefetch distance is greater than one since this implies
either maintaining d address registers to hold multiple prefetch addresses or storing these addresses
in memory if the required number of address registers are not available.
Comparing the transformed loop in Figure 5 to the original loop, it can be seen that software
prefetching also results in significant code expansion which, in turn, may degrade instruction cache
performance. Finally, because software prefetching is done statically, it is unable to detect when a
prefetched block has been prematurely evicted and needs to be re-fetched.
Figure 5. Final inner product loop transformation (prolog: prefetching only; main loop: prefetching and computation; epilog: computation only).
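A hedged reconstruction of the transformed loop sketched in Figure 5, under the same assumptions as the Figure 4 sketch above (fetch() as a stand-in for a non-blocking prefetch, four elements per cache block) and with a prefetch distance of d = 3 unrolled iterations:

/* Hedged reconstruction of Figure 5; not the original code. */
#define fetch(addr) __builtin_prefetch(addr)

double ip_final(const double *a, const double *b, int N) {
    double ip = 0;
    fetch(&ip);
    for (int i = 0; i < 12; i += 4) {          /* prolog: prefetch the blocks of */
        fetch(&a[i]);                          /* the first d = 3 iterations     */
        fetch(&b[i]);
    }
    int i;
    for (i = 0; i < N - 12; i += 4) {          /* main loop: stop prefetching    */
        fetch(&a[i + 12]);                     /* d iterations before the end    */
        fetch(&b[i + 12]);
        ip = ip + a[i]     * b[i];
        ip = ip + a[i + 1] * b[i + 1];
        ip = ip + a[i + 2] * b[i + 2];
        ip = ip + a[i + 3] * b[i + 3];
    }
    for (; i < N; i++)                         /* epilog: remaining iterations,  */
        ip = ip + a[i] * b[i];                 /* no prefetching                 */
    return ip;
}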
4. Hardware Data Prefetching
Several hardware prefetching schemes have been proposed which add prefetching capabilities to a
system without the need for programmer or compiler intervention. No changes to existing
executables are necessary so instruction overhead is completely eliminated. Hardware prefetching
also can take advantage of run-time information to potentially make prefetching more effective.
4.1 Sequential prefetching
Most (but not all) prefetching schemes are designed to fetch data from main memory into the
processor cache in units of cache blocks. It should be noted, however, that multiple word cache
blocks are themselves a form of data prefetching. By grouping consecutive memory words into
single units, caches exploit the principle of spatial locality to implicitly prefetch data that is likely
to be referenced in the near future.
The degree to which large cache blocks can be effective in prefetching data is limited by the
ensuing cache pollution effects. That is, as the cache block size increases, so does the amount of
potentially useful data displaced from the cache to make room for the new block. In shared-memory
multiprocessors with private caches, large cache blocks may also cause false sharing
which occurs when two or more processors wish to access different words within the same cache
block and at least one of the accesses is a store. Although the accesses are logically applied to
separate words, the cache hardware is unable to make this distinction since it operates only on
whole cache blocks. The accesses are therefore treated as operations applied to a single object and
cache coherence traffic is generated to ensure that the changes made to a block by a store
operation are seen by all processors caching the block. In the case of false sharing, this traffic is
unnecessary since only the processor executing the store references the word being written.
Increasing the cache block size increases the likelihood of two processors sharing data from the
same block and hence false sharing is more likely to arise.
Sequential prefetching can take advantage of spatial locality without introducing some of the
problems associated with large cache blocks. The simplest sequential prefetching schemes are
variations upon the one block lookahead (OBL) approach which initiates a prefetch for block b+1
when block b is accessed. This differs from simply doubling the block size in that the prefetched
blocks are treated separately with regard to the cache replacement and coherence policies. For
example, a large block may contain one word which is frequently referenced and several other
words which are not in use. Assuming an LRU replacement policy, the entire block will be
retained even though only a portion of the block's data is actually in use. If this large block were
replaced with two smaller blocks, one of them could be evicted to make room for more active data.
Similarly, the use of smaller cache blocks reduces the probability that false sharing will occur.
OBL implementations differ depending on what type of access to block b initiates the prefetch of
b+1. Smith [34] summarizes several of these approaches of which the prefetch-on-miss and
tagged prefetch algorithms will be discussed here. The prefetch-on-miss algorithm simply initiates
a prefetch for block b+1 whenever an access for block b results in a cache miss. If b+1 is already
cached, no memory access is initiated. The tagged prefetch algorithm associates a tag bit with
every memory block. This bit is used to detect when a block is demand-fetched or a prefetched
block is referenced for the first time. In either of these cases, the next sequential block is fetched.
Smith found that tagged prefetching reduced cache miss ratios in a unified (both instruction and
data) cache by between 50% and 90% for a set of trace-driven simulations. Prefetch-on-miss was
less than half as effective as tagged prefetching in reducing miss ratios. The reason prefetch-on-
miss is less effective is illustrated in Figure 6 where the behavior of each algorithm when accessing
three contiguous blocks is shown. Here, it can be seen that a strictly sequential access pattern will
result in a cache miss for every other cache block when the prefetch-on-miss algorithm is used but
this same access pattern results in only one cache miss when employing a tagged prefetch
algorithm.
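The difference can be reproduced with a tiny simulation (an illustration using a simple presence-bit model of the cache, not from the paper): on a strictly sequential stream, prefetch-on-miss still misses on every other block, while tagged prefetching misses only on the first.

/* Prefetch-on-miss versus tagged prefetch on a sequential access stream. */
#include <stdio.h>
#include <string.h>

#define BLOCKS 16

int main(void) {
    int present[BLOCKS], tag[BLOCKS];   /* tag = prefetched, not yet referenced */
    int misses;

    /* prefetch-on-miss: fetch b+1 only when the access to b misses */
    memset(present, 0, sizeof present);
    misses = 0;
    for (int b = 0; b < BLOCKS - 1; b++)
        if (!present[b]) {
            misses++;
            present[b] = 1;
            present[b + 1] = 1;                          /* prefetch next block */
        }
    printf("prefetch-on-miss: %d misses\n", misses);

    /* tagged: fetch b+1 on a miss or on the first use of a prefetched block */
    memset(present, 0, sizeof present);
    memset(tag, 0, sizeof tag);
    misses = 0;
    for (int b = 0; b < BLOCKS - 1; b++) {
        if (!present[b]) {
            misses++;
            present[b] = 1;
            present[b + 1] = 1; tag[b + 1] = 1;
        } else if (tag[b]) {                 /* first reference to a prefetched block */
            tag[b] = 0;
            present[b + 1] = 1; tag[b + 1] = 1;
        }
    }
    printf("tagged prefetch:  %d misses\n", misses);
    return 0;
}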
The HP PA7200 [5] serves as an example of a contemporary microprocessor that uses OBL
prefetch hardware. The PA7200 implements a tagged prefetch scheme using either a directed or an
undirected mode. In the undirected mode, the next sequential line is prefetched. In the directed
mode, the prefetch direction (forward or backward) and distance can be determined by the
pre/post-increment amount encoded in the load or store instructions. That is, when the
contents of an address register are auto-incremented, the cache block associated with a new address
is prefetched. Compared to a base case with no prefetching, the PA7200 achieved run-time
improvements in the range of 0% to 80% for 10 SPECfp95 benchmark programs [35]. Although
performance was found to be application-dependent, all but two of the programs ran more than
20% faster when prefetching was enabled.
Note that one shortcoming of the OBL schemes is that the prefetch may not be initiated far enough
in advance of the actual use to avoid a processor memory stall. A sequential access stream
resulting from a tight loop, for example, may not allow sufficient lead time between the use of
block b and the request for block b+1. To solve this problem, it is possible to increase the number
of blocks prefetched after a demand fetch from one to K, where K is known as the degree of
prefetching.
Figure 6. Three forms of sequential prefetching: (a) prefetch-on-miss, (b) tagged prefetch and (c) sequential prefetching with K = 2.
Prefetching K > 1 subsequent blocks aids the memory system in staying ahead of
rapid processor requests for sequential data blocks. As each prefetched block, b, is accessed for
the first time, the cache is interrogated to check if blocks b+1, ..., b+K are present in the cache and,
if not, the missing blocks are fetched from memory. Note that when K = 1 this scheme is identical
to tagged OBL prefetching.
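The degree-K generalization can be sketched the same way; K and the helper routines below are illustrative, and the check mirrors the description above.

#define K 2                              /* illustrative degree of prefetching */

extern int  in_cache(long b);
extern void issue_prefetch(long b);
extern unsigned char tag[];

/* Called when block b is demand-fetched or a prefetched block b is
 * referenced for the first time: keep the next K blocks on their way. */
void degree_k_prefetch(long b)
{
    for (long d = 1; d <= K; d++) {
        if (!in_cache(b + d)) {
            issue_prefetch(b + d);
            tag[b + d] = 1;              /* with K == 1 this reduces to tagged OBL */
        }
    }
}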
Although increasing the degree of prefetching reduces miss rates in sections of code that show a
high degree of spatial locality, additional traffic and cache pollution are generated by sequential
prefetching during program phases that show little spatial locality. Przybylski [30] found that this
overhead tends to make sequential prefetching unfeasible for values of K larger than one.
Dahlgren and Stenström [11] proposed an adaptive sequential prefetching policy that allows the
value of K to vary during program execution in such a way that K is matched to the degree of
spatial locality exhibited by the program at a particular point in time. To do this, a prefetch
efficiency metric is periodically calculated by the cache as an indication of the current spatial
locality characteristics of the program. Prefetch efficiency is defined to be the ratio of useful
prefetches to total prefetches where a useful prefetch occurs whenever a prefetched block results in
a cache hit. The value of K is initialized to one, incremented whenever the prefetch efficiency
exceeds a predetermined upper threshold and decremented whenever the efficiency drops below a
lower threshold as shown in Figure 7. Note that if K is reduced to zero, prefetching is effectively
disabled. At this point, the prefetch hardware begins to monitor how often a cache miss to block b
occurs while block b-1 is cached and restarts prefetching if the respective ratio of these two
numbers exceeds the lower threshold of the prefetch efficiency.
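The adjustment rule can be pictured with the following sketch; the counter names, the sampling interval and the 0.25/0.75 thresholds are placeholders, not the values studied by Dahlgren and Stenström.

/* Counters sampled over a fixed interval by hypothetical prefetch hardware. */
static long useful_prefetches, total_prefetches;
static long covered_misses, total_misses;   /* misses to b with b-1 cached / all misses */
static int  K = 1;                          /* current degree of prefetching            */

#define UPPER 0.75                          /* illustrative thresholds                  */
#define LOWER 0.25

void adapt_degree(void)                     /* invoked at the end of each interval      */
{
    if (K > 0) {
        double eff = total_prefetches ? (double)useful_prefetches / total_prefetches : 0.0;
        if (eff > UPPER)      K++;          /* strong spatial locality: prefetch more   */
        else if (eff < LOWER) K--;          /* little locality: back off, possibly to 0 */
    } else {
        /* prefetching disabled: restart it once misses to block b while b-1 is
         * cached become frequent enough relative to all misses                  */
        double ratio = total_misses ? (double)covered_misses / total_misses : 0.0;
        if (ratio > LOWER) K = 1;
    }
    useful_prefetches = total_prefetches = covered_misses = total_misses = 0;
}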
Simulations of a shared memory multiprocessor found that adaptive prefetching could achieve
appreciable reductions in cache miss ratios over tagged prefetching. However, simulated run-time
comparisons showed only slight differences between the two schemes. The lower miss ratio of
adaptive sequential prefetching was found to be partially nullified by the associated overhead of
increased memory traffic and contention.
Jouppi [19] proposed an approach where K prefetched blocks are brought into a FIFO stream
buffer before being brought into the cache. As each buffer entry is referenced, it is brought into the
cache while the remaining blocks are moved up in the queue and a new block is prefetched into the
tail position. Note that since prefetched data are not placed directly into the cache, this scheme
avoids any cache pollution. However, if a miss occurs in the cache and the desired block is also not
found at the head of the stream buffer, the buffer is flushed. Therefore, prefetched blocks must be
accessed in the order they are brought into the buffer for stream buffers to provide a performance
benefit.
Figure 7. Sequential adaptive prefetching: the degree of prefetching K is incremented (K++) when the measured prefetch efficiency rises above an upper threshold and decremented (K--) when it falls below a lower threshold.
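Jouppi's stream buffer behavior on a primary-cache miss can be summarized by the sketch below; the buffer depth, the move_to_cache() and issue_prefetch() helpers and the single-buffer restriction are simplifying assumptions.

#define DEPTH 4                            /* illustrative stream buffer depth  */

extern void move_to_cache(long b);
extern void issue_prefetch(long b);

static long buf[DEPTH];                    /* FIFO of prefetched block numbers  */
static int  head, count;
static long next_block;                    /* next block to place at the tail   */

/* Called on a primary cache miss for block b; returns 1 if the stream
 * buffer supplies the block, 0 if it had to be flushed and restarted.   */
int stream_buffer_miss(long b)
{
    if (count > 0 && buf[head] == b) {     /* hit at the head of the buffer     */
        move_to_cache(b);
        head = (head + 1) % DEPTH;
        count--;
        buf[(head + count) % DEPTH] = next_block;  /* refill the tail position  */
        issue_prefetch(next_block++);
        count++;
        return 1;
    }
    head  = 0;                             /* block not at the head: flush and  */
    count = 0;                             /* restart the stream at b + 1       */
    next_block = b + 1;
    while (count < DEPTH) {
        buf[count] = next_block;
        issue_prefetch(next_block++);
        count++;
    }
    return 0;
}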
Palacharla and Kessler [27] studied stream buffers as a replacement for a secondary cache. When
a primary cache miss occurs, one of several stream buffers is allocated to service the new reference
stream. Stream buffers are allocated in LRU order and a newly allocated buffer immediately
fetches the next K blocks following the missed block into the buffer. Palacharla and Kessler found
that eight stream buffers provided adequate performance in their simulation study. With
this configuration, stream buffer hit rates (the percentage of primary cache misses that are satisfied
by the stream buffers) typically fell between 50% and 90%.
However, memory bandwidth requirements were found to increase sharply as a result of the large
number of unnecessary prefetches generated by the stream buffers. To help mitigate this effect, a
small history buffer is used to record the most recent primary cache misses. When this history
buffer indicates that misses have occurred for both block b and block b + 1, a stream is allocated
and the blocks following b + 1 are prefetched into the buffer. Using this more selective stream
allocation policy, bandwidth requirements were reduced at the expense of some slightly reduced
stream buffer hit rates. The stream buffers described by Palacharla and Kessler were found to
provide an economical alternative to large secondary caches and were eventually incorporated into
the Cray T3E multiprocessor [26].
In general, sequential prefetching techniques require no changes to existing executables and can be
implemented with relatively simple hardware. Compared to software prefetching, sequential
hardware prefetching performs poorly when non-sequential memory access patterns are
encountered, however. Scalar references or array accesses with large strides can result in
unnecessary prefetches because these types of access patterns do not exhibit the spatial locality
upon which sequential prefetching is based. To enable prefetching of strided and other irregular
data access patterns, several more elaborate hardware prefetching techniques have been proposed.
4.2 Prefetching with arbitrary strides
Several techniques have been proposed which employ special logic to monitor the processor's
address referencing pattern to detect constant stride array references originating from looping
structures [2,13,32]. This is accomplished by comparing successive addresses used by load or
store instructions. Chen and Baer's scheme [7] is perhaps the most aggressive proposed thus
far. To illustrate its design, assume a memory instruction, m_i, references addresses a_1, a_2 and a_3
during three successive loop iterations. Prefetching for m_i will be initiated if
(a_2 - a_1) = Δ ≠ 0,
where Δ is now assumed to be the stride of a series of array accesses. The first prefetch address
will then be A_3 = a_2 + Δ, where A_3 is the predicted value of the observed address, a_3. Prefetching
continues in this way until the equality A_n = a_n
no longer holds true.
Note that this approach requires the previous address used by a memory instruction to be stored
along with the last detected stride, if any. Recording the reference histories of every memory
instruction in the program is clearly impossible. Instead, a separate cache called the reference
prediction table (RPT) holds this information for only the most recently used memory instructions.
The organization of the RPT is given in Figure 8. Table entries contain the address of the memory
instruction, the previous address accessed by this instruction, a stride value for those entries which
have established a stride and a state field which records the entry's current state. The state diagram
for RPT entries is given in Figure 9.
The RPT is indexed by the CPU's program counter (PC). When memory instruction m i is
executed for the first time, an entry for it is made in the RPT with the state set to initial signifying
that no prefetching is yet initiated for this instruction. If m i is executed again before its RPT entry
has been evicted, a stride value is calculated by subtracting the previous address stored in the RPT
from the current effective address. To illustrate the functionality of the RPT, consider the matrix
multiply code and associated RPT entries given in Figure 10.
In this example, only the load instructions for arrays a, b and c are considered and it is assumed
that the arrays begin at addresses 10000, 20000 and 30000, respectively. For simplicity, one
word cache blocks are also assumed. After the first iteration of the innermost loop, the state of the
RPT is as given in Figure 10b where instruction addresses are represented by their pseudo-code
mnemonics. Since the RPT does not yet contain entries for these instructions, the stride fields are
initialized to zero and each entry is placed in an initial state. All three references result in a cache
miss.
After the second iteration, strides are computed as shown in Figure 10c. The entries for the array
references to b and c are placed in a transient state because the newly computed strides do not
match the previous stride. This state indicates that an instruction's referencing pattern may be in
transition and a tentative prefetch is issued for the block at address (effective address + stride) if it
is not already cached. The RPT entry for the reference to array a is placed in a steady state
because the previous and current strides match. Since this entry's stride is zero, no prefetching
will be issued for this instruction. Although the reference to array a hits in the cache due to a demand
fetch in the previous iteration, the references to arrays b and c once again result in a cache miss.
During the third iteration, the entries for array references b and c move to the steady state when
the tentative strides computed in the previous iteration are confirmed. The prefetches issued during
the second iteration result in cache hits for the b and c references, provided that a prefetch distance
of one is sufficient.
Figure 8. The organization of the reference prediction table. (Each entry holds an instruction tag, the previous address, a stride and a state; the table is indexed by the PC and, combined with the effective address, yields a prefetch address.)
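The per-instruction bookkeeping described in this example can be condensed into the sketch below; the entry layout follows Figure 8 and the four states follow Figure 9, but the table indexing, the eviction policy and the exact transition and stride-update rules are simplified assumptions rather than Chen and Baer's precise design.

extern void issue_prefetch(unsigned long addr);

enum rpt_state { INITIAL, TRANSIENT, STEADY, NO_PRED };

struct rpt_entry {
    unsigned long  tag;          /* address (PC) of the memory instruction */
    unsigned long  prev_addr;    /* previous effective address             */
    long           stride;       /* last detected stride                   */
    enum rpt_state state;
};

/* Called for every executed load/store; e is the RPT entry selected by pc. */
void rpt_access(struct rpt_entry *e, unsigned long pc, unsigned long addr)
{
    if (e->tag != pc) {                        /* first encounter: (re)allocate */
        e->tag = pc;  e->prev_addr = addr;
        e->stride = 0;  e->state = INITIAL;
        return;
    }
    long stride  = (long)(addr - e->prev_addr);
    int  correct = (stride == e->stride);

    switch (e->state) {                        /* simplified transition rules   */
    case INITIAL:   e->state = correct ? STEADY : TRANSIENT; break;
    case TRANSIENT: e->state = correct ? STEADY : NO_PRED;   break;
    case STEADY:    if (!correct) e->state = INITIAL;        break;
    case NO_PRED:   if (correct)  e->state = TRANSIENT;      break;
    }
    if (!correct)
        e->stride = stride;                    /* tentative stride update       */
    e->prev_addr = addr;

    /* tentative prefetch in the transient state, regular prefetch when steady */
    if ((e->state == STEADY || e->state == TRANSIENT) && e->stride != 0)
        issue_prefetch(addr + (unsigned long)e->stride);
}

Running this logic over the example of Figure 10 reproduces the table contents shown there: the entries for b and c pass through the transient state to steady with strides 4 and 400, while a stays at stride 0 and never triggers a prefetch.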
From the above discussion, it can be seen that the RPT improves upon sequential policies by
correctly handling strided array references. However, as described above, the RPT still limits the
prefetch distance to one loop iteration. To remedy this shortcoming, a distance field may be added
to the RPT which specifies the prefetch distance explicitly. Prefetch addresses would then be
calculated as
effective address + (stride × distance).
The addition of the distance field requires some method of establishing its value for a given RPT
entry. To calculate an appropriate value, Chen and Baer decouple the maintenance of the RPT
from its use as a prefetch engine. The RPT entries are maintained under the direction of the PC as
described above but prefetches are initiated separately by a pseudo program counter, called the
lookahead program counter (LA-PC) which is allowed to precede the PC. The difference between
the PC and LA-PC is then the prefetch distance, d. Several implementation issues arise with the
addition of the lookahead program counter and the interested reader is referred to [2] for a
complete description.
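With a distance field in place, the address computation itself is trivial; the sketch assumes the LA-PC hardware has already established the distance d for the entry.

/* d = number of loop iterations by which the LA-PC leads the PC. */
unsigned long rpt_prefetch_address(unsigned long effective_addr, long stride, int d)
{
    return effective_addr + (unsigned long)(stride * (long)d);
}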
In [8], Chen and Baer compared RPT prefetching to Mowry's software prefetching scheme [25]
and found that neither method showed consistently better performance on a simulated shared
memory multiprocessor. Instead, it was found that performance depended on the individual
program characteristics of the four benchmark programs upon which the study was based.
Software prefetching was found to be more effective with certain irregular access patterns for
which an indirect reference is used to calculate a prefetch address. The RPT may not be able to
establish an access pattern for an instruction which uses an indirect address because the instruction
may generate effective addresses which are not separated by a constant stride. Also, the RPT is
less efficient at the beginning and end of a loop. Prefetches are issued by the RPT only after an
access pattern has been established. This means that no prefetches will be issued for array data for
at least the first two iterations. Chen and Baer also noted that it may take several iterations for the
RPT to achieve a prefetch distance that completely masks memory latency when the LA-PC was
used. Finally, the RPT will always prefetch past array bounds because an incorrect prediction is
necessary to stop subsequent prefetching. However, during loop steady state, the RPT was able to
dynamically adjust its prefetch distance to achieve a better overlap with memory latency than the
software scheme for some array access patterns. Also, software prefetching incurred instruction
overhead resulting from prefetch address calculation, fetch instruction execution and spill code.
Figure 9. State transition graph for reference prediction table entries. The states are: initial (start state, no prefetching), transient (stride in transition, tentative prefetch), steady (constant stride, prefetch if stride ≠ 0) and no prediction (no prefetching); transitions are driven by correct stride predictions, incorrect predictions, and incorrect predictions with a stride update.
Dahlgren and Stenström [10] compared tagged and RPT prefetching in the context of a distributed
shared memory multiprocessor. By examining the simulated run-time behavior of six benchmark
programs, it was concluded that RPT prefetching showed limited performance benefits over tagged
prefetching, which tends to perform as well or better for the most common memory access patterns.
Dahlgren showed that most array strides were less than the block size and therefore were captured
by the tagged prefetch policy. In addition, it was found that some scalar references showed a
limited amount of spatial locality that could be captured by the tagged prefetch policy but not by the
RPT mechanism. If memory bandwidth is limited, however, it was conjectured that the more
conservative RPT prefetching mechanism may be preferable since it tends to produce fewer useless
prefetches.
As with software prefetching, the majority of hardware prefetching mechanisms focus on very
regular array referencing patterns. There are some notable exceptions, however. Harrison and
Mehrotra [17] have proposed extensions to the RPT mechanism which allow for the prefetching of
data objects connected via pointers. This approach adds fields to the RPT which enable the
detection of indirect reference strides arising from structures such as linked lists and sparse
matrices. Joseph and Grunwald [18] have studied the use of a Markov predictor to drive a data
prefetcher. By dynamically recording sequences of cache miss references in a hardware table, the
prefetcher attempts to predict when a previous pattern of misses has begun to repeat itself. When
float a[100][100], b[100][100], c[100][100];
for (i = 0; i < 100; i++)
  for (j = 0; j < 100; j++)
    for (k = 0; k < 100; k++)
      a[i][j] += b[i][k] * c[k][j];
(a)
Tag Previous Address Stride State
ld b[i][k] 20,000 0 initial
ld c[k][j] 30,000 0 initial
ld a[i][j] 10,000 0 initial
(b)
Tag Previous Address Stride State
ld b[i][k] 20,004 4 transient
ld c[k][j] 30,400 400 transient
ld a[i][j] 10,000 0 steady
(c)
Tag Previous Address Stride State
ld b[i][k] 20,008 4 steady
ld c[k][j] 30,800 400 steady
ld a[i][j] 10,000 0 steady
(d)
Figure 10. The RPT during execution of matrix multiply.
the current cache miss address is found in the table, prefetches for likely subsequent misses are
issued to a prefetch request queue. To prevent cache pollution and wasted memory bandwidth,
prefetch requests may be displaced from this queue by requests that belong to reference sequences
with a higher probability of occurring.
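A toy version of such a Markov miss predictor is sketched below; the table geometry, the hashing of miss addresses and the prefetch queue interface (enqueue_prefetch()) are illustrative choices, not Joseph and Grunwald's implementation.

#define ROWS 1024                     /* miss addresses tracked                 */
#define WAYS 4                        /* predicted successors kept per address  */

extern void enqueue_prefetch(unsigned long block_addr);

struct markov_row { unsigned long miss; unsigned long next[WAYS]; };
static struct markov_row table[ROWS];
static unsigned long last_miss;

static struct markov_row *row_for(unsigned long addr)
{
    return &table[(addr >> 6) % ROWS];          /* 64-byte blocks, direct-mapped */
}

void markov_miss(unsigned long addr)            /* called on every cache miss    */
{
    /* learn: record addr as the most recent successor of the previous miss */
    struct markov_row *prev = row_for(last_miss);
    if (last_miss != 0 && prev->miss == last_miss) {
        for (int i = WAYS - 1; i > 0; i--)
            prev->next[i] = prev->next[i - 1];
        prev->next[0] = addr;
    }
    /* predict: issue prefetches for the recorded successors of addr */
    struct markov_row *r = row_for(addr);
    if (r->miss != addr) {                      /* (re)allocate this row         */
        r->miss = addr;
        for (int i = 0; i < WAYS; i++) r->next[i] = 0;
    } else {
        for (int i = 0; i < WAYS; i++)
            if (r->next[i]) enqueue_prefetch(r->next[i]);
    }
    last_miss = addr;
}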
5. Integrating Hardware and Software Prefetching
Software prefetching relies exclusively on compile-time analysis to schedule fetch instructions
within the user program. In contrast, the hardware techniques discussed thus far infer prefetching
opportunities at run-time without any compiler or processor support. Noting that each of these
approaches has its advantages, some researchers have proposed mechanisms that combine elements
of both software and hardware prefetching.
Gornish and Veidenbaum [15] describe a variation on tagged hardware prefetching in which the
degree of prefetching (K) for a particular reference stream is calculated at compile time and passed
on to the prefetch hardware. To implement this scheme, a prefetching degree (PD) field is
associated with every cache entry. A special fetch instruction is provided that prefetches the
specified block into the cache and then sets the tag bit and the value of the PD field of the cache
entry holding the prefetched block. The first K blocks of a sequential reference stream are
prefetched using this instruction. When a tagged block, b, is demand fetched, the value in its PD
field, K b , is added to the block address to calculate a prefetch address. The PD field of the newly
prefetched block is then set to K b and the tag bit is set. This insures that the appropriate value of K
is propagated through the reference stream. Prefetching for non-sequential reference patterns is
handled by ordinary fetch instructions.
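The hardware side of this scheme (reacting to a demand fetch of a tagged block) can be sketched as follows; the cache-entry layout and the lookup() and issue_prefetch() helpers are hypothetical.

struct cache_entry { int tag_bit; int pd; };   /* pd holds the prefetch degree K_b */

extern struct cache_entry *lookup(unsigned long block);
extern void issue_prefetch(unsigned long block);

/* Invoked when block b is demand-fetched and found to carry a set tag bit. */
void propagate_prefetch(unsigned long b)
{
    struct cache_entry *e = lookup(b);
    if (e == 0 || !e->tag_bit)
        return;
    e->tag_bit = 0;

    unsigned long target = b + (unsigned long)e->pd;  /* block address + K_b     */
    issue_prefetch(target);

    struct cache_entry *t = lookup(target);           /* entry of the new block  */
    if (t) {
        t->pd      = e->pd;                           /* propagate K through the */
        t->tag_bit = 1;                               /* reference stream        */
    }
}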
Zheng and Torrellas [39] suggest an integrated technique that enables prefetching for irregular data
structures. This is accomplished by tagging memory locations in such a way that a reference to one
element of a data object initiates a prefetch of either other elements within the referenced object or
objects pointed to by the referenced object. Both array elements and data structures connected via
pointers can therefore be prefetched. This approach relies on the compiler to initialize the tags in
memory, but the actual prefetching is handled by hardware within the memory system.
The use of a programmable prefetch engine has been proposed by Chen [6] as an extension to the
reference prediction table described in Section 4.2. Chen's prefetch engine differs from the RPT in
that the tag, address and stride information are supplied by the program rather than being
dynamically established in hardware. Entries are inserted into the engine by the program before
entering looping structures that can benefit from prefetching. Once programmed, the prefetch
engine functions much like the RPT with prefetches being initiated when the processor's program
counter matches one of the tag fields in the prefetch engine.
VanderWiel and Lilja [36] propose a prefetch engine that is external to the processor. The engine is
a general processor that executes its own program to prefetch data for the CPU. Through a shared
second-level cache, a producer-consumer relationship is established between the two processors in
which the engine prefetches new data blocks into the cache only after previously prefetched data
have been accessed by the compute processor. The processor also partially directs the actions of
the prefetch engine by writing control information to memory-mapped registers within the prefetch
engine's support logic.
These integrated techniques are designed to take advantage of compile-time program information
without introducing as much instruction overhead as pure software prefetching. Much of the
speculation performed by pure hardware prefetching is also eliminated, resulting in fewer
unnecessary prefetches. Although no commercial systems yet support this model of prefetching, the
simulation studies used to evaluate the above techniques indicate that performance can be enhanced
over pure software or hardware prefetch mechanisms.
6. Prefetching in Multiprocessors
In addition to the prefetch mechanisms above, several multiprocessor-specific prefetching
techniques have been proposed. Prefetching in these systems differs from uniprocessors for at least
three reasons. First, multiprocessor applications are typically written using different programming
paradigms than uniprocessors. These paradigms can provide additional array referencing
information which enable more accurate prefetch mechanisms. Second, multiprocessor systems
frequently contain additional memory hierarchies which provide different sources and destinations
for prefetching. Finally, the performance implications of data prefetching can take on added
significance in multiprocessors because these systems tend to have higher memory latencies and
more sensitive memory interconnects.
Fu and Patel [12] examined how data prefetching might improve the performance of vectorized
multiprocessor applications. This study assumes vector operations are explicitly specified by the
programmer and supported by the instruction set. Because the vectorized programs describe
computations in terms of a series of vector and matrix operations, no compiler analysis or stride
detection hardware is required to establish memory access patterns. Instead, the stride information
encoded in vector references is made available to the processor caches and associated prefetch
hardware.
Two prefetching policies were studied. The first is a variation upon the prefetch-on-miss policy in
which K consecutive blocks following a cache miss are fetched into the processor cache. This
implementation of prefetch-on-miss differs from that presented earlier in that prefetches are issued
only for scalars and vector references with a stride less than or equal to the cache block size. The
second prefetch policy, which will be referred to as vector prefetching here, is similar to the first
policy with the exception that prefetches for vector references with large strides are also issued. If
the vector reference for block b misses in the cache, then block b and the K following blocks of the
vector reference stream are fetched.
Fu and Patel found both prefetch policies improve performance over the no prefetch case on an
Alliant FX/8 simulator. Speedups were more pronounced when smaller cache blocks were assumed
since small block sizes limit the amount of spatial locality a non-prefetching cache can capture
while prefetching caches can offset this disadvantage by simply prefetching more blocks. In
contrast to other studies, Fu and Patel found both sequential prefetching policies were effective for
values of K up to 32. This is in apparent conflict with earlier studies which found sequential
prefetching to degrade performance for K > 1. Much of this discrepancy may be explained by
noting how vector instructions are exploited by the prefetching scheme used by Fu and Patel. In
the case of prefetch-on-miss, prefetching is suppressed when a large stride is specified by the
instruction. This avoids useless prefetches which degraded the performance of the original policy.
Although vector prefetching does issue prefetches for large stride referencing patterns, it is a more
precise mechanism than other sequential schemes since it is able to take advantage of stride
information provided by the program.
Comparing the two schemes, it was found that applications with large strides benefited the most
from vector prefetching, as expected. For programs in which scalar and unit-stride references
dominate, the prefetch-on-miss policy tended to perform slightly better. For these programs, the
lower miss ratios resulting from the vector prefetching policy were offset by the corresponding
increase in bus traffic.
Gornish et al. [14] examined prefetching in a distributed memory multiprocessor where global
and local memory are connected through a multistage interconnection network. Data are prefetched
from global to local memory in large, asynchronous block transfers to achieve higher network
bandwidth than would be possible with word-at-a-time transfers. Since large amounts of data are
prefetched, the data are placed in local memory rather than the processor cache to avoid excessive
cache pollution. Some form of software-controlled caching is assumed to be responsible for
translating global array addresses to local addresses after the data have been placed in local memory.
As with software prefetching in single-processor systems, loop transformations are performed by
the compiler to insert prefetch operations into the user code. However, rather than inserting
fetch instructions for individual words within the loop body, entire blocks of memory are
prefetched before the loop is entered. Figure 11 shows how this block prefetching may be used
with a vector-matrix product calculation. In Figure 11b, the iterations of the original loop (Figure
11a) have been partitioned among NPROC processors of the multiprocessor system so that each
processor iterates over 1/NPROC-th of a and c. Also note that the array c is prefetched a row at a
time. Although it is possible to pull out the prefetch for c so that the entire array is fetched into
local memory before entering the outermost loop, it is assumed here that c is very large and a
prefetch of the entire array would occupy more local memory than is available.
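In the absence of the original listing, the transformed loop of Figure 11b might look roughly like the sketch below; block_prefetch() stands for a hypothetical asynchronous global-to-local block transfer, and the problem size, the partitioning and the choice to prefetch all of b once are illustrative assumptions.

#define N     1024                     /* illustrative problem size          */
#define NPROC 16                       /* number of processors               */

extern void block_prefetch(const void *addr, unsigned long bytes);

double a[N], b[N], c[N][N];            /* vector-matrix product a = c * b    */

void vm_product(int me)                /* me = 0 .. NPROC-1                  */
{
    int lo = me * (N / NPROC), hi = lo + N / NPROC;

    block_prefetch(b, N * sizeof(double));           /* vector b, fetched once     */
    for (int i = lo; i < hi; i++) {
        block_prefetch(c[i], N * sizeof(double));    /* array c, one row at a time */
        double s = 0.0;
        for (int j = 0; j < N; j++)
            s += c[i][j] * b[j];
        a[i] = s;                      /* a is written, hence never prefetched */
    }
}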
The block fetches given in Figure 11b will add processor overhead to the original computation in a
manner similar to the software prefetching scheme described earlier. Although the block-oriented
prefetch operations require size and stride information, significantly less overhead will be incurred
than with the word-oriented scheme since fewer prefetch operations will be needed. Assuming
equal problem sizes and ignoring prefetches for a, the loop given in Figure 11 will generate N+1
block prefetches, as compared to the far larger number of per-element prefetches that would result
from applying a word-oriented prefetching scheme.
Although a single bulk data transfer is more efficient than dividing the transfer into several smaller
messages, the former approach will tend to increase network congestion when several such
messages are being transferred at once. Combined with the increased request rate prefetching
induces, this network contention can lead to significantly higher average memory latencies. For a
set of six numerical benchmark programs, Gornish noted that prefetching increased average
memory latency by a factor of between 5.3 and 12.7 over the no prefetch case.
Figure 11. Block prefetching for a vector-matrix product calculation: (a) the original loop and (b) the transformed loop with block prefetches.
An implication of prefetching into the local memory rather than the cache is that the array a in
Figure 11 cannot be prefetched. In general, this scheme requires that all data must be read-only
between prefetch and use because no coherence mechanism is provided which allows writes by one
processor to be seen by the other processors. Data transfers are also restricted by control
dependencies within the loop bodies. If an array reference is predicated by a conditional statement,
no prefetching is initiated for the array. This is done for two reasons. First, the conditional may
only test true for a subset of the array references and initiating a prefetch of the entire array would
result in the unnecessary transfer of a potentially large amount of data. Second, the conditional
may guard against referencing non-existent data and initiating a prefetch for such data could result
in unpredictable behavior.
Honoring the above data and control dependencies limits the amount of data which can be
prefetched. On average, 42% of loop memory references for the six benchmark programs used by
Gornish could not be prefetched due to these constraints. Together with the increased average
memory latencies, the suppression of these prefetches limited the speedup due to prefetching to less
than 1.1 for five of the six benchmark programs.
Mowry and Gupta [24] studied the effectiveness of software prefetching for the DASH DSM
multiprocessor architecture. In this study, two alternative designs were considered. The first
places prefetched data in a remote access cache (RAC) which lies between the interconnection
network and the processor cache hierarchy of each node in the system. The second design
alternative simply prefetched data from remote memory directly into the primary processor cache.
In both cases, the unit of transfer was a cache block.
The use of a separate prefetch cache such as the RAC is motivated by a desire to reduce contention
for the primary data cache. By separating prefetched data from demand-fetched data, a prefetch
cache avoids polluting the processor cache and provides more overall cache space. This approach
also avoids processor stalls that can result from waiting for prefetched data to be placed in the
cache. However, in the case of a remote access cache, only remote memory operations benefit
from prefetching since the RAC is placed on the system bus and access times are approximately
equal to those of main memory.
Simulation runs of three scientific benchmarks found that prefetching directly into the primary
cache offered the most benefit with an average speedup of 1.94 compared to an average of 1.70
when the RAC was used. Despite significantly increasing cache contention and reducing overall
cache space, prefetching into the primary cache resulted in higher cache hit rates, which proved to
be the dominant performance factor. As with software prefetching in single processor systems, the
benefit of prefetching was application-specific. Two array-based programs achieved
speedups over the non-prefetch case of 2.53 and 1.99 while the third, less regular, program showed
a speedup of 1.30.
7. Conclusions
Prefetching schemes are diverse. To help categorize a particular approach it is useful to answer
three basic questions concerning the prefetching mechanism: 1) When are prefetches initiated, 2)
where are prefetched data placed, and 3) what is prefetched?
When Prefetches can be initiated either by an explicit fetch operation within a program, by logic
that monitors the processor's referencing pattern to infer prefetching, or by a combination
of these approaches. However they are initiated, prefetches must be issued in a timely
manner. If a prefetch is issued too early there is a chance that the prefetched data will
displace other useful data from the higher levels of the memory hierarchy or be displaced
itself before use. If the prefetch is issued too late, it may not arrive before the actual
memory reference and thereby introduce processor stall cycles. Prefetching mechanisms
also differ in their precision. Software prefetching issues fetches only for data that is
likely to be used, while hardware schemes tend to fetch data in a more speculative manner.
Where The decision of where to place prefetched data in the memory hierarchy is a fundamental
design decision. Clearly, data must be moved into a higher level of the memory hierarchy
to provide a performance benefit. The majority of schemes place prefetched data in some
type of cache memory. Other schemes place prefetched data in dedicated buffers to protect
the data from premature cache evictions and prevent cache pollution. When prefetched
data are placed into named locations, such as processor registers or memory, the prefetch
is said to be binding and additional constraints must be imposed on the use of the data.
Finally, multiprocessor systems can introduce additional levels into the memory hierarchy
which must be taken into consideration.
What Data can be prefetched in units of single words, cache blocks, contiguous blocks of
memory or program data objects. Often, the amount of data fetched is determined by the
organization of the underlying cache and memory system. Cache blocks may be the most
appropriate size for uniprocessors and SMPs while larger memory blocks may be used to
amortize the cost of initiating a data transfer across an interconnection network of a large,
distributed memory multiprocessor.
These three questions are not independent of each other. For example, if the prefetch destination is
a small processor cache, data must be prefetched in a way that minimizes the possibility of
polluting the cache. This means that precise prefetches will need to be scheduled shortly before the
actual use and the prefetch unit must be kept small. If the prefetch destination is large, the timing
and size constraints can be relaxed.
Once a prefetch mechanism has been specified, it is natural to wish to compare it with other
schemes. Unfortunately, a comparative evaluation of the various proposed prefetching techniques
is hindered by widely varying architectural assumptions and testing procedures. However, some
general observations can be made.
The majority of prefetching schemes and studies concentrate on numerical, array-based
applications. These programs tend to generate memory access patterns that, although
comparatively predictable, do not yield high cache utilization and therefore benefit more from
prefetching than general applications. As a result, automatic techniques which are effective for
general programs remain largely unstudied.
To be effective, a prefetch mechanism must perform well for the most common types of memory
referencing patterns. Scalar and unit-stride array references typically dominate in most
applications and prefetching mechanisms should capture this type of access pattern. Sequential
prefetching techniques concentrate exclusively on these access patterns. Although comparatively
infrequent, large stride array referencing patterns can result in very poor cache utilization. RPT
mechanisms sacrifice some scalar performance in order to cover strided referencing patterns.
Software prefetching handles both types of referencing patterns but introduces instruction
overhead. Integrated schemes attempt to reduce instruction overhead while still offering better
prefetch coverage than pure hardware techniques.
Finally, memory systems must be designed to match the added demands prefetching imposes.
Despite a reduction in overall execution time, prefetch mechanisms tend to increase average
memory latency. This is a result of effectively increasing the memory reference request rate of the
processor, thereby introducing congestion within the memory system. This can be a particular
problem in multiprocessor systems where buses and interconnect networks are shared by several
processors.
Despite these application and system constraints, data prefetching techniques have produced
significant performance improvements on commercial systems. Efforts to improve and extend these
known techniques to more diverse architectures and applications is an active and promising area of
research. The need for new prefetching techniques is likely to continue to be motivated by
increasing memory access penalties arising from both the widening gap between microprocessor
and memory performance and the use of more complex memory hierarchies.
8.
--R
"Performance Evaluation of Computing Systems with Memory Hierarchies,"
"An Effective On-chip Preloading Scheme to Reduce Data Access Penalty,"
"Compiler Techniques for Data Prefetching on the PowerPC,"
"Software Prefetching,"
"Design of the HP PA 7200 CPU,"
"An Effective Programmable Prefetch Engine for On-chip Caches,"
"Effective Hardware-Based Data Prefetching for High Performance Processors,"
"A Performance Study of Software and Hardware Data Prefetching Schemes,"
"Data Access Microarchitectures for Superscalar Processors with Compiler-Assisted Data prefetching,"
"Effectiveness of Hardware-based Stride and Sequential Prefetching in Shared-memory Multiprocessors,"
"Fixed and Adaptive Sequential Prefetching in Shared-memory Multiprocessors,"
"Data Prefetching in Multiprocessor Vector Cache Memories,"
"Stride Directed Prefetching in Scalar Processors,"
"Compiler-directed Data Prefetching in Multiprocessors with Memory Hierarchies,"
"An Integrated Hardware/Software Scheme for Shared-Memory Multiprocessors,"
"Comparative Evaluation of Latency Reducing and Tolerating Techniques,"
"A Data Prefetch Mechanism for Accelerating General Computation,"
"Prefetching using Markov Predictors,"
"Improving Direct-mapped Cache Performance by the Addition of a Small Fully-associative Cache and Prefetch Buffers,"
"An Architecture for Software-Controlled Data Prefetching,"
"Lockup-free Instruction Fetch/prefetch Cache Organization,"
"SPAID: Software Prefetching in Pointer and Call-Intensive Environments,"
"Compiler-based Prefetching for Recursive Data Structures,"
"Tolerating Latency through Software-controlled Prefetching in Shared-memory Multiprocessors,"
"Design and Evaluation of a Compiler Algorithm for Prefetching,"
The Cray T3E Architecture Overview
"Evaluating Stream Buffers as a Secondary Cache Replacement,"
"Exposing I/O concurrency with informed prefetching,"
Software Methods for Improvement of Cache Performance on Supercomputer Applications.
"The Performance Impact of Block Sizes and Fetch Strategies,"
"Data Prefetching on the HP PA-8000,"
"Prefetch Unit for Vector Operations on Scalar Computers,"
"Sequential Program Prefetching in Memory Hierarchies,"
"Cache Memories,"
"When Caches are not Enough : Data Prefetching Techniques,"
"Hiding Memory Latency with a Data Prefetch Engine,"
"The MIPS R10000 Superscalar Microprocessor,"
"An intelligent I-cache prefetch mechanism,"
"Speeding up Irregular Applications in Shared-Memory Multiprocessors: Memory Binding and Group Prefetching,"
--TR
Software prefetching
Tolerating latency through software-controlled prefetching in shared-memory multiprocessors
An architecture for software-controlled data prefetching
Data prefetching in multiprocessor vector cache memories
Data access microarchitectures for superscalar processors with compiler-assisted data prefetching
An effective on-chip preloading scheme to reduce data access penalty
Prefetch unit for vector operations on scalar computers (abstract)
Design and evaluation of a compiler algorithm for prefetching
Stride directed prefetching in scalar processors
Cache coherence in large-scale shared-memory multiprocessors
Evaluating stream buffers as a secondary cache replacement
A performance study of software and hardware data prefetching schemes
Speeding up irregular applications in shared-memory multiprocessors
Compiler techniques for data prefetching on the PowerPC
An effective programmable prefetch engine for on-chip caches
Compiler-based prefetching for recursive data structures
Compiler-directed data prefetching in multiprocessors with memory hierarchies
Prefetching using Markov predictors
Data prefetching on the HP PA-8000
Dependence based prefetching for linked data structures
The performance impact of block sizes and fetch strategies
Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers
Cache Memories
Exposing I/O concurrency with informed prefetching
When Caches Aren't Enough
The MIPS R10000 Superscalar Microprocessor
Limited Bandwidth to Affect Processor Design
Effective Hardware-Based Data Prefetching for High-Performance Processors
Branch-Directed and Stride-Based Data Cache Prefetching
Lockup-free instruction fetch/prefetch cache organization
A study of branch prediction strategies
Effectiveness of hardware-based stride and sequential prefetching in shared-memory multiprocessors
Distributed Prefetch-buffer/Cache Design for High Performance Memory Systems
Software methods for improvement of cache performance on supercomputer applications
--CTR
Nathalie Drach , Jean-Luc Béchennec , Olivier Temam, Increasing hardware data prefetching performance using the second-level cache, Journal of Systems Architecture: the EUROMICRO Journal, v.48 n.4-5, p.137-149, December 2002
Alexander Gendler , Avi Mendelson , Yitzhak Birk, A PAB-based multi-prefetcher mechanism, International Journal of Parallel Programming, v.34 n.2, p.171-188, April 2006
Addressing mode driven low power data caches for embedded processors, Proceedings of the 3rd workshop on Memory performance issues: in conjunction with the 31st international symposium on computer architecture, p.129-135, June 20-20, 2004, Munich, Germany
Binny S. Gill , Dharmendra S. Modha, SARC: sequential prefetching in adaptive replacement cache, Proceedings of the USENIX Annual Technical Conference 2005 on USENIX Annual Technical Conference, p.33-33, April 10-15, 2005, Anaheim, CA
Ismail Kadayif , Mahmut Kandemir , Guilin Chen, Studying interactions between prefetching and cache line turnoff, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Ismail Kadayif , Mahmut Kandemir , Feihui Li, Prefetching-aware cache line turnoff for saving leakage energy, Proceedings of the 2006 conference on Asia South Pacific design automation, January 24-27, 2006, Yokohama, Japan
Aviral Shrivastava , Eugene Earlie , Nikil Dutt , Alex Nicolau, Aggregating processor free time for energy reduction, Proceedings of the 3rd IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis, September 19-21, 2005, Jersey City, NJ, USA
Vlad-Mihai Panait , Amit Sasturkar , Weng-Fai Wong, Static Identification of Delinquent Loads, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, p.303, March 20-24, 2004, Palo Alto, California
Resit Sendag , Ying Chen , David J. Lilja, The Impact of Incorrectly Speculated Memory Operations in a Multithreaded Architecture, IEEE Transactions on Parallel and Distributed Systems, v.16 n.3, p.271-285, March 2005
Françoise Fabret , H. Arno Jacobsen , François Llirbat , João Pereira , Kenneth A. Ross , Dennis Shasha, Filtering algorithms and implementation for very fast publish/subscribe systems, ACM SIGMOD Record, v.30 n.2, p.115-126, June 2001
Jike Cui , Mansur. H. Samadzadeh, A new hybrid approach to exploit localities: LRFU with adaptive prefetching, ACM SIGMETRICS Performance Evaluation Review, v.31 n.3, p.37-43, December
Jean Christophe Beyler , Philippe Clauss, Performance driven data cache prefetching in a dynamic software optimization system, Proceedings of the 21st annual international conference on Supercomputing, June 17-21, 2007, Seattle, Washington
Wang , Kuan-Ching Li , Kuo-Jen Wang , Ssu-Hsuan Lu, On the Design and Implementation of an Effective Prefetch Strategy for DSM Systems, The Journal of Supercomputing, v.37 n.1, p.91-112, July 2006
Kenneth A. Ross, Conjunctive selection conditions in main memory, Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 03-05, 2002, Madison, Wisconsin
Weidong Shi , Hsien-Hsin S. Lee , Mrinmoy Ghosh , Chenghuai Lu , Alexandra Boldyreva, High Efficiency Counter Mode Security Architecture via Prediction and Precomputation, ACM SIGARCH Computer Architecture News, v.33 n.2, p.14-24, May 2005
Brendon Cahoon , Kathryn S. McKinley, Simple and effective array prefetching in Java, Proceedings of the 2002 joint ACM-ISCOPE conference on Java Grande, p.86-95, November 03-05, 2002, Seattle, Washington, USA
Kenneth A. Ross, Selection conditions in main memory, ACM Transactions on Database Systems (TODS), v.29 n.1, p.132-161, March 2004
Luis M. Ramos , José Luis Briz , Pablo E. Ibáñez , Víctor Viñals, Data prefetching in a cache hierarchy with high bandwidth and capacity, Proceedings of the 2006 workshop on MEmory performance: DEaling with Applications, systems and architectures, p.37-44, September 16-20, 2006, Seattle, Washington
Sangyeun Cho , Lei Jin, Managing Distributed, Shared L2 Caches through OS-Level Page Allocation, Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture, p.455-468, December 09-13, 2006
Zhen Yang , Xudong Shi , Feiqi Su , Jih-Kwon Peir, Overlapping dependent loads with addressless preload, Proceedings of the 15th international conference on Parallel architectures and compilation techniques, September 16-20, 2006, Seattle, Washington, USA
Gokul B. Kandiraju , Anand Sivasubramaniam, Going the distance for TLB prefetching: an application-driven study, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
David Siegwart , Martin Hirzel, Improving locality with parallel hierarchical copying GC, Proceedings of the 2006 international symposium on Memory management, June 10-11, 2006, Ottawa, Ontario, Canada
Ali-Reza Adl-Tabatabai , Richard L. Hudson , Mauricio J. Serrano , Sreenivas Subramoney, Prefetch injection based on hardware monitoring and object metadata, ACM SIGPLAN Notices, v.39 n.6, May 2004
Jun Yan , Wei Zhang, Hybrid multi-core architecture for boosting single-threaded performance, ACM SIGARCH Computer Architecture News, v.35 n.1, p.141-148, March 2007
Weidong Shi , Hsien-Hsin S. Lee, Accelerating memory decryption and authentication with frequent value prediction, Proceedings of the 4th international conference on Computing frontiers, May 07-09, 2007, Ischia, Italy
Trishul M. Chilimbi , Martin Hirzel, Dynamic hot data stream prefetching for general-purpose programs, ACM SIGPLAN Notices, v.37 n.5, May 2002
Jacob Leverich , Hideho Arakida , Alex Solomatnikov , Amin Firoozshahian , Mark Horowitz , Christos Kozyrakis, Comparing memory systems for chip multiprocessors, ACM SIGARCH Computer Architecture News, v.35 n.2, May 2007
A. Mahjur , A. H. Jahangir , A. H. Gholamipour, On the performance of trace locality of reference, Performance Evaluation, v.60 n.1-4, p.51-72, May 2005
Kristof Beyls , Erik H. D'Hollander, Generating cache hints for improved program efficiency, Journal of Systems Architecture: the EUROMICRO Journal, v.51 n.4, p.223-250, April 2005 | prefetching;memory latency |
359194 | Dynamically Relaxed Block Incomplete Factorizations for Solving Two- and Three-Dimensional Problems. | To efficiently solve second-order discrete elliptic PDEs by Krylov subspace-like methods, one needs to use some robust preconditioning techniques. Relaxed incomplete factorizations (RILU) are powerful candidates.\ Unfortunately, their efficiency critically depends on the choice of the relaxation parameter $\omega$ whose "optimal" value is not only hard to estimate but also strongly varies from one problem to another. These methods interpolate between the popular ILU and its modified variant (MILU). Concerning the pointwise schemes, a new variant of RILU that dynamically computes variable $\omega=\omega_i$ has been introduced recently. Like its ancestor RILU and unlike standard methods, it is robust with respect to both existence and performance. On top of that, it breaks the problem-dependence of ``$\omega_{opt}$."\ A block version of this dynamically relaxed method is proposed and compared with classical pointwise and blockwise methods as well as with some existing "dynamic" variants, showing that with the new blockwise preconditioning technique, anisotropies are handled more effectively. | Introduction
. As model problem, we take the following self-adjoint second
order elliptic PDE
− ∂/∂x ( p ∂u/∂x ) − ∂/∂y ( q ∂u/∂y ) − ∂/∂z ( r ∂u/∂z ) + t u = f   (1.1)
in Ω with suitable boundary conditions on ∂Ω,
where Ω denotes the unit square (2D case) or cube (3D case); the coefficients p, q
and r are positive and bounded while t is nonnegative and bounded. The differential
operator is discretized by using the five-point (2D case) or seven-point (3D case) finite
difference approximation (point scheme box integration [30, 41]). The mesh points are
ordered according to the lexicographic ordering. This results in a large system
of linear equations of the form
A u = b ,   (1.2)
where A is a block tridiagonal or pentadiagonal diagonally dominant Stieltjes matrix,
b is a vector that depends on both the rhs f of (1.1) and the boundary conditions,
while u is the vector of unknowns. Combined with some appropriate preconditioning
matrix, the conjugate gradient method is (one of) the most widely used method(s) (see
e.g. [1, 2, 6, 11, 19, 20, 35]) for solving system (1.2). Relaxed incomplete factorizations
(RILU) are powerful preconditioning techniques that interpolate, through a relaxation
parameter, between the popular incomplete LU factorization ILU and its modified
variant MILU that preserves rowsums of A [4, 13]. As opposed to ILU and MILU,
the two main advantages of RILU are the following:
† Research supported by the Commission of the European Communities HCM Contract No. ERB-CHBG-CT93-0420, at Utrecht University, Mathematical Institute, The Netherlands. Current address: Université Libre de Bruxelles, Service des Milieux Continus (CP 194/5), 50, avenue F.D. Roosevelt, B-1050 Brussels, Belgium. magolu@ulb.ac.be
‡ Université Libre de Bruxelles, Service de Métrologie Nucléaire (CP 165), 50 av. F.D. Roosevelt, B-1050 Brussels, Belgium. ynotay@ulb.ac.be. Research supported by the "Fonds National de la Recherche Scientifique", Belgium.
1. it does not suffer much from existence problems [13, 17, 18];
2. it is robust with respect to discontinuities and anisotropy [4, 34].
Two major inconveniences are that the "optimal" value of the relaxation parameter ω
strongly varies from one problem to another and that the behavior can be very
sensitive to variations of ω around the observed "ω_opt" [12, 39]. In [34], a new variant
of RILU has been proposed. There, the relaxation parameter is variable and dynamically
computed during the incomplete factorization phase. Like its precursor RILU,
it is robust with respect to both existence and performance. In addition, its performance
does not critically depend on the parameter involved. We intend to propose a
block version of this dynamically relaxed method (DRBILU) and to compare it with
classical pointwise and blockwise methods as well as with some existing "dynamic"
variants. We stress that, even for three-dimensional problems, we consider a linewise
partitioning of the unknowns : each block corresponds to a set of gridpoints along a
line parallel to the x-axis in the physical domain [28].
Our study is outlined as follows. Needed general terminology and notation are
gathered in Section 2. In Section 3, we first review some variants of block incomplete
factorizations in Subsection 3.1 and 3.2. We next establish, in Subsection 3.3,
some theoretical results that motivate the introduction of dynamically relaxed block
preconditioners. Comparative numerical experiments are reported and discussed in
Section 4.
2. General terminology and notation.
2.1. Order relation. The order relation between real matrices and vectors is
the usual componentwise order: B ≥ C (B > C) if and only if b_ij ≥ c_ij (b_ij > c_ij) for all i, j.
A is called nonnegative (positive) if A ≥ 0 (A > 0).
2.2. Stieltjes matrices. A real square matrix A is called a Stieltjes matrix (or
equivalently, a symmetric M-matrix) if it is symmetric positive definite and none of
its offdiagonal entries is positive [10].
2.3. Normalized point LU-factorization. Given a Stieltjes matrix S, by its
normalized point LU factorization we understand the factorization
S = L_s P_s^{-1} L_s^t , where
P_s is pointwise diagonal and L_s is pointwise lower triangular such that diag(L_s) = P_s.
2.4. Miscellaneous symbols. We describe below some symbols that are used
in our study. A denotes a given square matrix of order n.
A^t : the transpose of A
λ_i(A) : the ith smallest eigenvalue of A
λ_min(A) = λ_1(A) : the smallest eigenvalue of A
λ_max(A) = λ_n(A) : the largest eigenvalue of A
diag(A) : the pointwise diagonal matrix whose diagonal entries coincide with those of A
tridiag(A) : the pointwise tridiagonal matrix whose main three diagonals coincide with those of A
Diag(A) : the block diagonal matrix whose block diagonal entries coincide with those of A
offdiag(A) : the off-diagonal part of A
e : the vector all of whose components are equal to 1
Throughout this work, the term block will refer to the linewise partitioning mentioned
in the introduction.
3. Blockwise incomplete factorizations. For simplicity, throughout this work,
we consider only blockwise incomplete factorizations with no fill-in allowed outside
the main block diagonal part of A. Let n x , n y and n z denote the number of unknowns
in respectively the x-, the y- and the z-direction (if any), then the order of the matrix
A is n = n_x n_y (2D case) or n = n_x n_y n_z (3D case). According to our assumptions, A is either block tridiagonal (2D
case), i.e. A_{i,j} = 0 whenever |i − j| > 1,
or block pentadiagonal (3D case), i.e. A_{i,j} = 0 whenever j ∉ {i − n_y, i − 1, i, i + 1, i + n_y}.
A may be split as
A = D − L − L^t
into its block diagonal part D and strictly block lower (upper) triangular part −L
(−L^t). The matrix
B = (P − L) P^{-1} (P − L)^t ,
where P is the block diagonal matrix computed according to Algorithm 3.1, is referred
to as the block incomplete LU-factorization (BILU) of A [16]. Here L_{s_i} P_{s_i}^{-1} L_{s_i}^t
stands for the normalized point LU factorization of P_{i,i}. Other choices for the approximate block inverses involved
are discussed in [8, 16]. Algorithms that handle more general matrices may be found in
[8, 25].
Given that all the blocks A_{i,i−1} and A_{i,i−n_y} are diagonal, each P_{i,i} is a tridiagonal
Stieltjes matrix. It is well known that, if T is a pointwise tridiagonal Stieltjes matrix, then
tridiag(T^{-1}) may be cheaply computed from the normalized point LU
factorization of T: rewriting T^{-1} in terms of the factors L_s and P_s, the lower part of
tridiag(T^{-1}) may then be computed by simple identification of the relevant
entries. This is performed in Algorithm 3.2.
To improve the performance of Algorithm 3.1, various variants have been proposed
in the literature. Most of them differ from the basic method only in the way of
computing diag(P i;i ) which is modified so as to satisfy rowsum relations and/or to keep
control of the extreme eigenvalues of the preconditioned matrix B \Gamma1 A [4, 21, 22, 26].
Reviewing all existing block variants is beyond the scope of our study. We shall
confine ourselves to relaxed and dynamically modified methods.
Algorithm 3.1 (BILU)
Compute, for
if (3D) and ny
i\Gamman y ;i\Gamman y
A i\Gamman y ;i
(2)
LU-factorization
Algorithm 3.2 (computation of tridiag(T^{-1}) from the normalized point LU-factorization of T)
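To give an idea of the kind of computation Algorithm 3.2 performs, the following C sketch extracts the tridiagonal part of T^{-1} from a root-free Cholesky (LDL^t) factorization of a tridiagonal Stieltjes matrix T; note that it uses the standard LDL^t normalization rather than the normalized point LU factorization of Subsection 2.3, and the array names are illustrative.

/* T is tridiagonal with diagonal d_in[0..n-1] and sub-diagonal l_in[1..n-1].
 * Step 1: root-free Cholesky  T = L D L^t  (L unit lower bidiagonal).
 * Step 2: backward recurrence for the tridiagonal part of T^{-1}:
 *   (T^{-1})_{n-1,n-1} = 1/d_{n-1},
 *   (T^{-1})_{i,i+1}   = -l_{i+1} (T^{-1})_{i+1,i+1},
 *   (T^{-1})_{i,i}     = 1/d_i - l_{i+1} (T^{-1})_{i,i+1}.                    */
void tridiag_inverse_band(int n, const double *d_in, const double *l_in,
                          double *inv_diag, double *inv_sub)
{
    double d[n], l[n];
    d[0] = d_in[0];
    for (int i = 1; i < n; i++) {            /* factorization                  */
        l[i] = l_in[i] / d[i - 1];
        d[i] = d_in[i] - l[i] * l_in[i];
    }
    inv_diag[n - 1] = 1.0 / d[n - 1];        /* backward recurrence            */
    for (int i = n - 2; i >= 0; i--) {
        inv_sub[i + 1] = -l[i + 1] * inv_diag[i + 1];  /* (T^{-1})_{i,i+1}     */
        inv_diag[i]    = 1.0 / d[i] - l[i + 1] * inv_sub[i + 1];
    }
}

By symmetry, inv_sub[i+1] holds both (T^{-1})_{i+1,i} and (T^{-1})_{i,i+1}, so the full tridiagonal band of T^{-1} is obtained at a cost linear in n.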
3.1. Relaxed block incomplete factorizations. Now, fill-ins that are neglected
in Algorithm 3.1 are accumulated and added to the current main diagonal
after multiplication by a relaxation parameter ω; see Algorithm
3.3. It amounts to imposing the following rowsum relation [22]
where M stands for the pointwise diagonal matrix defined as in Algorithm 3.3 with
In other words, M satisfies
diag block
In the 2D case, (3.4) simplifies to
Algorithm 3.3 (RBILU)
Compute, for
with
if (3D) and ny
i\Gamman y ;i\Gamman y
A i\Gamman y
with
A i;i\Gamman y
s i\Gamman
i\Gamman y ;i\Gamman y A i\Gamman y ;i e i
(2)
if (3D) and
A i;i+1 e i+1 +A i;i+ny e i+ny
relaxed block incomplete LU-factorization
diagonal (modification) matrix
As is well known, multiplication by P^{-1} (or by P_{i,i}^{-1}
in Algorithm 3.3) has to
be implemented as the solution of a (small) linear system. With ω = 0 one recovers BILU,
while with ω = 1 one gets the standard modified variant (MBILU) whose conditioning
properties are theoretically investigated in, a.o., [9, 27, 32, 23, 5, 25, 28]. In the case
under consideration, A is a (nonsingular) irreducible Stieltjes matrix, Algorithm 3.3
cannot breakdown; it gives rise to a (nonsingular) diagonally dominant Stieltjes matrix
later purposes, note that the existence analysis discussed in [26, 32]
covers any incomplete factorization such that
Be
Like their pointwise counterparts, both standard methods
suffer mainly from robustness problems [17, 22, 25, 26, 34, 40]. Optimal performances
are achieved with 0- 1. The trouble is that ! opt strongly depends on the problem
Most severe is the fact that performances could be highly sensitive to the
variation of ! around ! opt which is very hard to estimate [4, 12, 39]. In the case of
uniform grid of mesh size h in all directions, it is advocated in [4] to use
problems [4]), in which case one has (see Subsection 3.3) that
In the light of the theory extensively developed in [9, 22, 23, 28, 32, 34], one should
take
with
ae n y in 2D case
case
in order to handle any grid. In [39], it has been suggested to try In [34] a
pointwise dynamic version of RILU, termed DRILU, that improves the performance
stability with respect to the parameter involved, has been introduced. Before discussing
the blockwise version of DRILU, we would like to say a few words about
dynamically modified block methods.
3.2. Dynamically modified block incomplete factorizations. The standard
modified method efficient only in "nice" situations; e.g., in the case
of fixed mesh size, Dirichlet boundary conditions and monotonous variation for the
PDE coefficients. It is now well-established that the performance strongly depends on
the ordering strategy, the variations of both the PDE coefficients and the mesh size,
and the boundary conditions [25, 28]. With dynamically modified methods, the goal
is to be, in more general situations, as efficient as, or even more efficient than, the classical
modified method in nice circumstances, without changing the numbering of the
unknowns. To this end, small perturbations are dynamically added to the diagonal
entries of P i;i (initially computed with imposed constraints are
Here the parameter involved stands for an O(1) positive quantity independent of n_x, n_y and n_z. If
one applies Algorithm 3.4, then the following rowsum relation holds,
where, at each grid point j, the perturbation δ_j is defined as in Algorithm 3.4(3).
One has, moreover, that [22, 26]
In [22, 26], where the 2D case is discussed, it is proposed to take the value 1/4, which
we extend to the 3D case, in agreement with the theoretical argument
in [9, 22, 23, 28, 32]. The analysis of the 2D case shows that the performances do not
strongly depend on the variation of this parameter, and that its optimal value is close to 1/4 for a wide
range of PDEs [22, 26].
Algorithm 3.4 outperforms by far both basic block methods (Algorithm 3.3 with
ω = 0 and ω = 1) as long as there is isotropy (including strong discontinuities) or moderate
anisotropy in the PDE coefficients (see [22]). In the case of strong anisotropy, it
gives rise to several degenerate (say, of O(εh²)) isolated smallest
eigenvalues, which slows down the convergence of the PCG process [33, 37, 38, 26]. This
occurs, e.g., in the case of PDEs that involve both isotropy and strong anisotropy (see,
e.g., [24, 26, 34]). In such a situation, in the 2D case, it is better to use Algorithm 3.5
Algorithm 3.4 (DMBILU)
Compute, for
with
if (3D) and ny
i\Gamman y ;i\Gamman y
A i\Gamman y
with
+A i;i\Gamman y
s i\Gamman
i\Gamman y ;i\Gamman y A i\Gamman y ;i e i
e
if (3D) and
e
A
e
dynamically modified block incomplete LU-factorization
diagonal (modification) matrix
that cancels the perturbation δ_j at unsafe nodes [24, 26]. Observe that Algorithm 3.5
reduces to Algorithm 3.4 in the absence of "strong" anisotropy. Now there holds
where, at each grid point j, the perturbation is equal to δ_j whenever δ_j is defined,
and 0 otherwise [24, 26]. It is worth noting that unsafe nodes are nodes where the
coefficients p and q are strongly anisotropic; the factor 10 in Algorithm 3.5 gives
a "measure" of the amount of anisotropy beyond which the perturbations should be
discarded [26, Subsection 4.4]. No generalization of Algorithm 3.5 has been
proposed so far to handle 3D problems. This is not a trivial task; one has to take the
PDE coefficient r into account too. Contrary to the 2D case [24, 26], it is not easy to
establish in which of the many different possibilities, with respect to anisotropy, one
should drop the perturbations without dramatically increasing the largest eigenvalues
of the preconditioned matrix. An alternative solution is investigated in the next
section, which is the main contribution of this work.
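As a purely illustrative reading of the 2D dropping test sketched above (the precise inequality used in Algorithm 3.5 is not recoverable from this extraction), a node may be regarded as unsafe when the coefficients p and q differ by more than the factor 10 mentioned above, in which case the perturbation is dropped there. The helper below is hypothetical, not the authors' test.

```python
def keep_perturbation(p, q, factor=10.0):
    """Hypothetical form of the 2D dropping test: keep the dynamic perturbation
    only where the PDE coefficients p and q are not strongly anisotropic."""
    return max(p, q) <= factor * min(p, q)
```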
Algorithm 3.5 (DMBILU ) (2D case)
Compute, for
as in DMBILU
then
endif
as in DMBILU
dynamically modified block incomplete
LU-factorization with dropping test
3.3. Dynamically relaxed block incomplete factorizations. Relaxed methods
have been observed to successfully handle strongly anisotropic problems, whenever
the relaxation parameter is properly chosen (see, e.g., Section 4). Unfortunately, no
general theory has been provided to date to estimate ω_opt. The pointwise dynamic variant of
standard RILU, introduced in [34] and termed DRILU, computes variable
relaxation parameters; it combines the robustness, with respect to the variation of the
PDE coefficients, of optimized RILU with the relative insensitivity, to the variation of
the parameters involved, of dynamically perturbed methods. The blockwise version
of DRILU that we are going to present in this section is based on a generalization
of [34, Theorem 5.1], which motivated the introduction of DRILU. We first give an
auxiliary result.
Lemma 3.1. [34, Lemma 5.1] Let F ≥ 0 be a pointwise strictly upper triangular
matrix, and let Q_0 and Q_1 stand for nonnegative pointwise diagonal matrices. If C is a
matrix and x a positive vector such that
then C is nonnegative definite.
Theorem 3.2. Let stand for a diagonally dominant Stieltjes
matrix, such that while L is strictly block lower triangular. Let P
denote a block diagonal diagonally dominant Stieltjes matrix.
p be the normalized point LU factorization of P . Assume
further that
diag block
tridiag
M is a pointwise diagonal matrix such that
f
diag block
e
with
for some pointwise diagonal matrix
such that W - I.
If, for all 1
where
then
Proof. First, given that
to check that, (3.12) and (3.13) imply that
Be
diag block
e
offdiag block
e
diag block
offtridiag
Next, let be the pointwise diagonal matrix whose diagonal entries are
given by
One has 0 - \Theta - \Gamma1 I. Set
diag block
tridiag
denotes the pointwise diagonal matrix such that B 1 Taking (3.12)
into account, it is an easy matter to show that offdiag(B 1 Hence it follows that
is a Stieltjes matrix and therefore nonnegative definite [10]. Now,
p is
a Stieltjes matrix such that P e - 0, whence (see [10]) L \Gamma1
be the pointwise diagonal matrix defined by
ae
one has by (3.16) that XP2
denote the pointwise diagonal matrices such that, respectively,
By application of Lemma 3.1, successively to B 2 and B 3 , one
easily shows that both matrices are nonnegative definite. On the other hand, since
readily checks that
diag block
offtridiag
offdiag block
where \Delta 4 is a pointwise diagonal matrix whose explicit form does not matter. By
(3.18) one has that
diag block
offtridiag
e
offdiag block
Now, by definition of - (see (3.17)), one easily deduces that 1+! i
i, so that either -
This implies that I \Gamma W - 2\Theta. Therefore the right hand side of (3.19) is nonnegative
definite. The conclusion readily follows.
Corollary 3.3. If B corresponds to some relaxed block incomplete LU factorization
of A with
with
Proof. Apply Theorem 3.2 with M defined as in Eq. (3.4) and
all
Note that, since L \Gamma1
imply for MBILU factorizations
with
(3.
(3.22) is nothing but the upper bound on the basis of which both DMBILU and
DMBILU have been elaborated, by imposing ~
- i' [22, 24, 26]. An alternative way
to achieve the latter imposed upper bound consists in computing P so as to satisfy
the assumptions of Theorem 3.2 with
for all
In view of (3.12) and (3.13), this could be achieved by
1. computing the entries of P as in Algorithm 3.1 (block by block);
2. subtracting, from the pointwise diagonal entries of P, the quantity defined by (3.13) taken with equality, where ω_i is defined by (3.24).
The corresponding block preconditioner, which we call the dynamically relaxed block incomplete
LU-factorization (DRBILU), is described in Algorithm 3.6. Note that, if
p is a pointwise tridiagonal Stieltjes matrix, then so is
.
Therefore, e
may be computed by means of Algorithm 3.2 with
Unlike RBILU, the relaxation parameter is now dynamically
modulated, as a function of the local quantities entering (3.24), in a similar way as the perturbations are added in
DMBILU. This leads us to expect a similar stability with respect to the choice of the
parameter involved.
For all the block preconditioners discussed so far, the parameters involved may be
chosen so as to achieve the same upper bound for the largest eigenvalues of
the preconditioned matrix B^{-1}A. In the context of the PCG method, the convergence
behavior also depends on the distribution of the smallest eigenvalues. The following
estimate, which relates the smallest eigenvalues to the perturbations, has been obtained
and successfully tested in [7, 26, 31],
where X denotes the pointwise diagonal matrix such that (B − A)e = Xe. The quantity ε_{h,i} involved,
which satisfies −1 ≤ ε_{h,i} ≤ 1, depends weakly upon both the mesh size parameter h
and i. The estimate gives rise in general to very accurate results in the case of i ≪ n,
for both pointwise [7, 31] and blockwise preconditioners [26]. It follows that the order of
magnitude of the smallest eigenvalues mostly depends on the sum (e, Xe) of
all perturbations. As far as DMBILU and its variant with dropping test are concerned, one has (e, Xe) =
O(1) [22, 26], whence it follows that the smallest eigenvalues of B^{-1}A do not depend
on h, i.e., are O(1).
As regards RBILU with 1 − ω = O(h), one has from (3.3) that the perturbations behave like (1 − ω)M, which
shows that all perturbations except those associated with the first block of nodes are O(h). The smallest
eigenvalues are then clearly O(h). Therefore, κ(B^{-1}A) is asymptotically O(h^{-2}).
Nevertheless, with a "properly chosen" value for ω, RBILU performs very well in
practice because of the nice distribution of the (interior) eigenvalues [4].
Now, for DRBILU, perturbations (i.e., nonzero diagonal modifications) occur only at
selected nodes, according to (3.24). It is then an easy matter to show that, for the same target
upper bound, the perturbations introduced in DRBILU are not larger than the ones
added in RBILU. Therefore, on the basis of (3.25), one may expect DRBILU to be at
least as robust as RBILU. It is worth noting that, for both RBILU and DRBILU,
Algorithm 3.6 (DRBILU)
Compute, for
with
e
if (3D) and ny
i\Gamman y ;i\Gamman y
A i\Gamman y
with
e
+A i;i\Gamman y
s i\Gamman
i\Gamman y ;i\Gamman y A i\Gamman y ;i e i\Gamman y
(2)
e
if (3D) and
e
dynamically relaxed block incomplete LU-factorization
diagonal (relaxation) matrix
the perturbations are in direct proportion to the neglected fill-ins, which are (very)
small; this occurs in particular in 2D problems with strong anisotropy (p ≪ q or
q ≪ p) [26, Subsection 4.4] (see also (3.28)).
For comparison purposes, let us mention that, as far as DMBILU is concerned,
one has, by [22, Lemma 4.4], the following sharp upper bound
on the perturbations δ_j.
This bound could be large in the case of strong anisotropy. For instance, in the 2D
case, when p ≪ q (in (1.1)), the perturbations can be of order one. In
such a situation, the smallest degenerate eigenvalues of D^{-1}A are reproduced, up
to some multiplicative constants, by B^{-1}A (see (3.25)). If the number of such
degenerate eigenvalues is not very small (i.e., not negligible with respect to
n), this results in a very
slow convergence for the PCG process [33]. In the case of DRBILU, since (3.16) is
equivalent to - i
, it is straightforward to establish that
with
Therefore,
1-j-n
diag block
tridiag
e
whence it follows that the perturbations involved in DRBILU can never be much
larger than the ones involved in DMBILU. It is obvious from all the considerations
above that, with DRBILU, one tries to combine the advantages of both RBILU and
DMBILU.
Observe finally that, in the case of 2D problems, even if the perturbations introduced
by DRBILU are very small, their sum (e, Xe) could be larger
than the corresponding sum for DMBILU with dropping test, in particular when the number of nodes
where the perturbations are dropped is large enough, for instance O(n). In order
to perform a fair comparison, we give in Algorithm 3.7 a version of DRBILU
which uses the same dropping test as in Algorithm 3.5.
4. Numerical experiments. The PCG method is run with the zero vector as
starting approximate solution and the residual error reduction ‖r^(i)‖₂/‖r^(0)‖₂ ≤ 10^{-7}
as convergence criterion; a minimal sketch of this setup is given after the list of preconditioners below. The computations are performed in double precision FORTRAN
on a Sun 514MP sparc workstation. For comparison purposes, the preconditionings include:
1. RBILU (Algorithm 3.3). Four values of the parameter ω have been tested:
ω = 0 and ω = 1, respectively the unmodified and (unperturbed) modified standard block methods,
and values defined as in Eq. (3.8). For small
and moderate problem sizes, this includes ω ≈ 0.95, which has been suggested
in [39], while for large problems it includes ω ≈ 0.99, which is the optimal
value observed, to the nearest 0.01, for minimizing the number of iterations.
Algorithm 3.7 (DRBILU ) (2D case)
Compute, for
as in DRBILU
then
else
endif
as in DRBILU
dynamically relaxed block incomplete LU-factorization
with dropping test
2. DMBILU and its variant with dropping test (Algorithms 3.4 and 3.5). We have used the value 1/4,
according to the recommendations made in [26].
3. DRBILU and its variant with dropping test (Algorithms 3.6 and 3.7). We report the results
for the value 1/4, which we anticipate to be near optimal, as in DMBILU and
its variant.
4. ILU and MILU. The popular pointwise unmodified and modified incomplete
LU-factorizations (see, e.g., [6]).
5. DRILU ([34]). The pointwise dynamically relaxed incomplete LU-factorization
method. As recommended in [34], the parameter involved is chosen so as to
target the upper bound n^{1/d}, where d denotes the space dimension.
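As announced before the list, the following is a minimal sketch of the solver setup used in this section: preconditioned conjugate gradients with the zero starting vector and the relative residual stopping test. The preconditioning interface apply_prec (a block forward/backward solve with the factors produced by one of the algorithms above) is an assumption for illustration, not the authors' FORTRAN code.

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-7, maxit=2000):
    """PCG with x_0 = 0, stopped when ||r_i||_2 <= tol * ||r_0||_2."""
    x = np.zeros_like(b)
    r = b.copy()                    # r_0 = b - A x_0 = b
    z = apply_prec(r)               # z = B^{-1} r
    p = z.copy()
    rz = r @ z
    r0 = np.linalg.norm(r)
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * r0:
            return x, it
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```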
In the first two problems, the variants with dropping test are not considered because
they coincide with, respectively, DMBILU and DRBILU. Problem 3 is essentially
Stone's problem [36, 26]. The next three problems are 3D extensions of the first
three. The last example is intended for comparing the behavior of block and
pointwise methods in the case of elongated grids and non-uniform mesh sizes, which
arise in 3D simulation problems.
Problem 1. (2D)
• The rhs of the linear system to solve is chosen such that the function u_0(x, y)
generates the solution on the grid.
Problem 2. (2D, from [40])
ae 100 in (1=4; 3=4) \Theta (1=4; 3=4)
elsewhere
ae 100 in (1=4; 3=4) \Theta (1=4; 3=4)
elsewhere
Problem 3. (2D, essentially from [36])
• The coefficients p, q and t are specified in Fig. 1.
• The rhs of the linear system to solve is chosen such that the function u_0(x, y)
generates the solution on the grid.
Fig. 1. Problem 3. Configuration and specification of the PDE coefficients; d stands for a positive parameter.
Problem 4. (3D)
@\Omega
Problem 5. (3D)
ae 100 in (1=4; 3=4) \Theta (1=4; 3=4) \Theta (0; 1)
elsewhere
ae 100 in (1=4; 3=4) \Theta (1=4; 3=4) \Theta (0; 1)
elsewhere
Problem 6. (3D)
• The coefficients p, q, r, t and f depend only on (x, y) as specified in Fig. 2.
Fig. 2. Problem 6. Configuration and specification of the PDE coefficients; all the regions extend over the whole range in the z-direction; d stands for a positive parameter.
Problem 7. (3D)
ae
1000 in (0; 1=8) \Theta (0; 1=8) \Theta (0; 1=8)
elsewhere
ae 1 in (0; 1=8) \Theta (0; 1=8) \Theta (0; 1=8)
elsewhere
For all problems but the last one, we have used a uniform mesh size h in each
direction. In the case of Problem 7, we have used non-uniform rectangular grids
obtained by setting h_y = h_0(y)/n_y and h_z = 4 h_0(z)/n_z (with h_x defined analogously),
where the function h_0 is defined piecewise, with h_0(t) = 0.25 for 0.25 ≤ t ≤ 0.5.
We give in Tables 1-14 the extremal eigenvalues and/or the spectral condition
numbers, as well as the exponent ν from the assumed asymptotic relationship
κ(B^{-1}A) = C h^{-ν}, C denoting a constant. ν is estimated from the data of the largest two problems
(say, h^{-1} = 96 and h^{-1} = 192, or 40 and 80 in the 3D cases); a small sketch of this estimate is given after the observation list below. Whenever λ_min(B^{-1}A)
is strongly isolated from the rest of the spectrum, we also include both the second
smallest eigenvalue and the effective spectral condition number,
which accounts for the superlinear convergence of the PCG method (see, e.g., [33,
37, 38, 40]). As far as the pointwise preconditioners (ILU, MILU and DRILU) are
concerned, whose conditioning analysis is not investigated here, we have computed
only the number of iterations needed to achieve the prescribed accuracy, in order to save
space. Problems 3 and 6, with d large enough, are examples of PDEs with degenerate
smallest eigenvalues. We report in Tables 5 and 11, for a representative value of h
in the 2D and in the 3D case, respectively, the numerically computed smallest and largest four
eigenvalues associated with each block preconditioner involved. The pointwise Jacobi
(or diagonal) preconditioner, whose smallest eigenvalues are connected to those of the
other preconditioners through Eq. (3.25), is also considered. In Tables 6, 13 and 14,
the numbers of PCG iterations needed to reach the target accuracy are collected for each
problem and for various choices of h, respectively of the mesh sizes along the three
directions in the case of Problem 7. From all the
tables, the following observations can be made.
1. As expected, RBILU with ω defined by Eq. (3.8) gives rise to smallest eigenvalues of O(h)
and largest eigenvalues of approximately O(h^{-1}). Nevertheless, the O(h^{-2})
behavior of κ(B^{-1}A) is compensated by the nice distribution of the interior eigenvalues
[4], which explains the relatively good performance of the preconditioner
(see Tables 6, 13 and 14, RBILU with this choice of ω).
2. For both DMBILU and DRBILU, as well as their versions with dropping test, the smallest
eigenvalues are in general O(1) while the largest ones are O(h^{-1}), so that the
(effective) spectral condition numbers are O(h^{-1}). Observe however that, in
the case of strong anisotropy (Problems 3 and 6 with d = 10^3), the smallest
eigenvalues associated with DRBILU are O(h), whence it follows that the
(effective) spectral condition numbers are O(h^{-2}); as with RBILU, the good
behavior of DRBILU is due to the nice distribution of the interior eigenvalues.
3. In the case of isotropic problems, DMBILU and DRBILU give better results
than the classical methods (RBILU with ω = 0 or ω = 1), as does "optimized" RBILU.
All winning methods perform quite similarly, even if there are strong jumps
in the PDE coefficients (Problems 2 and 5), in which case the small first
eigenvalue is the only one that is strongly isolated from the others.
4. In the presence of strong anisotropy (Problems 3 and 6 with d = 10^3), RBILU
and DMBILU essentially reproduce the disastrous distribution of
the smallest eigenvalues of D^{-1}A, which slows down the convergence of the PCG
process [33]. It should be noted that for RBILU with ω = 1 the smallest
eigenvalues are more clustered than for DMBILU, as one would expect by
comparing the largest eigenvalues. DRBILU as well as DMBILU and DRBILU with
dropping test successfully break the dependence upon D^{-1}A, which is reflected in the number
of PCG iterations. As predicted, DRBILU with dropping test is a little bit more efficient than
DRBILU. For RBILU with ω = 1, the rate of convergence mostly depends on
the distribution of the largest eigenvalues, the smallest ones being known to
cluster around 1.
5. In accordance with previous works, [16] (2D) and [3, 28] (3D), blockwise (linewise)
methods turn out to be more efficient than their pointwise counterparts. In the
case of 2D problems, the reduction in the number of iterations, from point
methods to block methods, is at least about 50%, while it is around 30%
in the 3D cases. The gain is even more spectacular in the case of strongly
anisotropic problems (see Table 6, Problem 3b and Table 13, Problem 6b).
As regards their computational complexity, let us mention that each PCG
iteration with blockwise preconditioners needs two more flops per point than
with pointwise preconditioners, which is rather small as compared to the total
number of flops per PCG iteration [3].
6. The performances of blockwise methods in general, and of DMBILU and DRBILU
in particular, are (almost) insensitive to the variation of the number
of gridpoints along the x-direction, that is, the direction which determines the
blocks (see Tables 12 and 14). This is in good agreement with the analysis
performed in [28].
7. DRBILU is the only preconditioner which is always among the best three,
whatever the problem tested (see Tables 6, 13 and 14). Whenever
DRBILU is not the absolute winner, it is not far from it (we have
also observed that varying the parameter involved around 1/4 does not have
a significant effect on the behavior of the preconditioners).
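As announced before the observation list, the exponent ν reported in the tables, taken from the assumed relationship κ = C h^{-ν}, is obtained from two successive mesh sizes. The sketch below is a trivial computation, shown only to make the tabulated quantity unambiguous.

```python
import math

def estimate_exponent(kappa_coarse, h_coarse, kappa_fine, h_fine):
    """Estimate nu in kappa = C * h**(-nu) from two (h, kappa) pairs,
    e.g. h = 1/96 and 1/192 in 2D, or 1/40 and 1/80 in 3D."""
    return math.log(kappa_fine / kappa_coarse) / math.log(h_coarse / h_fine)
```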
From our discussion, together with the analysis made in Section 3, we conclude
that the most promising method is DRBILU (with the parameter value 1/4). It performs quite well
in a wide range of situations. Its main merit is that, contrary to DMBILU and
its variant with dropping test, it does not (strongly) depend on a dropping test that has been set up on
the basis of experiments performed only on five-diagonal 2D problems [24, 26]. The
dropping test involved has not yet been extended to 3D PDEs, while the performances
of DRBILU are quite satisfactory for both two- and three-dimensional problems. Even
though all our computations were performed on well structured grids, for which natural
blockwise partitionings are available, the block methods that we have investigated
could be applied to unstructured finite element grids. The problem of defining blocks
in such irregular grids has been solved recently in [14, 15]. Combining the techniques
developed in the latter papers with the preconditioners discussed here would result in
robust preconditioners to tackle real life engineering problems. This awaits further
investigation.
Table 1
Problem 1. Extremal eigenvalues (λ_min and λ_max) and/or spectral condition number (κ) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
48 16.2 4.02 0.58 2.39 4.09 0.39 1.96 5.05
DMBILU DRBILU
48 0.65 2.52 3.88 0.77 2.80 3.64 0.98 3.78 3.84
Table 2
Problem 2. Extremal eigenvalues (λ_min, λ_2 and λ_max) and/or (effective) spectral condition number (κ^(2)) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
48 2300 13.7 51.3 70E-4 0.61 2.97 422 4.86
48 38E-4 0.44 2.28 608 5.21 54E-4 0.56 2.94 549 5.27
DRBILU
48 76E-4 0.66 3.29 435 4.98 20E-3 0.85 5.42 271 6.39
Table 3
Problem 3. Extremal eigenvalues (λ_min, λ_2 and λ_max) and/or (effective) spectral condition number (κ^(2)) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
48 7298 38.4 65.8 25E-4 0.42 3.54 8.52
192 116422 614. 309. 57E-5 0.11 6.85 64.0
48 13E-4 0.24 2.76 11.5 50E-5 0.13 3.22 24.8
DRBILU
48 19E-4 0.34 3.63 10.6 44E-4 0.67 5.98 8.95
Table 4
Problem 3. Extremal eigenvalues (λ_min, λ_2 and λ_max) and/or (effective) spectral condition number (κ^(2)) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
48 7302 36.2 75.0 26E-4 0.43 3.14 7.22
48 14E-4 0.28 2.45 8.88 11E-6 37E-4 3.84 1029
DRBILU
48 19E-4 0.34 4.20 12.4 43E-4 0.64 6.49 10.2
Table 5
Problem 3. Distribution of extremal eigenvalues of B^{-1}A for different preconditioners.
Preconditioning smallest eigenvalues largest eigenvalues
1. 1. 1. 1. 159. 204. 248. 368.
Point Jacobi 3E-9 59E-8 17E-7 46E-7 2. 2. 2. 2.
Table 6
Number of PCG iterations to achieve ‖r^(i)‖₂/‖r^(0)‖₂ ≤ 10^{-7}.
Problem 1 Problem 2 Problem 3a Problem 3b
ILU
Table 7
Problem 4. Extremal eigenvalues (λ_min and λ_max) and/or spectral condition number (κ) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
DMBILU DRBILU
Table 8
Problem 5. Extremal eigenvalues (λ_min, λ_2 and λ_max) and/or (effective) spectral condition number (κ^(2)) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
DRBILU
Table 9
Problem 6. Extremal eigenvalues (λ_min, λ_2 and λ_max) and/or (effective) spectral condition number (κ^(2)) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
DRBILU
Table 10
Problem 6. Extremal eigenvalues (λ_min, λ_2 and λ_max) and/or (effective) spectral condition number (κ^(2)) of B^{-1}A;
exponent ν corresponding to the (estimated) asymptotic relationship κ = Ch^{-ν}, C denoting a constant.
RBILU
DRBILU
Table 11
Problem 6. Distribution of extremal eigenvalues of B^{-1}A for different preconditioners.
Preconditioning smallest eigenvalues largest eigenvalues
1. 1. 1. 1. 444. 511. 807. 3129
Point Jacobi 17E-9 39E-7 57 E-7 15E-6 2. 2. 2. 2.
Table 12
Problem 7. Extremal eigenvalues (λ_min and λ_max) and/or spectral condition number (κ) of B^{-1}A.
RBILU
grid -min -max -min -max -
160 × 80 × 40 374. 30.2 0.12 10.6 91.7 62E-3 7.5 121.
DMBILU DRBILU
grid -min -max -min -max -min -max -
160 × 80 × 40 0.29 10.6 36.7 0.33 11.0 32.4 0.63 15.3 24.1
Table 13
Number of PCG iterations to achieve ‖r^(i)‖₂/‖r^(0)‖₂ ≤ 10^{-7} for Problems 4, 5, and 6 (cases 6a and 6b).
Problem 4 Problem 5 Problem 6a Problem 6b
ILU
DRILU
Table 14
Number of PCG iterations to achieve ‖r^(i)‖₂/‖r^(0)‖₂ ≤ 10^{-7} for Problem 7. Grids: (a) = 40 × 40 × …,
(g) = 160 × 80 × 40, (h) = 80 × 160 × 40.
grid (a) (b) (c) (d) (e) (f) (g) (h)
43 43 61
28 42 28 28 28
ILU 76 158 114 128 175 138 205 205
Acknowledgments. Part of this work was done while the first author was holding
a postdoctoral position at the Mathematical Institute of Utrecht University. He
thanks Henk van der Vorst for his warm hospitality, and for suggesting to include the
three-dimensional case in this study. Thanks are also due to the referees for their
constructive comments.
--R
Cambridge University Press
Finite Element Solution of Boundary Value Problems.
Vectorizable preconditioners for elliptic difference equations in three space dimensions
On the eigenvalue distribution of class of preconditioning methods
On eigenvalue estimates for block incomplete factorization methods
Templates for the Solution of Linear Systems
Modified incomplete factorization strategies
On sparse block factorization iterative methods
Existence and conditioning properties of sparse approximate block factorizations
Nonnegative Matrices in the Mathematical Sciences
A survey of preconditioned iterative methods
Fourier analysis of relaxed incomplete factorization preconditioners
Approximate and incomplete factorizations
An object-oriented framework for block preconditioning
BPKIT block preconditioning tool kit
Block preconditioning for the conjugate gradient method
Beware of unperturbed modified incomplete factorizations
Relaxed and stabilized incomplete factorizations for nonself-adjoint linear sys- tems
Matrix Computations
Closer to the solution: iterative linear solvers
Compensative block incomplete factorizations
Modified block-approximate factorization strategies
Analytical bounds for block approximate factorization methods
Empirically modified block incomplete factorizations
Ordering strategies for modified block incomplete factorizations
Taking advantage of the potentialities of dynamically modified block incomplete factorizations
On the conditioning analysis of block approximate factorization methods
Theoretical comparison of pointwise
Efficient planewise like preconditioners to cope with 3D prob- lems
Computational Methods in Engineering and Science
Conditioning analysis of modified block incomplete factorizations
On the convergence rate of the conjugate gradients in the presence of rounding errors
Iterative Methods for Sparse Linear Systems
Iterative solution of implicit approximation of multidimensional partial differential equations
The convergence behaviour of conjugate gradients and ritz values in various circumstances
The rate of convergence of conjugate gradients
ICCG and related methods for 3D problems on vector computers
The convergence behaviour of preconditioned CG and CG-S
Iterative Solution of Elliptic Systems and Application to the Neutron Diffusion Equations of Reactor Physics
--TR | large sparse linear systems;preconditioned conjugate gradient;incomplete factorizations;diagonal relaxation;discretized partial differential equations |
359199 | The Finite Mass Method. | The finite mass method, a new Lagrangian method for the numerical simulation of gas flows, is presented and analyzed. In contrast to the finite volume and the finite element method, the finite mass method is founded on a discretization of mass, not of space. Mass is subdivided into small mass packets of finite extension, each of which is equipped with finitely many internal degrees of freedom. These mass packets move under the influence of internal and external forces and the laws of thermodynamics and can undergo arbitrary linear deformations. The method is based on an approach recently developed by Yserentant and can attain a very high accuracy. | Introduction
. Fluid mechanics is usually stated in terms of conservation
laws that link the change of a quantity like mass or momentum inside a given volume
to a flux of this quantity across the boundary of the volume. The finite volume
method is directly based on this formulation. Space is subdivided into little cells, and
the balance laws for mass, momentum and energy are set up for each of these cells
separately. Similarly, also the finite element method is based on a discretization of
space and a choice for the trial functions on the resulting cells.
In contrast, the finite mass method is founded on a discretization of mass, an idea
which is at least as obvious and that can be traced back to the work of von Neumann
[10] or of Pasta and Ulam [9] in the late 1940's and the 1950's. Instead of dividing
space into elementary cells, we divide mass into a finite number of mass packets
of finite extension, each of which equipped with a given number of internal degrees
of freedom. These mass packets move under the influence of internal and external
forces and the laws of thermodynamics and can intersect and penetrate each other.
They can contract, expand, rotate, and even change their shape. Their internal mass
distribution is described by a fixed shape function, similarly as with finite elements.
Although the finite mass method is a purely Lagrangian approach, it has not much
to do with particle methods as used for Boltzmann-like transport equations; in some
way, it is much closer to finite element and finite volume schemes. The approximations
it produces are differentiable functions and not discrete measures. The method is
basically of second order, and in some experiments we observed even fourth order
convergence!
The Lagrangian form of description of fluid flows can have many advantages. For
example, there are no problems with free surfaces, and no convection terms arise. Such
features make numerical methods based on the Lagrangian view especially attractive
for flows in unbounded space. In fact, one of the most popular methods of this
type, Monaghan's smoothed particle hydrodynamics [8], has its origins in astrophysics.
Like the smoothed particle hydrodynamics, the finite mass method is a completely
grid-free approach, but it is not a method de facto imitating statistical mechanics and
possesses a sounder mathematical and physical foundation.
The finite mass method is a generalization and extension of the particle model of
compressible fluids that had been proposed by the last author in [11], [12] and [13] and
is based on the principles developed there. The compactness and convergence results
Mathematisches Institut der Universitat Tubingen, 72076 Tubingen, Germany
obtained in [11] and [13] concerning the transition to the continuum limit transfer to
the present situation. One of the essential differences to the approach in the articles
mentioned above is that the single mass packets can now undergo arbitrary linear
deformations and not only rotations and changes of size. Although the dimension of
the configuration manifold of the single mass packet increases because of that, this
strongly simplifies the equations of motion the mass packets are subject to because
the configuration manifold is now a linear space. The equations of motion take the
same form for all space dimensions. The main advantage, however, are the superior
approximation properties, due to the fact that the mass packets can now be deformed
by the flow.
The main issue with a method like ours is how the equations of motion for the
mass packets or particles, as we often prefer to say, are set up. We start from the basic
physical principles that finally lead to the Euler and Navier-Stokes equations, not from
these equations themselves and a least squares or Galerkin approach. In consequence,
the equations of motion also do not break down when particles completely cover
each other. In the most simple case of an adiabatic, inviscid flow, the equations
of motion for the particles are derived from a Lagrange-function with the internal
energy as potential energy. To damp the fluctuation part of the local kinetic energy
that necessarily arises with every such model, frictional forces vanishing in the limit
of particle sizes tending to zero are added to these potential forces. They break the
invariance to time reversal and make the method consistent with the second law of
thermodynamics.
The paper is organized as follows. In x2, the finite mass method is explained
and derived in detail, both for inviscid and for viscous fluids. It is shown how the
mass density, the velocity field and, as second independent thermodynamic quantity,
the entropy are discretized, and it is discussed what sort of approximation properties
can be expected. The equations of motion describing the local interaction of
the mass packets and the time evolution of the system are set up. A main feature
of the approach is that mass, momentum, angular momentum and energy are exactly
conserved. The conservation of mass follows immediately from the construction
and is discussed in x2. The conservation of energy, momentum and angular momentum
is studied in x3. The conservation of angular momentum is a remarkable fact
as in continuum mechanics the conservation of angular momentum is hidden in the
symmetry of the stress tensor and does not appear explicitly as a conservation law.
Consequently, most discretizations violate this principle.
The finite mass method as described in x2 is invariant to arbitrary translations
and rotations and, as it concerns the shape the mass packets can attain, even to
every linear transformation of space. These properties are reflected by the quadrature
formulas that are developed in x4 to evaluate the integrals which define the forces
acting upon the particles. Conservation of momentum, angular momentumand energy
are maintained. In x4, we also discuss how the forces and moments acting upon the
particles can be calculated on the computer.
In x5, a suitable time discretization is presented. The proposed scheme is an
exponential integrator in the spirit of the recent paper [4] by Hochbruck and Lubich.
In case of pure pressure forces, it transfers to the well-known Stormer-Verlet method
for second order equations and conserves momentum and angular momentum exactly.
Finally, in x6, some typical test calculations for flows in two space dimensions are
documented. These examples clearly demonstrate the potential and the high accuracy
of the method.
We restrict our attention in this article to free flows in vacuum. For the inviscid
case, the reflection laws needed for particles touching the walls of bounded volumes
have already been given in [11], [12] and [13]. Their numerical realization will be
discussed elsewhere.
For a certain background in mechanics and fluid dynamics, we refer to the text-books
[5], [6] by Landau and Lifschitz on one hand and [1] by Chorin and Marsden
on the other hand, and to the classical monograph [2] by Courant and Friedrichs.
2. The particle model of compressible fluids. The basic ingredient of the
finite mass method is a continuously differentiable shape function ψ : R^d → R, d the
space dimension, with compact support that attains only values ≥ 0. This function
describes the internal mass distribution inside the mass packets into which the fluid
is subdivided. We assume that
Z
Z
The second property states that the origin of the body coordinate system attached to
a particle is its center of mass. Further we suppose that
Z
with the y k the components of y.
For example, the function / may be built up from a piecewise polynomial function
e
in one space variable. If we let
d
Y
e
the conditions (2.1) are equivalent to
Z
e
Z
e
The second condition in (2.4) also implies that the integrals (2.2) vanish for k 6= l.
The constant J is given by
Z
e
and does not depend on the space dimension. A suitable choice for e
/, that we have
used in the numerical computations documented in x6, is the normalized third order
B-spline given by
e
for jj 1 and by e
1. It can be composed of smaller copies and
be used to build up a basis for the cubic spline functions on a uniform grid. This e
fulfills the conditions (2.4), and the constant (2.5) takes the value
An advantage of such tensor product like shape functions are their good approximation
properties.
The points y of the particle i move along the trajectories
The vector q i (t) determines the position of the particle and the matrix H i (t) its size,
shape and orientation in space. Correspondingly,
are the body coordinates of the point at position x in space at time t. Let
denote the mass of the particle i. The total mass density
then results from the superposition of the mass densities of the single particles.
The points y of the particle i have the velocity
Inserting the expression (2.9) for y, one gets the velocity field
of the particle i related to the space coordinates. The total mass flux density
again results from the superposition of the mass flux densities of the single particles.
With the local mass fractions
the velocity field v of the flow, that is defined by the relation
is the convex combination
of the velocity fields of the single particles.
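To make the superposition explicit, the following sketch evaluates ρ and v at a point. It assumes the readings ψ_i(x, t) = ψ(H_i(t)^{-1}(x − q_i(t))) / det H_i(t) and v_i(x, t) = q_i'(t) + H_i'(t) H_i(t)^{-1}(x − q_i(t)) for the displays garbled above, and it uses the normalized cubic B-spline with support [−2, 2] as one possible choice of the one-dimensional building block; the data layout is purely illustrative.

```python
import numpy as np

def psi_1d(xi):
    """Normalized cubic B-spline (one possible choice of the one-dimensional
    building block): support [-2, 2], integral 1, second moment 1/3."""
    a = abs(xi)
    if a >= 2.0:
        return 0.0
    if a >= 1.0:
        return (2.0 - a) ** 3 / 6.0
    return 2.0 / 3.0 - a * a + a ** 3 / 2.0

def psi(y):
    """Tensor product shape function psi(y) = prod_k psi_1d(y_k)."""
    return float(np.prod([psi_1d(c) for c in y]))

def density_and_velocity(x, particles):
    """rho(x) = sum_i m_i psi_i(x) and v(x) = sum_i (m_i psi_i / rho) v_i(x)."""
    rho, flux = 0.0, np.zeros_like(x, dtype=float)
    for p in particles:              # p: dict with keys m, q, dq, H, dH
        Hinv = np.linalg.inv(p["H"])
        w = psi(Hinv @ (x - p["q"])) / abs(np.linalg.det(p["H"]))
        if w == 0.0:
            continue
        v_i = p["dq"] + p["dH"] @ (Hinv @ (x - p["q"]))
        rho += p["m"] * w
        flux += p["m"] * w * v_i
    v = flux / rho if rho > 0.0 else np.zeros_like(flux)
    return rho, v
```

Note that the mass fractions m_i ψ_i / ρ form a partition of unity wherever ρ > 0, which is what makes v a convex combination of the particle velocities.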
To simplify notation, we introduce the abbreviation
where again
The gradient of / i , that is the derivative of / i with
respect to x, is
and can, as the derivatives
be expressed in terms of the internal variables y.
To derive the expression above for the derivative with respect to the matrix H i ,
we suppress the index i serving for the distinction of the particles and write / instead
of / i . First we observe that, for H fixed and E tending to zero,
E)
and, because of
for A tending to zero, similarly
E)
where
k;l
denotes the inner product of two matrices A and B. One findsdet(H +E) /
det H (r/)(y)
This means that the linear mapping
det H /(y)
is the total derivative of
det H /
at given H. Inserting Ej l for the entries of E and using (2.17), (2.18),
@/
follows.
The continuity equation expressing the conservation of mass is automatically
satisfied with the given ansatz. Independently of the size and shape of the particles
and their distribution in space,
@ae
@t
This follows from the fact that such an equation holds for every single particle, as one
proves using (2.19) and div v
To study the potential accuracy of the approach, we start from a given twice continuously
differentiable velocity field u. Then fixing the particle trajectories q i (t) and
the matrices H i (t) for t ≥ t 0 , with given initial values, as solutions of the differential
equations
the velocity field (2.12) of the particle i reads
and is therefore a second order approximation of u in a neighborhood of
As the mass fractions (2.14) form a partition of unity, the resulting overall velocity
field
remains a second order approximation of u on the region occupied by mass, independent
of where the particles are located. The mass density is an exact solution of the
transport equation (2.20) with respect to this perturbed velocity field and therefore,
with corresponding initial values, a good approximation of the true density in the
sense of a backward error analysis. For a more careful and detailed analysis of this
type also covering the motion in an external force field, we refer to [14].
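The differential equations referred to above are lost in the extraction; they are read here as q_i' = u(q_i, t) and H_i' = (∇u)(q_i, t) H_i, which is consistent with the particle velocity field described before. The following sketch, with an explicit Euler step purely for illustration, advances a single particle in a prescribed velocity field.

```python
import numpy as np

def advect(q, H, u, grad_u, t, dt):
    """One explicit Euler step for q' = u(q, t), H' = (grad u)(q, t) H
    (the reading of the elided differential equations assumed here)."""
    return q + dt * u(q, t), H + dt * grad_u(q, t) @ H

# usage: a rigid rotation field u(x) = W x with skew-symmetric W
W = np.array([[0.0, -1.0], [1.0, 0.0]])
u = lambda x, t: W @ x
grad_u = lambda x, t: W
q, H = np.array([1.0, 0.0]), np.eye(2)
for k in range(1000):
    q, H = advect(q, H, u, grad_u, k * 1e-3, 1e-3)
```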
For a complete description of the thermodynamic state of a compressible fluid,
besides the mass density ρ a second thermodynamic quantity like the temperature is
needed. Most convenient for our purposes is the entropy density s, which is given here
in the form
where the S i (t) denote the specific entropies of the single particles. In particular, the
specific entropy
is the convex combination
of the specific entropies of the single particles. As with the velocity, one recognizes
that this representation potentially leads to a first order approximation of the exact
specific entropy independent of the distribution of the particles, where the actual
accuracy can again be much higher.
The pressure, the absolute temperature and the internal energy per unit volume
are functions of the mass density and the entropy density. These functions are not
independent of each other but are connected by the Gibbs fundamental relation of
thermodynamics taking the form
@ae
@s
@s
in the present variables. We assume that the internal energy "(ae; s) is defined for all
ae ? 0 and all real s and is twice continuously differentiable on this set. We suppose
that
@"
@s
for all these ae and s and require that "=ae and the first order partial derivatives of "
can be extended by the value 0 at (ae; to functions that are continuous on
the sectors jsj S ae, S ? 0. Barytropic ideal gases with
ae 0
s
ae
represent a simple example. The constant 0 ? 0 is a characteristic pressure, the constant
mass density, and the constant c v ? 0 the characteristic
heat for constant volume. The constant fl ? 1 is the ratio of the specific heats for
constant pressure and constant volume. A typical value is
The particle i has the kinetic energy
that attains the closed representation
denotes the Euclidean norm of a vector and the Frobenius norm of a matrix,
respectively. The total kinetic energy of the system is
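The closed representation referred to above is lost in this extraction. Using (2.1) and (2.2), it can be reconstructed (this is a reconstruction, not a quotation of the original display) as

```latex
E_i(t) = \frac{m_i}{2}\Bigl(\,|q_i'(t)|^2 + J\,\|H_i'(t)\|^2\Bigr),
\qquad
E(t) = \sum_i E_i(t),
```

with J the constant (2.5); the cross term vanishes because the second integral in (2.1) is zero.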
From the kinetic energy (2.33) and the internal energy
Z
the Lagrange-function
of the system is formed. For the completely adiabatic case that is described by the
Euler-equations of gas dynamics, the equations of motion
d
dt
d
dt
are derived from this Lagrange-function. With the normalized forces
explicitly given by
@ae
@"
@s
@ae
@"
@s
these equations read
Note that the masses m i of the particles cancel out and appear only implicitly in
the mass density ae and the entropy density s. The derivatives of / i occurring in the
expressions above can be written as functions of the internal particle coordinates (2.9)
and are given by (2.19). The pressure forces (2.37) acting upon a group of particles
can be interpreted as a kind of surface force [11].
As long as the specific entropies S i of the single particles are assumed to be
constant, the time evolution of the system is completely determined by the equations
of motion (2.36). This system is time reversible, a fact that contradicts the behavior of
actual fluids where heat is generated in shock fronts. Therefore it has been proposed
in [13] to add the (normalized) frictional force
F (r)
Z
to the right-hand side of the first equation in (2.40). This force damps local velocity
fluctuations and couples the particles softly together. The difference between the
velocity of the given particle and the velocity field of the surrounding flow determines
the direction of the force and the scalar function R 0 its size. The quantity in the
equation for the H i corresponding to the force (2.41) is
In smooth flows, the forces (2.41) and (2.42) will be comparatively small and will
vanish with the second power of the local particle size, but they can dominate in
shocks.
For a better quantitative understanding of the frictional forces (2.41), (2.42), we
pause for a moment and consider a little model problem. We neglect the pressure
forces, keep R constant, and fix the velocity field The equations of motion for
a single particle then read
that is, within the time the velocity of the particle is reduced by the factor
1=e. This justifies calling the local relaxation time of the system. With
the trajectory of the particle is
Thus the particle is stopped in distance T jv 0 j from its position at the given time t 0 .
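The displays of this model problem are lost in the extraction. Under the stated assumptions (pressure forces neglected, R constant, surrounding velocity field fixed to zero) and writing T = 1/R, the equation for the position presumably reduces to

```latex
q_i'' = -\frac{1}{T}\,q_i', \qquad
q_i'(t) = e^{-(t-t_0)/T}\, v_0, \qquad
q_i(t) = q_i(t_0) + T\bigl(1 - e^{-(t-t_0)/T}\bigr)\, v_0,
```

which exhibits both the reduction of the velocity by the factor 1/e within the time T and the bound T|v_0| on the distance travelled.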
The friction among the particles generates heat. Therefore the specific entropies
are no longer constant and increase in time. They obey the differential equations
The quantity ' i defined by
is the mean value
Z
of the absolute temperature around the particle and
Z
the specific heat supply. The quantity
is the density of the fluctuation energy. It can be rewritten as
which is more convenient for computational purposes. The equation (2.43) is the
second law of thermodynamics. It ensures conservation of energy as will be shown in
x3. Note that the specific entropy of a particle never decreases.
If the q i and H i and the function R are kept fixed,
is a linear mapping. To study this mapping, we utilize the energy norm given by
and the corresponding energy inner product, respectively. Note that the square of
the energy norm of the vector consisting of the velocity components q 0
i is the
total kinetic energy (2.33) of the system. Denoting by b
v i the velocities (2.16) formed
with b
i instead of q 0
, one has
Z
with the symmetric bilinear form
in the velocities v i and b v i . This shows that the linear mapping (2.49) is negative
semidefinite and selfadjoint with respect to the energy inner product. For R constant,
its eigenvalues range in the interval [\Gamma1=T; 0]. The proof of (2.51) is based on the
same arguments as the proof of Theorem 1 below.
In viscous fluids, an additional force
Z
acts upon the mass contained in a volume W t transported with the flow, that is
the mass initially contained in a given volume inside the region occupied
by mass. The symmetric tensor T is the viscous part of the stress tensor and is a
function of the density, the temperature and of the symmetric part
of the gradient of the velocity field. In Newtonian fluids, it depends linearly on D
and v, respectively, and is given by
d
in d space dimensions. The first term on the right-hand side has trace zero and
describes the shear forces in the fluid. The second part comes from the bulk viscos-
ity. The viscosity coefficients j and i are nonnegative functions of ae and ', which
guarantees
a fundamental property needed to satisfy the second law of thermodynamics. For
most gases, can be assumed.
To model the viscous forces, one starts from the observation that
Z
Z
where the ffi are continuously differentiable functions with values between 0 and 1
vanishing outside W t and tending pointwise to the characteristic function of W t for ffi
tending to 0. If I denotes the set of indices of the particles initially contained in the
region W , the mass fraction
i2I
serves as our discrete counterpart of the functions ffi . This leads to the viscous force
Z
acting upon this group of particles, or to the normalized force
F (v)
Z
acting upon the particle i, where the integrals extend over the region occupied by
mass at the given time t. Note that the expression (2.59) correctly takes into account
only that part of the boundary of the moving volume that is touched by mass from its
exterior. Therefore the construction also covers the case not directly included above
when a part of the boundary of the moving volume belongs to the boundary of the
region occupied by mass. The quantities M i corresponding to the forces (2.60) are
Z
Z
and the additional specific heat supply to the particle i is
Z
Provided the given particle is located in the interior of the region occupied by mass,
the forces (2.60) and (2.61) formally vanish for stress tensors T of divergence zero, as
continuum mechanics requires. The proof is by partial integration, where the second
integral on the right-hand side of (2.61) cancels.
The problem with this approach is that, in more than one space dimension, the
derivatives of the mass fractions i can have singularities and are not always square
integrable. Fortunately, this effect is often compensated by the behavior of T and the
properties of the shape function /. Assume that T is of the form (2.55) and that the
viscosity coefficients behave like
with ff ? 0. Let the gradient of / satisfy the pointwise inverse estimate
holds for the cubic splines (2.3), (2.6) with 3. Then there exist
constants C i depending on the number of the locally interacting particles and their
mass, size, and shape with
ae ff jr
Provided that ff \Gamma 2=k ? 0, this demonstrates that the integrands above are continuous
and tend to zero at the points at which ae tends to zero. Therefore, under the given
circumstances, one can refrain from replacing the i by regularized mass fractions
such as proposed in [11] and [12]. However, our considerations would immediately
transfer to this case.
To incorporate heat conduction, the right-hand side of the entropy equation (2.43)
has to be supplemented to
Z
The heat flux vector k is mostly given by Fourier's law
where the coefficient function 0 is the heat conductivity. For the heat flux to
be well defined, k has to satisfy an estimate similar to the estimate above for the
viscosity coefficients.
For the rest of this section, we restrict ourselves to Newtonian fluids (2.55). For
given q i and H i , the mapping
is then linear. Denoting by b
D the tensor (2.54) formed with the velocities b q 0
instead of q 0
Z
Ddx
holds, with the integrals discretized as described above. Because
D \Gammad
(tr D)(tr b
this proves that also the linear mapping (2.68) is symmetric and negative semidefinite
with respect to the energy inner product given by (2.50). For the proof of (2.69), we
refer again to the proof of Theorem 1 below.
3. The conservation of energy, momentum, and angular momentum.
The conservation of energy, momentum, and angular momentum are basic physical
properties of any closed system and must therefore be reproduced by the finite mass
method. Moreover, energy estimates are basic for the mathematical examination of
the model and, in particular, are needed to transfer the compactness and convergence
results from [11] and [13] to the present situation of particles underlying arbitrary
linear deformations. The conservation of energy also prevents the determinants of
the H i from becoming arbitrarily small since, with equations of state like (2.30), this
would require too much energy. Our first result is:
Theorem 1. The total energy composed of the kinetic energy (2.33)
of the particles and the internal energy (2.34) is a constant of motion.
Proof. Utilizing the representation (2.32) of the kinetic energy of a single particle,
one first obtains
d
dt
Therefore the equations of motion yield
d
where the terms coming from the pressure forces (2.37) have canceled with the corresponding
derivatives of V ,
comes from the frictional forces (2.41), (2.42), and
from the viscous forces (2.60), (2.61).
Before we start calculating
3 , we state two simple algebraic relations
that are repeatedly used in this section, namely that
A
for all square matrices A and B and all vectors a and b and that
A
for all square matrices A, B and C.
For example, with I the first of these two relations yields
for arbitrary vectors f . Choosing
Z
follows. Because, by the definition (2.16) of v,2
with (2.48) this leads to the representation
Z
Rq dx
of the first of the two terms above. Inserting Tr i for f in (3.3), one finds
Z
for the second part coming from the viscous forces (2.60) and (2.61). As
by the symmetry of T and
by the definition of v i and (3.2), the second integral in the formula above cancels and
the sum attains the value
Z
As by (2.43), (2.44), (2.46), (2.62) and (2.66)
Z
Z
this proves the proposition and demonstrates that exactly the right amount of kinetic
energy is converted to heat.
Surprisingly, for vanishing frictional and viscous forces, one has both conservation
of energy and entropy, which contradicts the usual conception of gas flows and would
not be possible in continuum mechanics, but which is explained by the fact that the
kinetic energy (2.33) considered here is composed of the kinetic energies of the single
particles and is not identical with the mean kinetic energy
E(t) =2
Z
known from continuum mechanics; it consists additionally of the local fluctuation
energy
Z
ae
In smooth flows, this fluctuation energy is a negligibly small part of the total kinetic
energy and will vanish with the fourth power of the particle size, but it can dominate
where the particles clash. The role of the frictional forces (2.41) and (2.42) is to
convert this kind of fluctuation energy into true internal energy.
The total momentum of the system is defined as
Z
Because of (2.1), it has the closed representation
Theorem 2. The total momentum (3.6) is a constant of motion.
Proof. The representation (3.7) and the equations of motion yield
d
dt
0with the three parts
coming from the pressure forces (2.38), the frictional forces (2.41), and the viscous
forces (2.60). With (2.19), the first part reads
@ae
@"
@s
By the definitions (2.10) of the mass density and (2.24) of the entropy density, this
means
Z
As ", under the given assumptions, is a continuously differentiable function with
compact support, this yields P 0
2 , with (2.41) one obtains
Z
As, by the definition (2.16) of the velocity field v,
As the i form a partition of unity, finally also the part
Z
resulting from the viscous forces vanishes.
Last, we consider the scalar quantities
Z
ae(x;
with fixed skew-symmetric matrices W, that have, by the definition of v and because
of (2.1) and (2.2), the closed representation
In three space dimensions, the components of the total angular momentum
Z
are quantities of this form, and vice versa all quantities of the form (3.8) can be
composed of the components of the angular momentum (3.10). Therefore, for the
three-dimensional case, the following theorem states that the angular momentum of
the system is a constant of motion. In other space dimensions, one gets less or more
first integrals, corresponding to the dimension of the space of the skew-symmetric
matrices.
Theorem 3. For arbitrarily given skew-symmetric matrices W, the scalar quantity
L defined by (3.8) is a constant of motion.
Proof. As a \Delta vectors a and A \Delta
one gets
dL
dt
from the representation (3.9) of L. The equations of motion therefore yield
dL
0where the first part
comes from the pressure forces (2.38) and (2.39), the second part
from the frictional forces (2.41) and (2.42), and the third part
from the viscous forces (2.60) and (2.61).
We show that each of these three parts vanishes separately. From the representation
of the partial derivatives of / i , (3.1) and (3.2), and the skew-symmetry
of W, first
follows. Thus (2.38) and (2.39) yield
@ae
@"
@s
or, again taking into account the definitions of ae and s, summed up
Z
As " is a continuously differentiable function with compact support, partial integration
gives
and therefore L 0
because W is skew-symmetric. In the same way, (3.1) yields
and therefore, with (2.41) and (2.42),
Z
As, by the definition (2.16) of the velocity field v,
one obtains L 0
(3.2), the symmetry of the viscous stress tensor T and the
skew-symmetry of W give
and therefore as above
Z
As the i form a partition of unity, finally also L 0
4. The discretization of the integrals. Our particle model of compressible
fluids is purely Lagrangian. It is invariant to arbitrary translations and rotations and,
as it concerns the shape the particles can attain, even to every linear transformation
of space. These properties must be reflected by the quadrature formula needed to
evaluate the integrals that define the forces acting upon the particles.
We start from the observation that the integral of a scalar or vector function f
weighted by the mass density (2.10) can be written as a sum
Z
Z
of contributions from the single mass packets. The integrals on the right-hand side of
(4.1) live on the reference domain on which the shape function ψ is strictly positive.
They are replaced by a fixed quadrature formula
Z
ff g(a )
with weights α_ν > 0 and nodes a_ν inside the support of the shape function ψ, which
is considered as the weight function here. This results in the composite rule
Z
Z
f d
with weights m_j α_ν and nodes q_j + H_j a_ν that is based on the given discretization of
mass and not on a subdivision of space into elementary cells. Therefore it has the
desired invariance properties.
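A minimal sketch of the composite rule (4.3); the reference nodes a_ν and weights α_ν of (4.2) are left as inputs, and the concrete five-point rule of Table 1 below is not reproduced here.

```python
import numpy as np

def integrate_mass_weighted(f, particles, ref_nodes, ref_weights):
    """Approximate the mass-weighted integral  int f(x) rho(x) dx  by
    sum_j m_j sum_nu alpha_nu f(q_j + H_j a_nu),  cf. (4.1)-(4.3)."""
    total = 0.0
    for p in particles:                       # p: dict with keys m, q, H
        for a, alpha in zip(ref_nodes, ref_weights):
            total += p["m"] * alpha * f(p["q"] + p["H"] @ a)
    return total

# sanity check: f = 1 returns the total mass sum_j m_j, since the reference
# rule integrates the constant 1 exactly with respect to the weight psi.
```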
Utilizing the quadrature rule (4.3), one replaces the potential energy (2.34) by
the fully discrete expression
Z
denotes the specific internal energy. With this discrete potential, the
normalized forces
(4.
and the temperatures
are formed. As both the specific internal energy itself and the quadrature points
depend on the q i and H i , the forces F i and M i correspondingly split into two parts
that are of different structure. Into
F (1)
@ae
@~"
@s
@ae
@~"
@s
d
all quadrature points q j +H j a contained in the support of the given particle i enter
whereas
F (2)
and the corresponding term
M (2)
are completely determined by the function values and the first order derivatives of
the mass density and the entropy density at the quadrature points q i +H i a assigned
to the particle i itself. The local temperature (4.6) around the particle i transfers to
Z @~"
@s
The forces (4.7) replace the forces (2.38) and (2.39) in the Lagrangian equations of
motion (2.36) and (2.40), respectively, and the local temperature (4.6) the local temperature
(2.44) in the entropy equation (2.43). Note that, through the assumptions
we made on "(ae; s) and because
ff
the quantities above behave numerically well. The frictional forces (2.41) and (2.42)
and the heat supply (2.46) are directly discretized using the quadrature rule (4.3).
For the viscous forces (2.60) and (2.61), the heat supply (2.62) and the heat flux in
(2.66) one can proceed correspondingly.
The method reacts sensitively to the choice of the weighted quadrature rule (4.2)
for the reference domain. Experience has shown that quite a few quadrature points
are needed to exploit the full accuracy of the approach. Too small a number of
quadrature points leads to instabilities, in particular when the quadrature points are
not properly spaced; a high polynomial accuracy alone does not suffice. For the
Table 1
The basic one-dimensional quadrature rule for cubic B-splines (the nodes and weights of the rule are not recoverable in this copy).
tensor-product third order B-splines described at the beginning of §2, we had good
experience with the tensor-product counterpart of the one-dimensional quadrature
rule given by Table 1. This quadrature formula is exact for fifth order polynomials
and assigns 5^2 = 25 quadrature points to each particle in two space dimensions.
The total number of quadrature points q j +H j a entering into the computation for
a given particle i depends on the overlap of the B-splines and will generally be much
higher, not much less than in two space dimensions.
The conservation of energy, momentum, and angular momentum carries over to the
present case of discretized integrals. The proofs from the last section can be taken
over with minor changes concerning only the pressure terms. For the proof that the
total momentum (3.6) remains a constant of motion, one has only to utilize that
Z
Z
r~" d
such that the quantity
vanishes. In the proof that the quantities (3.8) associated with the angular momentum
are constants of motion, the relations
Z
Z
enter, giving L 0
The essential idea to compute the forces acting upon the particles is to separate
the quadrature points from the particles. This reflects the fact that particles do not
interact directly with each other but only with global fields like the mass density or
the velocity.
Two different data structures are used. The first data structure is associated with
the particles themselves. It contains the fixed particle masses m i , the values q i , H i ,
i and S i finally to be determined, storage for the local temperatures ' i , and for
the forces acting upon the particles and the heat supply to the particles, of course.
The second data structure is associated with the quadrature points. First, it contains
their positions and the weights and then all necessary
global information at x Z , like the values ρ Z of the mass density, s Z of the entropy
density, j Z of the mass flux density or v Z of the velocity, quantities associated with
the viscous stress tensor T if needed, and the other field information
needed in the computation.
The procedure to compute the integrals involved in the differential equations is
then quite simple and consists of three phases. We will describe this procedure only
for the inviscid case; the case of additional viscous forces is similar.
In the first phase, the quadrature points are generated and the values ae Z of the
mass density (2.10), s Z of the entropy density (2.24), j Z of the mass flux density
of the intermediate quantity
as well as the gradients of ae and s at the quadrature points are assembled. This is
a loop on all particles. In this phase, information is transferred from the particles to
the quadrature points.
In the second phase, the values of the derivatives
@ae
@s
and, to incorporate the frictional forces (2.41) and (2.42), of
ae Z
are computed. These operations work only on the single quadrature points.
In the final third phase, the discrete integrals determining the forces, the heat
supplies and the local temperatures are computed using the results of phase 1 and
phase 2. In this phase, information is transferred from the quadrature points back to
the particles.
Phase 1 and phase 3 require an efficient access to the quadrature points contained
in the support of a given particle. Search trees can be used for this purpose, which
have to be set up after the quadrature points have been generated. The information
needed to access the quadrature points assigned to a given particle i has
to be stored separately.
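The three phases can be summarized in the following schematic sketch (Python, with dictionaries standing in for the two data structures); the field names, the callables energy_derivatives and force_density, and the access function in_support are hypothetical placeholders for the quantities described above.

```python
def assemble_forces(particles, quad_points, in_support,
                    energy_derivatives, force_density):
    """Schematic three-phase evaluation of the discrete integrals (inviscid case)."""
    # Phase 1: particles -> quadrature points (gather the global fields).
    for z in quad_points:
        z["rho"], z["s"] = 0.0, 0.0
    for p in particles:
        for z in in_support(p, quad_points):   # e.g. via a search tree
            w = p["shape"](z["x"])             # value of the particle's shape function
            z["rho"] += p["m"] * w             # mass density (2.10)
            z["s"]   += p["m"] * p["S"] * w    # entropy density, cf. (2.24)
    # Phase 2: pointwise operations on the quadrature points only.
    for z in quad_points:
        z["de_drho"], z["de_ds"] = energy_derivatives(z["rho"], z["s"])
    # Phase 3: quadrature points -> particles (assemble forces, temperatures, ...).
    for p in particles:
        p["F"] = 0.0
        for z in in_support(p, quad_points):
            p["F"] += z["weight"] * force_density(p, z)   # weight = m_j * alpha_nu
    return particles
```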
5. A time-stepping procedure. The remaining big system of differential equations
for the particle positions q i , the deformation matrices H i and the entropies S i
has to be solved numerically. In this section, we present a simple, robust second order
method for this purpose, adapted to the structure of this system and the properties
of its solutions. The method has been proposed to us by Christian Lubich [7] and is
inspired by the recent work of Hochbruck and Lubich [4] on exponential integrators.
Its stability range is de facto independent of the strength of the frictional and viscous
forces but shrinks, as with all schemes for this type of equation, when approaching
incompressibility.
First, we combine the vectors q i and the matrices H i to a big vector
of dimension (d and the S i to a vector z of dimension N . The system of
differential equations (2.40) together with the additional frictional forces (2.41) and
(2.42) and the viscous forces (2.60) and (2.61) can then be written in the form
and the entropy equation reads
The first term on the right-hand side of (5.2) corresponds to
with the F i and M i the discrete counterparts (4.7) of the pressure forces (2.38) and
(2.39). The second term
corresponds to the discretized versions of the frictional forces (2.41) and (2.42) and
of the viscous forces (2.60) and (2.61) in Newtonian fluids (2.55). The notation (5.5)
reflects that these forces depend only linearly on q 0
. As the considerations in
x2 have shown, the matrices are symmetric negative semidefinite with
respect to the energy inner product determined by (2.50). Finally, the right-hand side
of (5.3) corresponds to
with the δQ i the discretized versions of (2.46) and (2.62), respectively, plus possibly
a term coming from heat conduction. This function depends nonlinearly on all its
arguments.
The proposed method for the numerical solution of the system (5.2), (5.3) is a
leap-frog scheme. To come from the approximations for y ′ at time t k−1/2 and for y
and z at time t k to the new approximations at times t k+1/2 and t k+1 , respectively,
one first computes
The new value for the derivative of y is then
and the new value for y itself is
where OE 0 is given by
\Theta OE 2
and OE 1 and OE 2 are the entire functions
The new value z k+1 finally is implicitly determined by the equation
z
Using the new b k+1 from the next time step, the approximation
is obtained at little additional cost. This approximation does not enter into the further
computations. The computation starts with the approximation
for y ′ at t 1/2 and the approximation
for y itself at t 1 ; the quantities z 1 and y ′
are then computed as above.
The method is constructed such that it would yield the exact values of y and y ′
for f and A constant. For vanishing frictional and viscous forces, i.e., for A = 0,
the scheme becomes the time-reversible integrator
known as the Verlet method in molecular dynamics, where (5.13) reduces to
With the starting step above, the Verlet method is equivalent to the one-step method
k+1=2 is now considered as the intermediate value
The Verlet method is symplectic and therefore has very favorable properties for Hamiltonian
systems such as those arising in the present case of pure pressure forces [3]. In particular,
it preserves momentum and angular momentum: Let
denote the total momentum (3.7) and the scalar quantity (3.9).
As Ω is a skew-symmetric
matrix and
by Theorem 2 and Theorem 3 and the considerations in §4, respectively, both quantities
retain their value in the transition from one time level to the next.
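For the pure pressure case (A = 0), one step of the resulting Verlet/leap-frog integrator can be sketched as follows; y collects the positions and deformation matrices, v_half is the approximation of y′ at the preceding half step, and f stands for the (already normalized) force function. The names are illustrative.

```python
def leapfrog_step(y, v_half, tau, f):
    """One Verlet / leap-frog step for y'' = f(y).
    v_half approximates y' at t_{k-1/2}; returns (y_{k+1}, y'_{k+1/2})."""
    v_half_new = v_half + tau * f(y)   # kick: update the half-step velocity
    y_new = y + tau * v_half_new       # drift: advance the positions
    return y_new, v_half_new
```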
The matrix-vector products OE( A)b can be computed by a Krylov-space method
based on the symmetric Lanczos process using the energy inner product. Starting from
the vector b, after m steps the Lanczos algorithm delivers a matrix V consisting of
columns orthonormal with respect to this inner product, a symmetric tridiagonal
matrix H of dimension m × m, and a rank one matrix R such that
The matrix-vector products are then approximated by
With a spectral decomposition
of the small symmetric tridiagonal matrix H, OE( H) is given by
Note that the computation of Ax corresponds to an evaluation of the discretized
frictional forces (2.41) and (2.42) and, if necessary, of the discretized viscous forces
(2.60) and (2.61) and can be performed without any explicit knowledge of A. The
approximation (5.23) converges very fast for entire functions OE, much faster than the
conjugate gradient method for the solution of a linear system with the coefficient
matrix A. Usually, very few Krylov steps suffice.
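A compact sketch of this Krylov evaluation is given below. It uses the Euclidean inner product for simplicity (whereas the text prescribes the energy inner product), ignores the rank-one remainder R, and writes the product as φ(τ²A)b, matching the τ-dependence of the functions φ1, φ2 above — an assumption, since the displayed formulas are garbled. Here matvec stands for the matrix-free application of A (an evaluation of the discretized frictional and viscous forces), and phi is the entire function, applied to the eigenvalues of the small tridiagonal matrix; phi must accept numpy arrays.

```python
import numpy as np

def phi_times_vector(matvec, b, phi, tau, m):
    """Approximate phi(tau^2 A) b by m Lanczos steps (Euclidean inner product)."""
    n = b.shape[0]
    beta0 = np.linalg.norm(b)
    V = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m)
    V[:, 0] = b / beta0
    v_prev = np.zeros(n)
    k = m
    for j in range(m):
        w = matvec(V[:, j])
        if j > 0:
            w = w - beta[j - 1] * v_prev
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0.0:          # invariant subspace found, stop early
                k = j + 1
                break
            v_prev = V[:, j]
            V[:, j + 1] = w / beta[j]
    H = np.diag(alpha[:k]) + np.diag(beta[:k-1], 1) + np.diag(beta[:k-1], -1)
    lam, Q = np.linalg.eigh(H)          # spectral decomposition of the small matrix
    e1 = np.zeros(k); e1[0] = 1.0
    y = Q @ (phi(tau**2 * lam) * (Q.T @ e1))
    return beta0 * (V[:, :k] @ y)       # approximation of phi(tau^2 A) b
```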
In the examples presented in the next section, we solved the equation (5.12)
approximately by two steps of a simple fixed point iteration. More sophisticated
methods will be needed when heat conduction is present.
6. Examples. In this section, we have compiled some two-dimensional examples
that exhibit a typical behavior and compare the numerical with the known exact
solutions. As shape function / of the particles we have used the bicubic B-spline
(2.3), (2.6) together with the 25-point quadrature rule described in §4.
To find a good approximation of the given initial data, we assume that the region
occupied by mass at time t = 0 is contained inside an axiparallel rectangle and cover
this rectangle by a regular grid of gridsize h. The initial matrices are then H i
and the initial positions are selected from the gridpoints, which we denote by x k here.
To determine the particle masses, we first compute the coefficients a k ≥ 0 minimizing
an appropriately chosen distance between the linear combination
a k
and the given initial mass density ρ, for example by the projected Gauss-Seidel
method. At the points x k for which a k > 0, particles with masses m i = a k are located.
The remaining gridpoints are ignored in the sequel.
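The text does not specify which distance is minimized; assuming a quadratic (least-squares type) functional 0.5 aᵀG a − cᵀa with the Gram matrix G of the shifted basis functions and the moments c of the initial density against them, the projected Gauss-Seidel iteration takes the following simple form. This is an illustrative sketch, not the paper's formulation.

```python
import numpy as np

def projected_gauss_seidel(G, c, n_sweeps=200):
    """Minimize 0.5*a^T G a - c^T a subject to a >= 0 by projected
    Gauss-Seidel sweeps.  G: symmetric positive definite Gram matrix of the
    shifted B-splines; c: moments of the target density against them.
    The quadratic objective is an assumption, not taken from the paper."""
    a = np.zeros_like(c, dtype=float)
    for _ in range(n_sweeps):
        for k in range(len(c)):
            r = c[k] - G[k] @ a + G[k, k] * a[k]   # residual without the k-th term
            a[k] = max(0.0, r / G[k, k])           # projection onto a_k >= 0
    return a
```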
The initial velocities can be determined by solving the interpolation problem
at the particle positions q i (0). Setting
the discrete mass flux will virtually be a fourth order approximation to the continuous
mass flux ρv at time t = 0 for sufficiently smooth data. The specific entropies S i (0) are
computed in the same way interpolating the entropy density s. The alternative is to
choose the initial velocities correspondingly to (2.21) as
which works better with dominating external forces and has the advantage that velocity
fields linearly depending on x like rigid body motions are exactly reproduced.
For reference, we recall the Navier-Stokes equations
@ae
@t
ae
@t
@s
@t
that govern the time evolution of a smooth compressible flow in the absence of heat
conduction and that transfer to the Euler equations for the inviscid case
These equations have to be completed by material laws like (2.30) and (2.55) and the
basic thermodynamic relation (2.28). Often, the equations are stated in a different,
but mathematically equivalent form, with (6.6) also written as conservation law and
replaced by the conservation law for the energy. For the special case of inviscid
ideal gases (2.30), the entropy equation (6.7) can be replaced by the equation
@
@t
coupling pressure and velocity.
6.1. Gas clouds. The first example serves, more or less, as a consistency check
and shows how accurate the method can be. We reproduce the self-similar solutions
of finite mass of the Euler equations. The functions (6.9), (6.10) satisfy the continuity
equation (6.5) regardless of the choice of the density profile Ψ and the matrix function
H(t), and of course also the entropy equation (6.7). Starting from the internal energy
of a barotropic ideal gas (2.30), the momentum equation (6.6) reads
ae 0
has been set and H is an abbreviation for the determinant of H.
Up to normalization,
ae
is the only function with compact support that satisfies the relation (6.11) for an
appropriately chosen function H(t). For 1! 2, the function (6.12) is continuously
differentiable at 1. The corresponding matrix functions H(t) are the solutions of
the differential equation
ae 0
det H(t)
with initial values det H(0) ? 0. This differential equation coincides with the differential
equation from (2.40) one would obtain for the motion of a single
with the shape function (6.12).
If one lets the particles move according to the differential equations (2.21) in the
velocity field (6.10), the whole configuration would simply undergo a series of linear
transformations. So it is no wonder that a high accuracy can be reached in this
example. In the computation presented here, we started from the material constants
and the initial values
yielding a rotating, first contracting and then again expanding gas ball. The initial
positions q i (0), the masses m i and the matrices H i (0) have been determined as described
in the introduction to this section. To reproduce the initial velocity field (6.10)
exactly, the initial velocities q ′ i (0) have been
fixed by (6.4), and not via
the linear system (6.2) as in the other examples. The initial entropies are S i
We set R = 250 in this example.
We solved the problem over the period 0 ≤ t ≤ 100 with the stepsize
in the time-stepping procedure from §5, starting from a grid of gridsize
21 × 21 B-splines, finally yielding 325 particles. Fig. 1 shows the initial configuration
of the particles at time t = 0 and the final configuration at time t = 100, the latter
rescaled from the radius determined by (6.13) and (6.15) to the initial radius. The
Fig. 1. The particles in the gas cloud example at times t = 0 and t = 100.
initial configuration of the particles is almost retained over this long period, although
the gas ball first shrinks to the radius 0.515 and then again expands to the final radius
516.9, that is, by more than a factor of one thousand. The exact and the approximate
solution cannot be distinguished with the naked eye; the maximum norm of the error
is about one per thousand of the maximum norm of the solution. Fig. 2 shows a cross
section of the mass density together with the differences to the approximate mass
densities along the line x
are again rescaled to the same radius and the errors have been multiplied by a
factor of one thousand in order to compare them with the exact solution. Astonishingly, the
Fig. 2. Mass density and absolute error in the gas cloud example at times
relative size of the error does not increase. Probably one could run this example to
infinity.
There is a viscous counterpart of the solutions above, with the same density profile
and the viscosity coefficients j and i constant multiples of 'ae. Also these solutions are
well reproduced, albeit with a slightly reduced accuracy. In examples like these, the
shear viscosity must counterbalance the bulk viscosity; otherwise physical instabilities
occur.
6.2. Shock fronts. The second example shows how the particle method behaves
when shocks develop in an inviscid fluid and demonstrates how the frictional forces
(2.41), (2.42) work. In this example, the transformation of kinetic energy into internal
energy plays a dominant role.
We consider a spherically symmetric shock front |x| = ct in an ideal gas (2.30)
and make for the velocity, the density, and the pressure the ansatz
r
x
r
r
r
for |x| > ct and
with constant values ρ 1 and p 1 for |x| < ct.
To determine the values c, ρ 1 and p 1 and the functions V (), M () and P () for
1/c from the initial state
and
we first observe that the differential equations (6.5), (6.6) and (6.8) transfer to
where d is again the space dimension. For d > 1, these differential equations can be
used to compute V , M and P numerically. For
The next step is to express c, ρ 1 and p 1 as functions of the values
fixed for the time being. Let u 0 and u 1 be the radial components of
the velocity on both sides of the shock front relative to the velocity of the front itself.
As continuum mechanics teaches (see [1], for example), the mass flux, the momentum
flux and the energy flux across the shock front are continuous. This is equivalent to
the relations
With the help of (6.24) and (6.25), the equation (6.26) can be replaced by the Hugoniot
equation
that equivalently expresses the conservation of energy.
For an ideal gas (2.30), the pressure and the internal energy are coupled by the
equation of state
The conservation of mass (6.24) leads to
c
and with (6.28), the Hugoniot equation (6.27) takes the form
Inserting (6.29) and (6.30) into (6.25), one finally obtains a quadratic equation for
the velocity of the shock front. Only the positive solution
of this equation is of interest; the negative solution leads to a rarefaction shock that
is not compatible with the second law of thermodynamics.
Taking into account the functional dependence (6.23), (6.31) represents an equation
for the unknown velocity c and the corresponding value
This equation can be solved numerically. With (6.29) and (6.30), one then also obtains
the values ρ 1 and p 1 for the density and the pressure inside the shock. As an example,
with
one obtains
in two space dimensions. The values (6.32) mean that, at time t = 0, the fluid collides
in the origin at nearly twice the speed of sound, which takes the value
here.
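Since the displayed relations (6.24)-(6.31) are not legible in this copy, the following sketch only illustrates the computation they describe: given the state (ρ0, p0) and the radial velocity v0 just outside the front, and assuming the gas is at rest inside, the jump conditions for mass, momentum and energy are solved numerically for the front speed c and the inner state (ρ1, p1). The function names and the use of scipy.optimize.fsolve are illustrative; of the roots, only the compressive one is physically admissible, as noted above.

```python
from scipy.optimize import fsolve

def shock_state(rho0, p0, v0, gamma):
    """Solve the jump conditions across an outward moving front |x| = c*t
    for an ideal gas: outer state (rho0, p0, radial velocity v0 < 0),
    gas at rest inside.  Returns (c, rho1, p1)."""
    def e(p, rho):                       # specific internal energy of an ideal gas
        return p / ((gamma - 1.0) * rho)

    def jump(x):
        c, rho1, p1 = x
        u0, u1 = v0 - c, -c              # velocities relative to the front
        return [rho0 * u0 - rho1 * u1,                          # mass flux
                rho0 * u0**2 + p0 - (rho1 * u1**2 + p1),        # momentum flux
                e(p1, rho1) - e(p0, rho0)
                - 0.5 * (p0 + p1) * (1.0 / rho0 - 1.0 / rho1)]  # Hugoniot relation

    return fsolve(jump, x0=[1.0, 2.0 * rho0, 10.0 * p0])        # crude initial guess
```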
We set R = 250ρ in this example. The idea behind this form of R is that friction
should grow when the density of matter increases, as physical intuition suggests. In
our actual computation we placed 251 × 251 particles on a regular grid covering the
square [−5, 5] 2 and followed their motion in the time interval 0 ≤ t ≤ 1. Approximately
10000 of these particles are contained in the region [−2, 2] 2 of interest here
at time t = 0 and slightly more than 20000 at time t = 1. With the time stepsize
1/200, the exponential integrator needed 955 Krylov steps, at most 5 for each of
the 200 time steps, a typical behavior also observed in other examples.
Fig. 3 shows a cross section of the exact and the approximate solution along the
x 1 -axis on the interval −2 ≤ x 1 ≤ 2 and Fig. 4 the corresponding contour lines of the
approximate density at time 1. Both the position and the height of the shock and
the solution profile outside the shock are very well reproduced and the approximate
solution retains its spherical symmetry. The shock is practically as well resolved as
possible with particles of the given size, and as a closer look at the results of the
single time steps shows, the shock does not smear during the computation. Except
for the shock region itself and a small neighborhood of the origin, the error of density
and pressure is less than one per thousand. The same holds for the velocity field
outside the shock. When passing the shock, the particles almost lose all their kinetic
energy such that the velocity is practically zero inside the shock. The error in the
mass density around the origin is probably due to the discontinuous initial data; the
region over which this error extends seems to shrink proportionally to the particle
size. An interesting observation is that behind the shock the particles again rearrange
to a rectangular grid, as can be seen in Fig. 5.
Fig. 3. Density, pressure, and the velocity in the shock front example at time
Fig. 4. Contour lines of the density in the shock front example at time
Fig. 5. Particle contours in a neighborhood of the shock.
6.3. Shear flows. Our last example serves as a test for the modeling of the
shear viscosity. We keep the mass density and the pressure
constant, neglect the heat production, that is, we set the right-hand side of (6.7) to zero,
and consider velocity fields of the form
where n is a given unit vector fixing the direction of the flow and
component of x in the direction of a unit vector e orthogonal to n. Again, the continuity
equation is fulfilled independent of the choice of the functions OE and c. For a
Newtonian fluid (2.55) with constant shear viscosity j, the momentum equation (6.6)
reads
This means that either OE() is linear and c(t) constant, that is
up to a translation in direction of e, or correspondingly
with
The solution corresponding to the velocity field (6.37) is remarkable in so far as
it is reproduced exactly provided that the initial data are exact and the integrals
defining the forces are evaluated exactly. The reason is that all forces acting upon the
particles then vanish such that all particles are distorted in the same way and mass
density and pressure remain constant.
The solution with the velocity field (6.38) represents a more severe test. If one
lets the particles move according to the differential equations (2.21) in the velocity
field (6.38), the particle positions q i and the matrices H i at time t would be
with the function α(t) given by
Thus, in a little while, the particles have practically lost all their kinetic energy and
get stuck.
In our experiments, we used the data
fixing the physical properties of the fluid, set v as in the
first example. To exclude influences coming from an alignment of the particles with
the flow and to avoid problems with the infinite extension of the flow region, we set
p' 3'
p:
The problem then becomes periodic with period interval [0, 3] × [0, 2]. All computations
were for the time interval 0 ≤ t ≤ 1 with time stepsize
The particles in Fig. 6 stem from such a computation. They have initially been
located on an axiparallel grid of sidelength Fig. 7 shows how the velocity
Fig. 6. The deformation of particles in the shear flow example.
field and the errors corresponding to the initial gridsizes
evolve in time, where the function values have been sampled on a 500 × 500
grid and the associated l 2 -norm has been taken as distance measure. For better
comparison, the errors have been multiplied by the factors 25,
respectively. The picture demonstrates that the viscous time scale is
Fig. 7. The time evolution of the velocity and the rescaled errors in the shear flow example.
perfectly reproduced. In the transition from 30 × 20 to 60 × 40 and from 60 × 40 to
120 × 80 particles, the error decreases approximately by a factor of 16, a clear fourth
order convergence, and a fine confirmation of the finite mass method.
--R
A mathematical introduction to fluid mechanics
New York
Solving ordinary differential equations
On Krylov subspace approximations to the matrix exponential operator
Lehrbuch der Theoretischen Physik
Lehrbuch der Theoretischen Physik
Heuristic numerical work in some problems of hydrodynamics
Proposal and analysis of a new numerical method in the treatment of hydrodynamical shock problems
A particle model of compressible fluids
Particles of variable size
Entropy generation and shock resolution in the particle model of compressible fluids
A convergence analysis for the finite mass method for flows in external force and velocity fields
--TR | finite mass method;gridless discretizations;compressible fluids |
359222 | Discrete Kinetic Schemes for Multidimensional Systems of Conservation Laws. | We present here some numerical schemes for general multidimensional systems of conservation laws based on a class of discrete kinetic approximations, which includes the relaxation schemes by S. Jin and Z. Xin. These schemes have a simple formulation even in the multidimensional case and do not need the solution of the local Riemann problems. For these approximations we give a suitable multidimensional generalization of the Whitham's stability subcharacteristic condition. In the scalar multidimensional case we establish the rigorous convergence of the approximated solutions to the unique entropy solution of the equilibrium Cauchy problem. | Introduction
. In this paper we present a new class of numerical schemes
based on a discrete kinetic approximation for multidimensional hyperbolic systems of
conservation laws. Consider a weak solution u
K to the Cauchy
problem
where the system is hyperbolic (symmetrizable) and the flux functions A d are locally
Lipschitz continuous on R K with values in R K . We approximate problem (1.1), (1.2)
by a sequence of semilinear systems
with Cauchy data
Here # is a positive number, # d are real diagonal L × L matrices, P is a real constant
coefficient K × L matrix, and M is a Lipschitz continuous function defined on R K
# Received by the editors August 5, 1998; accepted for publication (in revised form) November
15, 1999; published electronically May 23, 2000. This work was partially supported by TMR project
http://www.siam.org/journals/sinum/37-6/34307.html
Math-ematiques Appliqu-ees de Bordeaux, Universit-e de Bordeaux 1, 351 cours de la Lib-eration,
Talence, France (aregba@math.u-bordeaux.fr).
# Istituto per le Applicazioni del Calcolo "M. Picone," Consiglio Nazionale delle Ricerche, Viale
del Policlinico 137, I-00161 Rome, Italy (natalini@iac.rm.cnr.it).
with values in R L . Moreover we suppose the following relations are satisfied for every
fixed
rectangle Ω in R K :
It is easy to see that, if f # converges in some strong topology to a limit f and if
0 converges to u 0 , then Pf is a solution of problem (1.1), (1.2). In fact system
(1.3) is just a BGK approximation for (1.1); see [5, 12] and references therein. The
interaction term on the right-hand side is given by the difference between a nonlinear
function, which describes the equilibria of the system, in our case M(Pf ), and the
unknown f . Our purpose here is to construct numerical schemes for system (1.3) in
order to obtain a numerical approximation of (1.1) in the relaxed limit
As is well known for general relaxation problems (see [71, 44, 55]), the approximation
(1.3) needs a suitable stability condition to produce the correct limits. In the framework
of general 2 × 2 quasi-linear hyperbolic relaxation problems, this condition is
known as the subcharacteristic condition. In section 2, we shall argue in the spirit
of the Chapman-Enskog analysis [71, 44, 14], to find the following stability condition
for
for all # 1 , . , # D ) # (R K ) D and every u belonging to some fixed
rectangle Ω ⊂ R K .
Actually, in [54] and for the scalar case convergence of Pf # towards the
Kruzkov entropy solution of (1.1), (1.2) has been obtained under a slightly stronger
version of condition (1.6): every component of the Maxwellian function M is monotone
nondecreasing on the interval I. The main tool in that case is the fact that under
this condition the right-hand side in system (1.3) is quasi-monotone in the sense of
[25] and this implies special comparison and stability properties on the corresponding
system. Unfortunately, similar properties are not verified for nontrivial examples in
the general case K > 1. Therefore, for the systems, we shall use just condition (1.6).
Note that for certain families of kinetic approximations, the results of [6] show that
(1.6) is also a necessary condition for (1.3) to be compatible with the entropies of
(see section 3).
The continuous kinetic approximation of systems of conservation laws in gas dynamics
is classical. In particular, Euler equations can be formally obtained as the
fluid dynamical limit of the Boltzmann equation; see [12, 13]. The rigorous theory of
kinetic approximations for solutions with shocks is recent and the main results were
obtained only in the scalar case. The first result of convergence of a fractional step
BGK approximation with continuous velocities, with an entropy condition for the
limit (weak) solution, was proven in [7] (see also [22]). Another convergence result
was given later in [60], using a continuous velocities BGK model. An important related
kinetic formulation can be found in [41]. Other results have been established for
special systems or partially kinetic approximations [42, 29, 9, 40]. Related numerical
schemes can be found in [19, 59]; for a general overview and many other references
see [24]. Discrete velocities models and their fluid dynamical limits have also been
considered by many people; see the review paper [61]. In particular we mention the
studies on the Broadwell model [11, 72]. Convergence for various relaxation models
was also investigated in [15, 14, 17, 47, 68, 74, 32, 33, 67]. The analysis of the stability
of various nonlinear waves for relaxation models, and in particular for the Jin-Xin
relaxation approximation, can be found in [44, 16, 50, 43, 46, 49]. A general survey of
recent results on relaxation hyperbolic problems is given in [55]. Let us also point out
some numerical references related to our approach. A lot of computational work has
been done in the last ten years in the very closed framework of lattice Boltzmann and
BGK models; see [21, 62] and references therein. Let us also mention the monotone
schemes of [8], which are an example of numerical (relaxed, i.e., discretization
of our construction in the scalar case. Other numerical investigations for
hyperbolic problems with relaxation can be found in [37, 57, 58, 73, 4, 3, 18, 70, 31].
Our numerical schemes are constructed by splitting (1.3) into a homogeneous linear
part and an ordinary di#erential system, which is exactly solved thanks to the
particular structure of the source term. In the scalar case this construction allows us
to preserve the monotonicity properties of (1.3) and to prove convergence results. Our
approximation framework generalizes to systems the construction presented in [54] for
the scalar case, and shares most of the advantages of the relaxation approximation as
proposed in [30] (see also [53, 2, 70]): simple formulation even for general multidimensional
systems of conservation laws and easy numerical implementation, hyperbolicity,
regular approximating solutions. Actually the main advantage, especially in the multidimensional
case, of both the approximations, seems to be the possibility of avoiding
the resolution of local Riemann problems in the design of numerical schemes. Moreover
our framework presents some special properties:
- the scalar and the system cases are treated in the same way at the numerical
level;
- all the approximating problems are in diagonal form, which is very convenient for
numerical and theoretical purposes;
- we can easily change the number and the geometry of the velocities involved
in our construction to improve the accuracy of the method.
In this sense our work shares most of the spirit of [56, 34, 45], where very flexible
and simple schemes, which do not need Riemann solvers in their construction, were
proposed to approximate general multidimensional systems of conservation laws. Let
us also observe that the presented algorithms are surely not optimal, but they just
illustrate how to construct an efficient and simple approximation even for very complicated
systems. This could be useful, for example, in the numerical investigation of
large systems like those arising in extended thermodynamics and other generalized
moment closure hierarchies for kinetic theories [52, 38, 1]. Further investigations will
be addressed to the construction of high order schemes.
The plan of the article is as follows: In section 2 we establish the stability condition
and define the monotone Maxwellian functions for the scalar case. In section
3 we propose some examples of stable approximations in the class (1.3). The issue of
entropy is discussed. In section 4 we set the numerical schemes and section 5 is devoted
to the convergence results in the scalar multidimensional case. Some numerical
experiments are given in section 6.
After the completion of this work we received a preprint from Serre [64], where he
proves, by using the methods of compensated compactness, the convergence for the
Jin-Xin relaxation approximation and some of the discrete kinetic approximations
contained in the present paper to (one-dimensional) genuinely nonlinear hyperbolic
systems of conservation laws having a positively invariant domain. The convergence
of related first-order numerical schemes has been proved by Lattanzio and Serre in
[35]. The stability conditions are in both cases strongly related to our condition (1.6).
2. Chapman-Enskog analysis and monotone Maxwellian functions. In
this section we discuss the stability conditions for the discrete kinetic approximation
(1.3). Since the local equilibrium for that system is given by the hyperbolic system
(1.1), it is natural to seek for a dissipative first-order approximation to (1.3), which
is the analogue of the compressible Navier-Stokes equations in the classical kinetic
theory. In principle we could try to use the theory developed in a more general context
in [14]. Unfortunately it is easy to realize that their main assumption, namely the
existence of a strictly convex dissipative entropy for the relaxing system (1.3), which
verifies in particular the requirement (iii) of Definition 2.1 of [14], is not satisfied in
the present case and we need it for a different construction.
Let f # be a sequence of solutions to (1.3)-(1.4) parametrized by #, for a fixed initial
data f 0 , which for simplicity we can choose as a local equilibrium, i.e., f 0
for some u 0 # L # (R D , R K ). Set
Then, from (1.3) and the compatibility assumptions (1.5), we have
Consider a formal expansion of f # in the form
Then
Reporting in (2.1) yields
Now we have
Then, up to the higher order terms in (2.4), we obtain
where
is a K × K matrix. We can now state our stability condition.
Proposition 2.1. The first-order approximation to system (1.3) takes the form
and it is dissipative provided that the following condition is verified:
for all # 1 # R K , . , # D # R K and every u belonging to some fixed
rectangle Ω ⊂ R K .
As we shall see in the next section parabolicity of (2.6) is, at least in some cases,
necessary for the compatibility of (1.3) with the entropies of (1.1). But let us recall
that, even in the scalar case, the expansion (2.6) cannot be considered in any way
as a rigorous asymptotic description of system (1.3). Actually, to prove our rigorous
convergence results, we need a slightly stronger version of condition (2.8).
Definition 2.2. Let I ⊂ R be a fixed interval. A Lipschitz
continuous function M : I → R L is a monotone Maxwellian function (MMF)
for (1.1) and with respect to the interval I if conditions (1.5) are verified and if,
moreover,
M i is a monotone (nondecreasing) function on I, for every i # {1, . , L}.
This condition was used in [54] to show the convergence of approximation (1.3),
(1.4) at the continuous level and in the multidimensional scalar case. In the following
section we present some examples of different approximations according to the choices
of the matrices of velocities # j and the local Maxwellian function M . We discuss the
issue of entropy and we investigate both stability and (only for K=1) monotonicity
conditions.
3. Examples of discrete kinetic approximations. In order to construct the
system (1.3) one must find P , M and # such that the consistency relations (1.5) are
satisfied. The first three examples presented here have a block structure. Keeping
notations of the introduction we take blocks
I K , the identity matrix in R K . Each matrix # d consists of N diagonal blocks of
size K × K:
I K , # (d)
With this formalism, (1.3) can be written as
and f #
Considering that A # setting
we have the following equivalent expression for (2.8):
In the case when the M #
are symmetric we obtain
System (3.1) enters the framework proposed by Bouchut [6]. Suppose that there
exists at least one smooth strictly convex entropy for (1.1), and that the Maxwellian
functions are of form
nd A d (u),
where a n , b nd are scalar. Denote #(M #
n (u)) the set of eigenvalues of M #
under some technical assumptions, it is shown in [6] that the M #
n (u) are diagonalizable.
Moreover if
then (3.1) is compatible with any convex entropy # of (1.1): there exists a kinetic
entropy for (3.1) associated with # and Lax entropy inequalities are satisfied in the
hydrodynamic limit ε → 0. As is well known [36, 10] these inequalities characterize the
admissible weak solutions of (1.1). Moreover, in this case, (2.6) is parabolic. More
general results for Maxwellian functions not in the form (3.4) can be found in [51].
Remark that for a general hyperbolic system of conservation laws neither (3.5) nor
imply convergence of the kinetic approximation. In the scalar case it is always
possible to write (1.3) under the form (3.1), and (3.5) is the monotonicity condition
(2.9). Under this condition convergence holds [54]. Moreover one can use the fact
that # N
and the discrete Jensen inequality to prove directly the following
proposition [54].
Proposition 3.1. Let suppose that M is an MMF. Then (2.8) is
In the general case, denote by B the KD × KD matrix defined by the blocks B dj ,
d, j = 1, . . . , D. The stability condition means
positively defined
for all u #
where Ω is some fixed rectangle of R K .
Example 1. The diagonal relaxation method (DRM). We first consider the minimal
case 1. The system (1.5) is then a square linear one. We take # > 0
and
Let us denote by e j the canonical unit vector of R D . The characteristic speeds of
system (1.3) are -#e 1 , . , -#eD , # D
. The Maxwellian function is given by
means of the following
A d (u) # /(D
Therefore the result of [6] applies to this model as described above.
For a one-dimensional system of conservation laws this formulation coincides with
the relaxation approximation of [30]. In fact we can set
Then we have
We recall that in this case and for convergence to the unique entropy solution
was proved in [53], and convergence of the associated numerical relaxed schemes was
done in [2]. For other convergence and error estimate results on this model, see also
[33, 67]. However, in several space dimensions, there is no diagonal form for the
formulation of [30], as already pointed out in [54].
For a one-dimensional system the stability condition (2.8) is here:
for all # R K ,
which, if A # (u) is symmetric with spectral ray #(u), gives
#(u).
This coincides with Bouchut's condition (3.5) so that here both (3.5) and (2.8) are
equivalent.
In two space dimensions, (2.8) becomes
# .
In fact in the scalar case we are able to prove that (2.8) and (2.9) coincide.
Proposition 3.2. Let us suppose that K = 1, D = 2, and that the choice of P , # 1 , # 2 ,
and M is as above. Then the stability condition (2.8) and the monotonicity condition
coincide and can be written as
Proof. Here 3B is symmetric and its characteristic polynomial is of form
where A and C are defined by the following relations:
Both eigenvalues of B are positive if and only if A and C are positive.
A is positive if and only if #
8 .
Now we remark that
Six cases have to be studied for the values of A # 1 and A # 2 . As A # 1 and A # 2 play symmetric
roles in the formulas, we can suppose that A # 1 # A # 2 and study the three following
cases:
1.
2.
3.
Let us study the first case. C is positive if and only if
It remains to determine the position of 0: two cases are under consideration.
and one can see that
Consequently the stability condition is satisfied if and only if
which is easily seen to be the monotonicity condition (2.9).
so that we again recover the
same condition.
Cases 2 and 3 are similar and we omit the proof.
Example 2. Flux decomposition method (FDM). In this example, in view of a
more accurate approximation, we take a greater number of equations,
and, following an idea due to Brenier [8], we decompose the Jacobians of the fluxes
into positive and negative parts. Denoting by B d the diagonal matrix of eigenvalues of
A # d and by Q d the associated matrix of the right eigenvectors we set
- .
Then we can define
with A d
for some # d > 0 and
In the scalar convex case and for an appropriate (first-order) discretization, this choice
corresponds, in the relaxation limit ε → 0, to the Engquist-Osher numerical scheme
[8]. For a one-dimensional system the stability condition (2.8) is here:
for all # R K , (|A #I K - |A # 0
so that in the one-dimensional scalar case it coincides with the monotonicity condition.
The two-dimensional case condition (2.8) reads
(R K
and
# .
For K=1 this matrix is positive if and only if
and this is exactly the monotonicity condition.
Proposition 3.3. Let us suppose that K=1 and that the choice of P ,
and M is as above. Then the stability condition (2.8) and the monotonicity condition
coincide and can be written as (3.14).
Concerning the entropy properties of this model we refer to [51].
Example 3. Orthogonal velocities method (OVM). This example works with any
number of blocks N # D+ 1. We take the velocities such that
and
This means that we choose an orthogonal family of D vectors (# 1d , . , #Nd
1, . , D) in the orthogonal space of the vector . The corresponding
Maxwellian function is now given by
A d (u)
a 2
d
where a 2
id . Here again the Maxwellian functions are of type (3.4) and the
result of [6] applies as described in the preceding section.
The one-dimensional case. Let us examine the stability condition in one space
dimension for this approximation. We take # > 0 and
# I K # 1#n#N
diag
Then
We obtain the following expression for B:
I K -
For instance if is symmetric with spectral ray # 2 (u), the stability condition
reads
However Bouchut's condition (3.5) now reads
This approximation therefore gives an example where, even in the scalar case, the
monotonicity condition is strictly stronger than the stability one.
The two-dimensional case. In two space dimensions we take length and direction
varying velocities. Fix J, N # 1, # > 0. Set
diag # n diag # cos( i#
diag # n diag # sin( i#
Here we have J . It is easy to see that (3.15) and (3.16) are satisfied and that
a 2
For the Maxwellian functions are
Proposition 3.4. Let us suppose that K = 1 and that the choice of P , # 1 , # 2 and
M is as above. We denote # the argument of
sin #. If
then the monotonicity condition (2.9) is satisfied. The stability condition (2.8) is
satisfied as soon as
Proof. The first part of the proposition is immediate. Let us examine the stability
condition (2.8). We remark first that in addition to (3.15), (3.16) we also have
sin 2 i#
cos 3 i#
sin 3 i#
Consequently
for all j, d # {1, 2}, P# d
jd
a 2
d
N .
Denoting
-A
-A #2 2
# .
This matrix is positive if and only if
which ends the proof.
Example 4. Suliciu's method. Let us conclude this presentation by giving an
example which is a bit different from those presented above. For K=2, D=1, let us
consider a system of the form
Take
and
# .
For A 2 we have a p-system, and in this case the present approximation was
first introduced by Suliciu [65, 66] to study instability problems in phase transitions
described by elastic or viscoelastic constitutive equations. In this case the Chapman-Enskog
analysis gives the stability condition
Some preliminary investigations on this model can be found in [26, 48]. More recent
results have been found in [28, 27, 39, 69]. In particular see [69] for an entropy
condition.
In fact, this model enters the preceding block structure (3.1) by equivalence with
the following one:
# .
The Maxwellian functions M are of type (3.4) so that the result of [6] applies. For
an easy calculation shows that both conditions (2.8) and (3.5) coincide
with (3.26).
4. Discrete kinetic schemes. In this section we construct numerical schemes
for the relaxing semilinear problem (1.3), (1.4) associated with (1.1), (1.2). Of course
a lot of numerical schemes are available for this problem including those presented in
[30]. We present here a finite volume scheme on structured mesh based on a splitting
method.
The space time domain R D
discretized by a rectangular grid:
I # , [0, T
d be the canonical d th vector in R D .
As usual we denote by x# the center of I #x #,d the length of I # in the direction
d, #t
#x #,d ,
#x #,d
#t n .
Finally we set
f #,n
f #,n
f #,n
If
then f #
0 is approximated by
f #,0
We use for u the same notation as for f here above. Let us recall that
and
System (1.3) is split into a linear diagonal hyperbolic part and an ordinary differential
system. For a given f #,n
# , the function f #,n+1/2
# is an exact or approximate solution at
time t n+1 of the problem
# .
As the system is diagonal, we may consider each equation separately. We suppose
that the scheme can be put in conservation form:
f #,n+1/2
#x #,d
#,n
ed - #,n
ed # ,
where
#,n
ed
#-k1+ed ,1 , . , f #,n
#-kL+ed ,L , . , f #,n
#,n
ed ,l # 1#l#L
Here k l # Z D and
# l,d (g, . ,
In the following, the scheme on the linear part will be referred to as homogeneous
scheme (HS) and the associated evolution operator will be denoted by H# :
f #,n+1/2
# .
To take into account the contribution of the singular perturbation term on the
right-hand side, we solve on [t n , t n+1 ] the ordinary differential system
with initial data
for all # Z D . Using (1.5) we obtain
so that the solution of (4.9) with data F (t n at t n can be explicitly obtained as
Hence
f #,n+1
where u is defined by
Note that
Therefore we have constructed a wide family of numerical schemes for the semilinear
system (1.3), which differ by the choice of the HS. In the scalar case (K=1), thanks to
the monotonicity properties of the interaction, we show in the following sections that
in fact the properties of each scheme are roughly speaking the same as those of HS and
the estimates are uniform in #. We shall often refer to these properties, which are now
classical; see [23]: preservation of extrema, monotonicity, total variation diminishing
(TVD), and L 1 -contraction. In particular recall that monotonicity implies all the
other properties and the TVD property implies the preservation of extrema for initial
data in L 1 (R D , R) # L # (R D , R) #BV(R D , R). In the following, the numerical scheme
given by (4.1), (4.2), (4.7), (4.13) will be referred to as discrete kinetic scheme (DKS).
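Since the explicit formulas (4.12), (4.13) are not legible in this copy, here is a hedged sketch of the relaxation half step they describe: because Pf is constant along (4.9), every component relaxes exponentially towards the Maxwellian evaluated at u = Pf computed from the transported state. The exponential form below is the standard exact solution of such a BGK-type ordinary differential system and is assumed rather than copied from the paper; the names are illustrative.

```python
import numpy as np

def relaxation_step(f_half, P, M, dt, eps):
    """Exact solution of df/dt = (M(Pf) - f)/eps over one step dt,
    starting from f_half (shape: L x number_of_cells).  P is the K x L
    projection matrix, M maps u (K components per cell) to the L Maxwellian
    components; u = P f is conserved during this step."""
    u = P @ f_half
    w = np.exp(-dt / eps)
    return w * f_half + (1.0 - w) * M(u)
```

In the relaxed limit ε → 0 the weight e^(−Δt/ε) vanishes and the step reduces to the projection f = M(u), which is what produces the relaxed scheme discussed next.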
When # 0 in DKS, we obtain the relaxed limit of the scheme:
# .
This last scheme can be written in conservation form:
#x #,d
ed -A n
ed # .
The numerical fluxes are defined by
A n
ed
ed
with
ed
#-k l +ed ), . , M l (u n
#+k l
In the following section, for the scalar case K=1, we specify in what sense this relaxed
scheme is the limit of DKS. Moreover all the estimates for DKS are uniform and pass
to the limit, so that we obtain strong convergence for the limit scheme.
5. Convergence of discrete kinetic schemes for multidimensional scalar
conservation laws. In this section we use monotonicity to prove rigorous convergence
results for DKS and the associated relaxed scheme. We consider a scalar conservation
law (K=1) and {u #
a family of initial data for (1.1) such that
(R D , R) # L # (R D , R) # BV(R D , R) and
ld be such that (1.5) is satisfied.
Throughout this section we suppose without loss of generality that
and for sake of simplicity we consider a uniform mesh
for all # Z D #x #,d = #x d .
Moreover, as shown in section 3, M and the # d can be found satisfying
is an MMF on [-# ].
The two following relations will be useful in that which follows:
for all
and
for all u,
Therefore we have for all # > 0
#f #,0
#f #,0
#f #,0
#,l # ,
TV(f #,0
TV(f #,0
#.
5.1. The supremum norm bound.
Proposition 5.1. Suppose (H1), (H2) are satisfied and that HS preserves the
extrema. Then the scheme DKS is L # stable and there holds
for all # > 0 for all n # 0, #f #,n
# .
Moreover, if HS is monotone, then the same is true for DKS.
Proof.
Using (1.5) and (4.12), we have
S(t,
By (H2) we have also
for all # Z D , for all l # {1, . , L}, M l (-# f #,0
Suppose that
for all # Z D , for all l # {1, . , L}, M l (-# f #,n
Since HS preserves the extrema, we have
l (-# f #,n+1/2
for every # Z D and l # {1, . , L}. Then
for all # Z D , -# u #,n+1
# .
Using expression (4.12) we obtain
l (-# S l # t, t n , f #,n+1/2
for every t # [t n , t n+1 ], # Z D and l # {1, . , L}. Recalling (5.2) we obtain
(5.4). To conclude we remark that if M is an MMF on the interval [-# ],
then the RHS of (4.9) is a quasi-monotone mapping on
for all F # [M(-#
We can then apply the results of [25]: the flow given by (4.9) preserves the order.
5.2. BV and L 1 estimates. The following lemma follows easily from the monotonicity
properties of the interaction.
Lemma 5.2. Let F 0 and G 0 be two initial conditions for system (4.9) with corresponding
solutions F (t), G(t). If F 0 , G
for all t # 0
|F l (t) -G l (t)| #
|F 0,l -G 0,l | .
As a consequence we obtain another important result.
Proposition 5.3. Suppose that (H1), (H2) are satisfied for {u #
(1) If HS is TVD then DKS is TVD.
(2) If HS is L 1 contracting then DKS is L 1 contracting and there holds
#f #,n+1
In order to prove the equicontinuity property and the boundedness of the interaction
we need to estimate the RHS of (1.3).
Lemma 5.4. Suppose that (H1), (H2) are satisfied. Suppose HS is TVD and
can be put in conservation form (4.7) with a Lipschitz continuous flux #. Then there
exists a constant C not depending on # such that
exp #t n
Proof. From (4.13) and the fact that u #,n+
# we have
exp #t n
Moreover, by (4.7),
ld
#x d
#,n
ed ,l - #,n
ed ,l # .
Hence for 1 # l # L
|M l (u #,n+ 1# ) -f #,n+ 1#,l | # |M l (u #,n
#,l | +C
#x d
sup l |# ld ||#,n
ed ,l -#,n
ed ,l | .
Using the fact that the flux is Lipschitz continuous and that for
we have
# |w #+ed - w# | ,
we obtain the desired inequality.
Proposition 5.5. Suppose that (H1), (H2) are satisfied. Suppose HS is TVD.
Then there exists a constant C not depending on # such that
for all t # [0, T
for all t, t # [0, T
Proof. Inequality (5.8) is immediate by (5.7), thanks to a recursive argument.
Let us now prove (5.9). As the numerical flux of HS is Lipschitz continuous, as in the
proof of Lemma 5.4, we obtain
#f #,n+ 1# - f #,n
and
#f #,n+1
As M(u #,0
# we have, by a recursive argument,
#f #,n+1
The above estimates allow us to prove the convergence of our numerical schemes.
First we have convergence towards the unique solution of (1.3), (1.4) when # is fixed.
Theorem 5.6. Let T > 0, # > 0, suppose that (H1), (H2) are satisfied for
Suppose that HS
is TVD. For any T > 0, let #t
#x be constant. As #t # 0 the sequence f #
# converges in
to the unique solution f # of (1.3), (1.4), f #
sup
sup
As a corollary we obtain global existence of the solution of (1.3), (1.4) (see [54]
for a direct proof).
Proof of Theorem 5.6. We follow exactly the method of [20] and just give a sketch
of the proof: for all t # 0, {f #
is bounded in L 1
# L # . Moreover, by
Proposition 5.3, we can apply the Fréchet-Kolmogorov theorem and obtain a relatively
compact set of L 1
loc . Using the equicontinuity property (5.9) as in the proof of the Ascoli-
Arzelà theorem we obtain convergence in L # (0, almost everywhere,
and the limit is a solution of the problem by the Lax-Wendroff theorem. Estimates
(5.10), (5.11), (5.12) follow from (5.6), (5.8), (5.9).
5.3. The relaxed limit of the scheme. In this section we are interested in
the behavior of our numerical schemes as the parameter # tends to zero. The above
estimates imply boundedness and TVD properties for the relaxed discrete kinetic
schemes. We need one further assumption on the convergence of the initial data:
(H3) the sequence {u #
converges towards a function u 0 in L 1 .
As a consequence
and f #,0 converges towards f As above we set and by (1.5)
this notation is compatible with what happens at
Theorem 5.7. Let T > 0 and suppose that (H1), (H2), and (H3) are satisfied
and HS is TVD. As # 0, f #
# converges in L # ((0, T ); L 1
loc (R) L ) to a limit f # which
satisfies
#x #,d
ed -A n
ed # ,
where the numerical fluxes are defined by
A n
ed
ld # n
ed ,l
(5.
and
ed
#-k l +ed ), . , M l (u n
#+k l
The resulting numerical scheme is TVD and converges to a weak solution of (1.1),
(1.2). Moreover, if HS is monotone, the limit scheme is also monotone and converging
to the unique entropy solution of (1.1), (1.2).
Remark 5.8. (5.17)-(5.19) are the formulas (4.16)-(4.18) obtained formally at
the end of section 4.
Proof. L 1 stability and the TVD property imply that for n # {0, . , N}, {f #,n
0} is bounded in L 1
# BV, so there exists a sequence # k # 0 such that
in L 1
loc and f # k ,n
# for every #. Consequently
in L # (0,
loc ). Estimates (5.14), (5.15), (5.16), monotonicity, and TVD properties
are immediate consequences of Propositions 5.1-5.5. Moreover the resulting scheme
can be written as (5.16)-(5.19). Thus f # is unique and the whole sequence converges.
The consistency with the conservation law (1.1) is the consequence of the consistency
of HS with the linear part of (1.3).
Therefore we can use the same arguments as in the proof of Theorem 5.6 to obtain
convergence and, when HS is monotone, the monotonicity property ensures that the
limit is the unique entropy solution of (1.1), (1.2).
6. Numerical experiments.
6.1. The numerical schemes. System (1.3) is split into a linear diagonal hyperbolic
part and an ordinary differential system. For given f #,n
# is an
approximate solution at time t n+1 of the problem
ld # xd f
# .
As the system is diagonal, we may consider each equation separately so that we have
to approximate the scalar problem
where the # d are real and w n
# is a piecewise constant function given in L 1 (R D ) ∩ L ∞
(R D ) ∩ BV (R D ). We present here two methods. Both are constructed on a Cartesian
grid; see notations in section 4. The first is a straightforward generalization of the
upwind scheme: one solves exactly (6.3) on [t n , t n+1 ], obtaining a piecewise constant
function -
w# . w n+1
# is then calculated by taking the average of -
w# on each cell. We
obtain the following explicit formulations:
For D=1,
where
For D=2, we set
.
For D=3, we set
i-u,j-v,k-w
i-u,j-v,k
i-u,j,k .
A second order MUSCL type method can also be applied, generalizing the one-dimensional
schemes, by the following steps:
(1) given the piecewise constant function w n
# we construct a piecewise linear
function -
#,d for x #]x #- 1, x #+ 1[ ;
(2) we solve on [t n , t n+1 ] the linear system (6.2) exactly with initial data:
(3) we compute cell average of the resulting solution to obtain w n+1 .
The method depends on the choice of # n
# . For example we can choose
#x#
#+ed - w n
#-ed
d (w n
#-ed , w n
#+ed
where the minmod function is defined by
and e
This choice corresponds to a linear interpolation of the piecewise constant function
# on the neighboring cells of I # with a slope limiter. The resulting formulas for
are, respectively, (6.4), (6.5) to which one adds the following correction
terms:
For D=1,
For D=2 and
A similar formula holds for D=3.
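A sketch of the resulting MUSCL step for a single transported component, w_t + λ w_x = 0 on a periodic grid, is given below. The precise arguments of the minmod in the slope formula above are garbled in this copy, so the limited slope shown here (minmod of the two one-sided differences) is one common choice and only illustrative; exact transport of the piecewise linear reconstruction followed by cell averaging yields the upwind scheme plus the announced correction terms.

```python
import numpy as np

def minmod(a, b):
    """Return 0 where a and b have opposite signs, otherwise the one of
    smaller modulus (standard minmod limiter)."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_transport(w, lam, dt, dx):
    """One second order step for w_t + lam*w_x = 0 (steps (1)-(3) above),
    assuming the CFL condition |lam|*dt/dx <= 1 and periodic boundaries."""
    nu = lam * dt / dx
    sigma = minmod((w - np.roll(w, 1)) / dx, (np.roll(w, -1) - w) / dx)
    if lam >= 0.0:
        return (w - nu * (w - np.roll(w, 1))
                - 0.5 * nu * (1.0 - nu) * dx * (sigma - np.roll(sigma, 1)))
    return (w - nu * (np.roll(w, -1) - w)
            - 0.5 * abs(nu) * (1.0 - abs(nu)) * dx * (np.roll(sigma, -1) - sigma))
```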
Let us write down the one-dimensional numerical flux for the relaxed scheme
A n
l,# l <0
l,# l >0
Note that each Maxwellian function appears individually in the computation of the
slopes #.
6.2. Application to some models. Let us apply the above formulas to some
of the approximations given in section 3. One has to write down (6.4), (6.8), (6.5),
for each of the N blocks of K equations
and to apply (5.17)-(5.19).
For the approximation DRM in one space dimension, actually the Jin-Xin ap-
proximation, (6.4) gives the following scheme:
#x
#t
#x
As already pointed out in [8] and [2], in the scalar case, if λΔt/Δx = 1 one recovers the
Lax-Friedrichs scheme.
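For a scalar one-dimensional law u_t + A(u)_x = 0, the displayed scheme (6.11) is not legible in this copy; the sketch below shows the classical first-order relaxed Jin-Xin update that the surrounding text describes (central flux plus λ-weighted numerical viscosity), under the assumption that this is indeed the intended formula. With λΔt/Δx = 1 it reduces to the Lax-Friedrichs scheme, as remarked above.

```python
import numpy as np

def relaxed_drm_step(u, A, lam, dt, dx):
    """One first-order relaxed DRM (Jin-Xin) step for u_t + A(u)_x = 0
    on a periodic grid; lam must satisfy the subcharacteristic condition."""
    up, um = np.roll(u, -1), np.roll(u, 1)        # u_{i+1}, u_{i-1}
    return (u - 0.5 * dt / dx * (A(up) - A(um))
              + 0.5 * lam * dt / dx * (up - 2.0 * u + um))
```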
Actually, at first order the approximation OVM of Example 3 in section 3 may
be thought of as a generalization of the relaxation approximation. A straightforward
calculation gives the following scheme associated with (6.4):
#x
#t
#x
i-1,
where
if N is even, and
if N is odd. The viscosity coefficient is #N #t
#x and it is easy to verify that for a scalar
linear conservation law the monotonicity condition, as well as the stability condition,
ensures L 2 stability.
The model OVM is an interesting case where the monotonicity condition (3.19) is
strictly stronger than the stability condition (3.18) derived from the Chapman-Enskog
analysis. Actually if we just impose (3.18) the TVD property is lost. Nevertheless
the numerical experiments are satisfactory (see also Remark 6.1 below). Another point
is that (3.19) is not the minimal condition for (6.12) to be monotone. It is sufficient
to impose the intermediate condition
if N # 3 is odd. With this condition (6.12) can be also viewed as scheme (6.11) where
# is replaced by #N # |A # (u)|.
Refer now to the second example of section 3, the FDM. The numerical scheme
corresponding to the discretization (6.4) is given by
Remark that # does not appear explicitly but is needed to ensure that the CFL condition
is satisfied. As already pointed out in [8], in the convex scalar (one-dimensional)
case we recover the Engquist-Osher's scheme. Our construction improves this scheme
(in the general case) by the correction terms (6.8) of the MUSCL discretization.
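For a convex scalar flux A with minimum at ω, the positive/negative splitting used by the FDM and the resulting Engquist-Osher flux mentioned above take the simple form sketched below; the convexity assumption and the names are illustrative, and the sketch is not claimed to reproduce the paper's own formulas.

```python
import numpy as np

def engquist_osher_flux(ul, ur, A, omega):
    """Engquist-Osher numerical flux for a convex scalar flux A with
    minimum at omega: A^+(ul) + A^-(ur), written so that the additive
    constant cancels."""
    return A(np.maximum(ul, omega)) + A(np.minimum(ur, omega)) - A(omega)

def fdm_first_order_step(u, A, omega, dt, dx):
    """Conservative first-order update with the flux above (periodic grid);
    illustrative counterpart of the relaxed FDM scheme described above."""
    F_right = engquist_osher_flux(u, np.roll(u, -1), A, omega)   # flux at i+1/2
    F_left = np.roll(F_right, 1)                                  # flux at i-1/2
    return u - dt / dx * (F_right - F_left)
```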
As far as first-order approximations are concerned, our method recovers
some known schemes. This is not true for the second-order MUSCL discretizations:
formula (6.10) shows that the computation of the slopes # involves the Maxwellian
functions. Therefore we do not recover in a direct way any known scheme. However
our work shares the spirit of the central, Riemann solver free schemes of [56, 34] and
a closer comparison should be useful.
We write the two-dimensional schemes only for the FDM, which has been found to be the most efficient. The update is the two-dimensional analogue of the above flux-splitting formula, with ratios Δt/Δx and Δt/Δy in the two directions, plus the second-order correction terms. Here again λ does not appear explicitly.
6.3. The numerical tests. In this section we perform some numerical tests
with our schemes. The systems considered are a simple p-system, the one-dimensional
Euler equations, a two-dimensional scalar conservation law, and the two-dimensional
Euler equations.
As we have observed that the relaxed numerical solution is in all cases better than the numerical solution of (1.3), we present here only computations performed with the relaxed schemes. The HS is always chosen to be the MUSCL-type one described above.
(1) The p-system. We consider the p-system of gas dynamics, with the same choice of pressure law in all our computations. Exact solutions are known for this system; see, for example, [63]. We compute the 1-shock, 2-rarefaction solution of a Riemann problem whose data take the value 0.4 for x > 0.
Here we discretize the different approximations: DRM, 4 velocities, and FDM, 6 velocities, and we compute the L^1 error between exact and calculated solutions. We have also computed the solution for Example 4 in section 3, Suliciu's method, but there is almost no difference with that of DRM. For the same system we also test the approximation OVM, with 16 and 26 velocities (N=8 and N=13, respectively).
Let us recall that this system has the eigenvalues ±λ(u). At each time step we take the minimal value of λ provided by the stability condition (2.8), so that the stability condition (3.18) for the OVM is satisfied. The computation has been performed on the space interval [-2, 2] up to a maximal time equal to 1. The space step has been kept constant and equal to 0.01 and the ratio Δt/Δx varies between 1 and 0.1. The results are given in Table 6.1.
Remark 6.1. In view of (6.12) one may think that for Δt small it is possible to take λ smaller than λ(u) · 3(N-1)/(N+1). We tried to perform the computation with λ = λ(u). For ratios Δt/Δx > 0.5 we observed oscillations around discontinuities and rarefactions, but when Δt/Δx ≤ 0.5 the results are correct and even better than those obtained with the right condition.
Table 6.1. L^1 error between exact and numerical solution for the four approximations tested (DRM, FDM, and OVM with 16 and 26 velocities); the first column is the ratio Δt/Δx:
1.0  8.35E-03  2.984E-02  6.302E-03  6.122E-03
0.7  7.34E-03  7.262E-03  6.045E-03  6.035E-03
0.4  6.85E-03  5.281E-03  6.019E-03  5.925E-03
Then we observe the evolution of the L^1 error when the space step varies and the ratio Δt/Δx is kept constant and equal to 0.5. Here the convergence speed is defined by α_k = log(e_k / e_{k+1}) / log(Δx_k / Δx_{k+1}), where e_k is the L^1 error. The results are the following.
Table 6.2. L^1 error between exact and numerical solution as the space step varies; the first column is Δx, and measured convergence speeds are shown where available:
2.00E-02  1.226E-02  .87  9.854E-03  .94  1.058E-02  1.031E-02
5.00E-03  4.035E-03  .84  3.177E-03  .84  3.314E-03  3.287E-03
2.50E-03  2.237E-03  .85  1.769E-03  .84  1.777E-03  1.760E-03
Tables 6.1 and 6.2 show that the relaxation model is the worst. The other three seem to be comparable, but the computation is faster for the FDM, since there are fewer equations in the model and also because the value of λ is smaller, allowing a greater time step to reach the same value of t. Actually, for the same value of Δt the results are better for the FDM. We complete these results with the graphic representation of the computed and exact solutions (Figures 6.1-6.2); we represent the v component of the solution, the u component being similar. It appears that the OVM with 26 velocities is nearly the same as the relaxation approximation. The scheme
FDM improves the approximation of the one-shock but not that of the rarefaction,
while the OVM improves both slightly.
(2) One-dimensional Euler system. We now consider the one-dimensional Euler system
∂_t ρ + ∂_x (ρv) = 0,  ∂_t (ρv) + ∂_x (ρv^2 + p) = 0,  ∂_t E + ∂_x (v(E + p)) = 0,
where ρ, v, p and E are, respectively, the density, velocity, pressure, and total energy of a perfect gas: E = p/(γ - 1) + ρv^2/2.
Fig. 6.1. p-system: diagonal relaxation method (DRM) and orthogonal velocities method (OVM).
Fig. 6.2. p-system: flux decomposition method (FDM) and orthogonal velocities method (OVM).
We have tested our schemes with a Sod shock tube. Here the solution consists of three constant states connected by a one-rarefaction, a two-contact discontinuity, and a three-shock. We have analyzed the L^1 error for the same three models as for the p-system. In Table 6.3, the ratio Δt/Δx is defined as above.
Table 6.3. L^1 error between exact and numerical solution.
Fig. 6.3. One-dimensional Euler system: density and velocity with flux decomposition method (FDM).
Fig. 6.4. One-dimensional Euler system: pressure with flux decomposition method (FDM).
In view of this table, it appears that the approximation FDM and the 39-velocities model of the OVM give similar results. Actually the graphic representation shows an oscillation around the three-shock for the 39-velocities model while the first is stable (see Figures 6.3, 6.4, 6.5(a)). In Figure 6.5(b) we have suppressed these oscillations by taking a smaller ratio Δt/Δx. All the computations show that the contact discontinuity is
not well approximated. This is a general feature of kinetic schemes (see, for example,
[24]). A smaller time step does not improve the contact discontinuity very much but
a smaller space step does, as shown in Figure 6.6.
(3) A two-dimensional scalar conservation law. In order to compare exact solutions with numerical computations we also consider a one-dimensional conservation law with one-dimensional initial data v_0. This problem is solved on a two-dimensional cartesian grid not parallel to the axis: we solve the two-dimensional equation obtained by rotating the one-dimensional problem by an angle θ ∈ [0, π/2[, with initial data u(x, y, 0) = v_0(x cos θ + y sin θ). Writing (X, Y) for the coordinates rotated by the angle θ, we have u(t, x, y) = v(t, X), with v the solution of the one-dimensional problem.
Fig. 6.5. One-dimensional Euler system: density with orthogonal velocities method (OVM), cases (a) and (b).
Fig. 6.6. One-dimensional Euler system: density with flux decomposition method (FDM), two different space steps.
Here we take θ = π/5 and a rectangular mesh such that Δy = 0.8 Δx. The CFL number is 0.8 for x and y. We consider Burgers' equation, with a shock wave as initial data v_0.
As the relaxation model has been proved to be worse, we concentrate our attention on the second-order model FDM, here with five velocities, and on the model OVM. Recall that for these last models one-dimensional tests show that it is not very efficient to take a large number J of moduli for the velocities. The two-dimensional computations confirm this analysis. Figure 6.7 represents the solution at a fixed time: in the first plot we compare different values of N, and in the second different values of J. It appears that the best choice for this model is to take the minimal values of N and J. Figure 6.8 represents the same solution, but here we compare the model FDM to the one with more velocities. Here again the five-velocities model is the best one. We want to point out that the computation durations are nearly the same in both cases.
Fig. 6.7. Two-dimensional scalar conservation law.
Fig. 6.8. Two-dimensional scalar conservation law.
(4) Two-dimensional Euler system. We end this section with the two-dimensional Euler system
∂_t ρ + div(ρv) = 0,  ∂_t (ρv) + div(ρv ⊗ v) + ∇p = 0,  ∂_t E + div(v(E + p)) = 0,
where ρ, v, p and E are, respectively, the density, velocity, pressure, and total energy of a perfect gas: E = p/(γ - 1) + ρ|v|^2/2. The initial data are chosen in order to represent a "double Sod tube".
We have chosen here the approximation FDM, with 20 velocities. The calculation has been performed with a rectangular mesh, and the CFL condition has been fixed to 0.4 in x and y. Figures 6.9 and 6.10 represent the isolines of, respectively, the density, the first and second components of the velocity, and the energy at time 0.16.
Fig. 6.9. Two-dimensional Euler system: density and velocity x-component.
Fig. 6.10. Two-dimensional Euler system: velocity y-component and energy.
Acknowledgment. The authors would like to thank Vuk Milisic for performing
the numerical tests for the two-dimensional Euler system and for many useful
discussions.
--R
An extended thermodynamic framework for the hydrodynamic modeling of semicon- ductors
Convergence of relaxation schemes for conservation laws
A model for collision processes in gases.
Construction of BGK models with a family of kinetic entropies for a given system of conservation laws
A kinetic formulation for multi-branch entropy solutions of scalar conservation laws
The unique limit of the Glimm scheme
The fluid-dynamical limit of nonlinear model of Boltzmann equations
The Boltzmann Equation and Its Applications
The Mathematical Theory of Dilute Gases
Hyperbolic conservation laws with sti
Zero relaxation and dissipation limits for hyperbolic conservation laws
Convergence of the relaxation approximation to a scalar nonlinear hyperbolic equation arising in chromatography.
Relaxation of energy and approximate Riemann solvers for general pressure laws in fluid dynamics
Numerical passage from kinetic to fluid equations
Monotone di
Special issue on lattice gas methods for PDEs: Theory
A kinetic construction of global solutions of first order quasilinear equations
Hyperbolic Systems of Conservation Laws
Numerical Approximation of Hyperbolic Systems of Conservation Laws
Weakly coupled systems of quasilinear hyperbolic equations
Stability of traveling wave solutions for a rate-type viscoelastic system
Zero relaxation limit to centered rarefaction waves for a rate-type viscoelastic system
Nonlinear stability of rarefaction waves for a rate-type viscoelastic system
Kinetic formulation of the chromatography and some other hyperbolic systems
The relaxation schemes for systems of conservation laws in arbitrary space dimensions
Convergence and error estimates of relaxation schemes for multidimensional conservation laws
Contractive relaxation systems and the scalar multidimensional conservation law
New high-resolution central schemes for nonlinear conservation laws and convection-diffusion equations
Convergence of a relaxation scheme for n-n hyperbolic systems of conservation laws
Shock waves and entropy
A linear hyperbolic system with a sti
Moment closure hierarchies for kinetic theories
Zero Relaxation Limit to Piecewise Smooth Solutions for a Rate-Type Viscoelastic System in the Presence of Shocks
Existence and stability of entropy solutions for the hyperbolic systems of isentropic gas dynamics in Eulerian and Lagrangian coordinates
A kinetic formulation of multidimensional scalar conservation laws and related equations
Kinetic formulation of the isentropic gas dynamics and p-systems
Stability of a relaxation model with a nonconvex flux
Hyperbolic conservation laws with relaxation
Positive schemes for solving multi-dimensional hyperbolic systems of conservation laws
Asymptotic stability of planar rarefaction waves for the relaxation approximation of conservation laws in several dimensions
BV solutions and relaxation limit for a model in viscoelasticity
Linear stability of shock profiles for a rate-type viscoelastic system with relaxation
Nonlinear stability of shock fronts for a relaxation system in several space dimensions
stability of travelling waves for a hyperbolic system with relaxation
Springer Tracts Nat.
Convergence to equilibrium for the relaxation approximations of conservation laws
A discrete kinetic approximation of entropy solutions to multidimensional scalar conservation laws
Recent results on hyperbolic relaxation problems
Nonoscillatory central di
Numerical methods for hyperbolic conservation laws with sti
Numerical methods for hyperbolic conservation laws with sti
A kinetic equation with kinetic entropy functions for scalar conservation laws
Discrete velocity models of the Boltzmann equation: A survey on the mathematical aspects of the theory
Recent advances in lattice Boltzmann computing
Relaxation semi-linéaire et cinétique des systèmes de lois de conservation
On modelling phase transitions by means of rate-type constitutive equations
Some stability-instability problems in phase transitions modelled by piecewise linear elastic or viscoelastic constitutive equations
Pointwise error estimates for relaxation approximations to conservation laws
On the rate of convergence to equilibrium for a system of conservation laws with a relaxation term
Viscosity and relaxation approximations for hyperbolic systems of conservation laws
Convergence of relaxing schemes for conservation laws
Linear and Nonlinear Waves
The fluid dynamical limit of the Broadwell model of the nonlinear Boltzmann equation in the presence of shocks
Numerical Analysis of Relaxation Schemes for Scalar Conservation Laws
--TR
--CTR
Mapundi K. Banda, Variants of relaxed schemes and two-dimensional gas dynamics, Proceedings of the international conference on Computational methods in sciences and engineering, p.56-59, September 12-16, 2003, Kastoria, Greece
Mapundi Kondwani Banda, Variants of relaxed schemes and two-dimensional gas dynamics, Journal of Computational and Applied Mathematics, v.175 n.1, p.41-62, 1 March 2005
Ansgar Jüngel , Shaoqiang Tang, Numerical approximation of the viscous quantum hydrodynamic model for semiconductors, Applied Numerical Mathematics, v.56 n.7, p.899-915, July 2006
D. Aregba-Driollet , R. Natalini , S. Tang, Explicit diffusive kinetic schemes for nonlinear degenerate parabolic systems, Mathematics of Computation, v.73 n.245, p.63-94, January 2004
Ansgar Jngel , Shaoqiang Tang, A relaxation scheme for the hydrodynamic equations for semiconductors, Applied Numerical Mathematics, v.43 n.3, p.229-252, November 2002 | conservation laws;hyperbolic systems;kinetic schemes;BGK models;numerical convergence |
359822 | An architecture for more realistic conversational systems. | In this paper, we describe an architecture for conversational systems that enables human-like performance along several important dimensions. First, interpretation is incremental, multi-level, and involves both general and task- and domain-specific knowledge. Second, generation is also incremental, proceeds in parallel with interpretation, and accounts for phenomena such as turn-taking, grounding and interruptions. Finally, the overall behavior of the system in the task at hand is determined by the (incremental) results of interpretation, the persistent goals and obligations of the system, and exogenous events of which it becomes aware. As a practical matter, the architecture supports a separation of responsibilities that enhances portability to new tasks and domains. | INTRODUCTION
Our goal is to design and build systems that approach
human performance in conversational interaction. We limit
our study to practical dialogues: dialogues in which the conversants
are cooperatively pursuing specific goals or tasks.
Applications involving practical dialogues include planning
(e.g. designing a kitchen), information retrieval (e.g. finding
out the weather), customer service (e.g. booking an airline flight), advice-giving (e.g. helping assemble modular furniture) or crisis management (e.g. a 911 center). In fact, the
class of practical dialogues includes almost anything about
which people might want to interact with a computer.
TRIPS, The Rochester Interactive Planning System [6], is
an end-to-end system that can interact robustly and in near
real-time using spoken language and other modalities. It has
participated successfully in dialogues with untrained users
in several different simple problem solving domains. Our
experience building this system, however, revealed several
problems that motivated the current work.
1.1 Incrementality
Like most other dialogue systems that have been built,
TRIPS enforces strict turn taking between the user and
system, and processes each utterance sequentially through
three stages: interpretation-dialogue management-generation.
Unfortunately, these restrictions make the interaction unnatural and stilted and will ultimately interfere with the human's ability to focus on the problem itself rather than on making the interaction work. We want an architecture that allows a more natural form of interaction; this requires incremental understanding and generation with flexible turn-taking.
Here are several examples of human conversation that illustrate
some of the problems with processing in stages. All
examples are taken from a corpus collected in an emergency
management task set in Monroe County, NY [17]. Plus signs
(+) denote simultaneous speech, and "#" denotes silence.
First, in human-human conversation participants frequently
ground (confirm their understanding of) each other's contributions
using utterances such as "okay" and "mm-hm".
Clearly, incremental understanding and generation are required
if we are to capture this behavior. In the following
example, A acknowledges each item in B's answers about
locations where there are road outages.
Excerpt from Dialogue s16
A: can you give me the first uh # outage
B: okay
B: so Elmwood bridge
A: okay
B: um # Thurston road
A: mm-hm
B: Three at Brooks
A: mm-hm
B: and Four Ninety at the inner # loop
A: okay
Second, in human-human dialogues the responder frequently
acknowledges the initiator's utterance immediately after it
is completed and before they have performed the tasks they
need to do to fully respond. In the next excerpt, A asks
for problems other than road outages. B responds with an
immediate acknowledgment. Evidence of problem solving
activity is revealed by the user smacking their lips ("lipsmack") and silence, and then B starts to respond to the
request.
Excerpt from Dialogue s16
A: and what are the # other um # did you have
just beside road +1 outages +1
B: +1 okay +1 # um Three Eighty
Three and Brooks +2 # is +2 a # road out
# and an electric line down
A: +2 Brooks mm hm +2
A: okay
A sequential architecture, requiring interpretation and problem
solving to be complete before generation begins, cannot
produce this behavior in any principled way.
A third example involves interruptions, where the initiator
starts to speak again after the responder has formulated a
response and possibly started to produce it. In the example
starts to respond to A's initial statement but then
A continues speaking.
Excerpt from Dialogue s6
A: and he's going to pull the tree #
B: mm hm
A: done at # um # he's going
to be done at # in forty minutes #
We believe e#ective conversational systems are going to
have to be able to interact in these ways, which are perfectly
natural (in fact are the usual mode of operation) for humans.
It may be that machines will not duplicate human behavior
exactly, but they will realize the same conversational goals
using the communication modalities they have. Rather than
saying "uh-huh," for instance, the system might ground a
referring expression by highlighting it on a display. Note
also that the interruption example requires much more than
a "barge-in" capability. B needs to interpret A's second
utterance as a continuation of the first, and does not simply
abandon its goal of responding to the first. When B gets
the turn, B may decide to still respond in the same way, or
to modify its response to account for new information.
1.2 Initiative
Another reason TRIPS does not currently support completely
natural dialogue is that, like most other dialogue
systems, it is quite limited in the form of mixed-initiative
interaction it supports. It supports taking discourse-level
initiative (cf. [3]) for clarifications and corrections, but does
not allow shifting of task-level initiative during the inter-
action. The reason is that system behavior is driven by
the dialogue manager, which focuses on interpreting user
input. This means that the system's own independent goals
are deemphasized. The behavior of a conversational agent
should ideally be determined by three factors, not just one:
the interpretation of the last user utterance (if any), the sys-
tem's own persistent goals and obligations, and exogenous
events of which the agent becomes aware.
For instance, in the Monroe domain, one person often
chooses to ignore the other person's last utterance and leads
the conversation to discuss some other issue. Sometimes
they explicitly acknowledge the other's request and promise
to address it later, as in the following:
Excerpt from Dialogue s16
A: can you # +1 can +1 you go over the # the
thing +2 for me again +2
B: +2 yeah in one +2 minute
have to # clarify at the end here .
In other cases, they simply address some issue they apparently
think is more important than following the other's
lead, as in the following example where B does not address
A's suggestion about using helicopters in any explicit way.
Excerpt from Dialogue s12
A: we can we # we can either uh
guess we have to decide how to break up #
this
A: we can # make three trips with a helicopter
B: so i guess we should send one ambulance straight
off to # marketplace right # now # right
1.3 Portability
Finally, on a practical note, while TRIPS was designed
to separate discourse interpretation from task and domain
reasoning, in practice domain- and task-specific knowledge
ended up being used directly in the dialogue manager. This
made it more difficult to port the system to different domains and also hid the difference between general domain-independent
discourse behavior and task-specific behavior
in a particular domain.
To address these problems, we have developed a new architecture
for the "core" of our conversational system that
involves asynchronous interpretation, generation, and system
planning/acting processes. This design simplifies the
incremental development of new conversational behaviors.
In addition, our architecture has a clean separation between
discourse/dialogue modeling and task/domain levels of rea-
soning, which (a) enhances our ability to handle more complex
domains, (b) improves portability between domains;
and (c) allows for richer forms of task-level initiative.
The remainder of this paper describes our new architecture
in detail. The next section presents an overview of the
design and detailed descriptions of the major components.
A brief but detailed example illustrates the architecture in
action. We conclude with a discussion of related work on
conversational systems and the current status of our implementation
2. ARCHITECTURE DESCRIPTION
As mentioned previously, we have been developing conversational
agents for some years as part of the TRAINS [7] and
TRIPS [6] projects. TRIPS is designed as a loosely-coupled
collection of components that exchange information by passing
messages. There are components for speech processing
(both recognition and synthesis), language understanding,
dialogue management, problem solving, and so on.
In previous versions of the TRIPS system, the Dialogue
Manager component (DM) performed several functions:
. Interpretation of user input in context
. Maintenance of discourse context
. Planning the content (but not the form) of system response
. Managing problem solving and planning
Having all these functions performed by one component led
to several disadvantages. The distinction between domain
planning and discourse planning was obscured. It became
difficult to improve interpretation and response planning,
because the two were so closely knit. Incremental processing
was difficult to achieve, because all input had to pass
through the DM (even if no domain reasoning was going to
occur, but only discourse planning). Finally, porting the system
to new tasks and domains was hampered by the inter-connections
between the various types of knowledge within
the DM.
The new core architecture of TRIPS is shown in Figure 1.
There are three main processing components. The Interpretation
Manager (IM) interprets user input as it arises. It
broadcasts the recognized speech acts and their interpretation
as problem solving actions, and incrementally updates
the Discourse Context. The Behavioral Agent (BA) is most
closely related to the autonomous "heart" of the agent. It
plans system behavior based on its goals and obligations,
the user's utterances and actions, and changes in the world
state. Actions that involve communication and collaboration
with the user are sent to the Generation Manager (GM).
The GM plans the specific content of utterances and display
updates. Its behavior is driven by discourse obligations
(from the Discourse Context), and directives it receives from
the BA. The glue between the layers is an abstract model
of problem solving in which both user and system contributions
to the collaborative task can be expressed.
All three components operate asynchronously. For in-
stance, the GM might be generating an acknowledgment
while the BA is still deciding what to do. And if the user
starts speaking again, the IM will start interpreting these
new actions. The Discourse Context maintains the shared
state needed to coordinate interpretation and generation.
In the remainder of this section, we describe the major
components in more detail, including descriptions of the Discourse
Context, Problem Solving Model, and Task Manager.
2.1 Discourse Context
The TRIPS Discourse Context provides the information
to coordinate the system's conversational behavior. First,
it supplies sufficient information to generate and interpret
anaphoric expressions and to interpret forms of ellipsis. Given
the real-time nature of the interactions, and the fact that the
system may have its own goals and receive reports about
external events, the discourse context must also provide information
about the status of the turn (i.e. can I speak now
or should I wait?), and what discourse obligations are currently
outstanding (cf. [19]). The latter is especially important
when the system chooses to pursue some other goal
(e.g. notifying the user of an accident) rather than perform
the expected dialogue act (e.g. answering a question); to
be coherent and cooperative, the system should usually still address outstanding discourse obligations, even if this is done
simply by means of an apology.
Figure 1: New Core Architecture. (Diagram labels: Interpretation Manager, Generation Manager, Behavioral Agent, Discourse Context, Task Manager, Reference, Parser, Speech, Response Planner, Graphics; Task- and Domain-specific Knowledge Sources such as Planner, Scheduler, Monitors, and Events, and Exogenous Event Sources; flows labeled Interpretation, Generation, Behavior, Task Interpretation Requests, Task Execution Requests, Problem-Solving Acts recognized from user, and Problem-Solving Acts to perform.)
Finally, as we move towards
open-mike interactive systems, we must also identify
and generate appropriate grounding behaviors. To support
these needs, the TRIPS discourse context contains the following
information:
1. A model of the current salient entities in the discourse,
to support interpretation and generation of anaphoric expressions;
2. The structure and interpretation of the immediately
preceding utterance, to support ellipsis resolution and
clarification questions;
3. The current status of the turn: whether it is assigned
to one conversant or currently open.
4. A discourse history consisting of the speech-act interpretations
of the utterances in the conversation so far,
together with an indication of what utterances have
been grounded;
5. The current discourse obligations, typically to respond
to the other conversant's last utterance. Obligations
may act as a stack during clarification subdialogues, or
short-term interruptions, but this stack never becomes
very large.
This is a richer discourse model than found in most systems
(although see [12] for a model of similar richness).
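Schematically, and only as an illustration (these are not the actual TRIPS data structures), the five kinds of information above can be pictured as one record:

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional, Tuple

    class Turn(Enum):
        USER = "user"
        SYSTEM = "system"
        OPEN = "open"

    @dataclass
    class DiscourseContext:
        salient_entities: List[str] = field(default_factory=list)       # (1) for anaphora and ellipsis
        last_utterance: Optional[dict] = None                           # (2) structure of the previous utterance
        turn: Turn = Turn.OPEN                                          # (3) current turn status
        history: List[Tuple[dict, bool]] = field(default_factory=list)  # (4) (speech act, grounded?) pairs
        obligations: List[dict] = field(default_factory=list)           # (5) pending discourse obligations

        def push_obligation(self, obligation: dict) -> None:
            # obligations behave like a small stack during clarification subdialogues
            self.obligations.append(obligation)

        def discharge(self, obligation_id: str) -> None:
            self.obligations = [o for o in self.obligations if o.get("id") != obligation_id]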
2.2 Abstract Problem Solving Model
The core modules of the conversational agent, the IM, BA
and GM, use general models of collaborative problem solv-
ing, but these models remain at an abstract level, common
to all practical dialogues. This model is formalized as a set
of actions that can be performed on problem solving objects.
The problem solving objects include objectives (goals being
pursued), solutions (proposed courses of action or structures
that may achieve an objective), resources (objects used in
solutions, such as trucks for transportation, space in kitchen
design), and situations (settings in which solutions are used
to attain objectives).
In general, there are a number of di#erent actions agents
can perform as they collaboratively solve problems. Many
of these can apply to any problem solving object. For exam-
ple, agents may create new objectives, new solutions, new
situations (for hypothetical reasoning) and new resources
(for resource planning). Other actions in our abstract problem
solving model include select (e.g.focus on a particular
objective), evaluate (e.g. determine how long a solution
might take), compare (e.g. compare two solutions to the
same objective), modify (e.g. change some aspect of a so-
lution, change what resources are available), repair (e.g. fix
an old solution so that it works) and abandon (e.g. give up
on an objective, throw out a possible solution) 1 . Note that
because we are dealing with collaborative problem solving,
not all of these actions can be accomplished by one agent
alone. Rather, one agent needs to propose an action (the
agent is said to initiate the collaborative act), and the other
accept it (the other agent is said to complete the collaborative
act).
There are also explicit communication acts involved in
collaborative problem solving. Like all communicative acts,
these acts are performed by a single agent, but are only successful
if the other agent understands the communication.
The main communication acts for problem solving include
describe (e.g. elaborate on an objective, describe a particular
solution), explain (e.g. provide a rationale for a solution
or decision), and identify (e.g. communicate the existence of
a resource, select a goal to work on). These communication
acts, of course, may be used to accomplish other problem
solving goals as well. For instance, one might initiate the
creation of an objective by describing it.
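As a rough sketch (the names below are illustrative, not the system's actual ontology), the abstract model amounts to a small vocabulary of acts over a small vocabulary of objects, plus a record of who initiated and who completed each collaborative act:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class PSObject(Enum):
        OBJECTIVE = auto()
        SOLUTION = auto()
        RESOURCE = auto()
        SITUATION = auto()

    class PSAct(Enum):
        CREATE = auto()
        SELECT = auto()
        EVALUATE = auto()
        COMPARE = auto()
        MODIFY = auto()
        REPAIR = auto()
        ABANDON = auto()

    class CommAct(Enum):   # communication acts used to carry out the above
        DESCRIBE = auto()
        EXPLAIN = auto()
        IDENTIFY = auto()

    @dataclass
    class CollaborativeAct:
        act: PSAct
        obj: PSObject
        content: object                      # domain-specific content, e.g. "evacuate the city"
        initiator: str                       # "user" or "system"
        completed_by: Optional[str] = None   # set when the other agent accepts/completes the act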
2.3 Task Manager
The behaviors of the IM, BA and GM are defined in terms
of the abstract problem solving model. The details of what
these objects are in a particular domain, and how operations
are performed, are specified in the Task Manager (TM). The
TM supports operations intended to assist in both the recognition
of what the user is doing with respect to the task at
hand and the execution of problem solving steps intended
to further progress on the task at hand.
Specifically, the Task Manager must be able to:
1. Answer queries about objects and their role in the
task/domain (e.g. is an ambulance a resource? Is loading
a truck an in-domain plannable/executable action?
Is "evacuating a city" a possible in-domain goal?)
2. Provide the interface between the generic problem solving
acts used by the BA (e.g. create a solution) and
the actual task-specific agents that perform the tasks
(e.g. build a course of action to evacuate the city using
two trucks)
3. Provide intention recognition services to the IM (e.g.
can "going to Avon" plausibly be an extension of the
current course of action?)
This list is not meant to be exhaustive, although it has
been developed based on our experiences building systems
in several problem-solving domains.
In our architecture, the Task Manager maps abstract problem
solving acts onto the capabilities of the knowledge-based
agents at its disposal. For example, in one of our planning
domains, the Task Manager uses a planner, router, sched-
uler, and temporal knowledge base to answer queries and
create or modify plans.
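A hypothetical interface for such a component, mirroring the three responsibilities listed above (the method names are ours, not the actual TRIPS API), might look like this:

    from abc import ABC, abstractmethod

    class TaskManagerInterface(ABC):
        # (1) answer queries about objects and their role in the task/domain
        @abstractmethod
        def is_resource(self, obj) -> bool: ...

        @abstractmethod
        def is_plannable_action(self, action) -> bool: ...

        @abstractmethod
        def is_possible_goal(self, goal) -> bool: ...

        # (2) map generic problem solving acts onto task-specific agents
        @abstractmethod
        def perform(self, ps_act):
            """Dispatch e.g. 'create solution' to a planner, router or scheduler."""

        # (3) intention recognition services for the Interpretation Manager
        @abstractmethod
        def plausibly_extends(self, action, current_plan) -> bool: ...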
2.4 Interpretation Manager
The Interpretation Manager (IM) interprets incoming parsed
utterances and generates updates to the Discourse Context.
First, it produces turn-taking information. With a push-
to-talk interface this is simple. When the user presses the
button they have taken the turn; when they release it they
have released the turn. As we move to open-mike, identifying
turn-taking behavior will require more sophisticated in-
terpretation. TRIPS uses an incremental chart parser that
will assist in this process by broadcasting constituents as
they are recognized.
The principal task of the IM, however, is to identify the
intended speech act, the collaborative problem solving act
that it furthers, and the system's obligations arising from
the interaction. For instance, the utterance "The bridge
over the Genesee is blocked" would be interpreted in some
circumstances as a problem statement, the intention being
to initiate replanning. The IM would broadcast a discourse-level
obligation to respond to a statement, and announce
that the user has initiated the collaborative problem solving
act of identifying a problem as a means of initiating replanning
(say, to change the route currently planned). In other
circumstances, the same utterance might be recognized as
the introduction of a new goal (i.e. to reopen the bridge).
The rules to construct these interpretations are based on
the abstract problem solving model and specific decisions
are made by querying the Task Manager. For instance, in
the above example, key questions might be "is there an existing
plan using the bridge?" (an affirmative answer indicates
the replanning interpretation) and "is making the bridge
available a reasonable high-level goal in this domain?" (an
affirmative answer indicates the introduce-goal interpretation).
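To give the flavour of such a rule, here is a much-simplified sketch of how the bridge example could be decided; the Task Manager queries (plan_uses, is_possible_goal) are hypothetical stand-ins for the real interface:

    def interpret_problem_statement(blocked_object, task_manager, current_plan):
        # e.g. blocked_object = "bridge-over-genesee" for "The bridge over the Genesee is blocked"
        if current_plan is not None and task_manager.plan_uses(current_plan, blocked_object):
            # an existing plan relies on the bridge: the statement initiates replanning
            return {"speech_act": "problem-statement",
                    "ps_act": ("initiate", "modify-solution"),
                    "obligation": "respond-to-statement"}
        if task_manager.is_possible_goal(("make-available", blocked_object)):
            # otherwise treat it as introducing a new objective (reopen the bridge)
            return {"speech_act": "problem-statement",
                    "ps_act": ("initiate", "create-objective"),
                    "obligation": "respond-to-statement"}
        # fall back to a plain inform with a simple acknowledgment obligation
        return {"speech_act": "inform", "ps_act": None, "obligation": "acknowledge"}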
2.5 Generation Manager
The Generation Manager (GM), which performs content
planning, receives problem solving goals requiring generation
from the Behavioral Agent (BA) and discourse obligations
from the Discourse Context. The GM's task is to
synthesize these input sources and produce plans (sequences
of discourse acts) for the system's discourse contributions.
Because the GM operates asynchronously from the IM, it
can be continuously planning. For instance, it is informed
when the user's turn ends and can plan simple take-turn and
keep-turn acts even in the absence of further information
from the IM or the BA, using timing information.
In the case of grounding behaviors and some conventional
interactions (e.g. greetings), the GM uses simple rules based
on adjacency pairs; no reference to the problem solving state
is necessary. In other cases, it may need information from
the BA in order to satisfy a discourse obligation. It may
also receive goals from the Behavioral Agent that it can
plan to satisfy even in the absence of a discourse obligation,
for instance when something important changes in the world
and the BA wants to notify the user.
The GM can also plan more extensive discourse contributions
using rhetorical relations expressed as schemas, for
instance to explain a fact or proposal or to motivate a proposed
action. It has access to the discourse context as well
as to sources for task and domain-level knowledge.
When the GM has constructed a discourse act or set of
acts for production, it sends the act(s) and associated content
to the Response Planner, which performs surface gen-
eration. The RP comprises several subcomponents; some
are template-based, some use a TAG-based grammar, and
one performs output selection and coordination. It can realize
turn-taking, grounding and speech acts in parallel and
in real-time, employing different modalities where useful. It
can produce incremental output at two levels: it can produce
the output for one speech act before others in a plan
are realized; and where there is propositional content, it can
produce incremental output within the sentence [10]. If a
discourse act is realized and produced successfully, the GM
is informed and sends an update to the Discourse Context.
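Schematically, a single content-planning pass might reconcile pending obligations with whatever the Behavioral Agent has supplied roughly as follows (an illustrative sketch; matching directives to obligations is simplified here to a shared identifier):

    def generation_step(context, ba_directives, response_planner):
        acts = []
        for obligation in list(context.obligations):
            directive = next((d for d in ba_directives if d.get("why") == obligation.get("id")), None)
            if directive is not None:
                # content is available: answer and mark the obligation satisfied
                acts.append({"act": "answer", "content": directive["what"], "satisfies": obligation["id"]})
            else:
                # nothing from the BA yet: at least acknowledge, keeping the obligation open
                acts.append({"act": "acknowledge", "satisfies": None})
        for directive in ba_directives:
            if directive.get("why") is None:
                # system-initiated content, e.g. a warning about a new event
                acts.append({"act": "inform", "content": directive["what"], "satisfies": None})
        for act in acts:
            response_planner.realize(act)               # surface generation: speech and/or display
            if act["satisfies"] is not None:
                context.discharge(act["satisfies"])     # update the Discourse Context
        return acts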
2.6 Behavioral Agent
As described above, the Behavioral Agent (BA) is responsible
for the overall problem solving behavior of the system.
This behavior is a function of three aspects of the BA's
environment: (1) the interpretation of user utterances and
actions in terms of problem solving acts, as produced by the
Interpretation Manager; (2) the persistent goals and obligations
of the system, in terms of furthering the problem solving
task; (3) Exogenous events of which the BA becomes
aware, perhaps by means of other agents monitoring the
state of the world or performing actions on the BA's behalf.
As we noted previously, most dialogue systems (including
previous versions of TRIPS) respond primarily to the first
of these sources of input, namely the user's utterances. In
some systems (including previous versions of TRIPS) there
is some notion of the persistent goals and/or obligations of
the system. Often this is implicit and "hard-coded" into the
rules governing the behavior of the system. In realistic conversational
systems, however, these would take on a much
more central role. Just as people do, the system must juggle
its various needs and obligations and be able to talk about
them explicitly.
Finally, we think it is crucial that conversational systems
get out into the world. Rather than simply looking up answers
in a database or even conducting web queries, a conversational
system helping a user with a real-world task is
truly an agent embedded in the world. Events occur that
are both exogenous (beyond its control) and asynchronous
(occurring at unpredictable times). The system must take
account of these events and integrate them into the conver-
sation. Indeed in many real-world tasks, this "monitoring"
function constitutes a significant part of the system's role.
The Behavioral Agent operates by reacting to incoming
events and managing its persistent goals and obligations. In
the case of user-initiated problem solving acts, the BA determines
whether to be cooperative and how much initiative
to take in solving the joint problem. For example, if the user
initiates creating a new objective, the system can complete
the act by adopting a new problem solving obligation to find
a solution. It could, however, take more initiative, get the
Task Manager to compute a solution (perhaps a partial or
tentative one), and further the problem solving by proposing
the solution to the user.
The BA also receives notification about events in the world
and chooses whether to communicate them to the user and/or
adopt problem solving obligations about them. For exam-
ple, if the system receives a report of a heart attack victim
needing attention, it can choose to simply inform the user
of this fact (and let them decide what to do about it). More
likely, it can decide that something should be done about the
situation, and so adopt the intention to solve the problem
(i.e. get the victim to a hospital).
Thus the system's task-level initiative-taking behavior is
determined by the BA, based on the relative priorities of
its goals and obligations. These problem-solving obligations
determine how the system will respond to new events, including
interpretations of user input.
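A skeletal version of this decision process, with the three sources of input weighed by explicit priorities (purely illustrative; the real agent reasons over much richer structures), could read:

    def behavioral_step(user_ps_acts, world_events, goals_and_obligations):
        # gather everything the agent could act on next, each item carrying a priority
        candidates = (
            [dict(c, kind="user-ps-act") for c in user_ps_acts]
            + [dict(c, kind="world-event") for c in world_events]
            + [dict(c, kind="goal") for c in goals_and_obligations]
        )
        if not candidates:
            return None
        chosen = max(candidates, key=lambda c: c.get("priority", 0))
        if chosen["kind"] == "user-ps-act":
            # decide how much initiative to take: just complete the act, or go further
            return {"action": "further-problem-solving", "source": chosen}
        if chosen["kind"] == "world-event":
            # e.g. a reported heart-attack victim: inform the user and/or adopt an obligation
            return {"action": "notify-and-adopt-obligation", "source": chosen}
        return {"action": "pursue-goal", "source": chosen}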
2.7 Infrastructure
The architecture described in this paper is built on an extensive
infrastructure that we have developed to support
e#ective communication between the various components
making up the conversational system. Space precludes an
extended discussion of these facilities, but see [1] for further
details.
System components communicate using the Knowledge
Query and Manipulation Language (KQML [11]), which provides
a syntax and high-level semantics for messages exchanged
between agents. KQML message tra#c is mediated
by a Facilitator that sits at the hub of a star topology
network of components. While a hub may seem to be a
bottleneck, in practice this has not been a problem. On
the contrary, the Facilitator provides a variety of services
that have proven indispensable to the design and development
of the overall system. These include: robust initial-
ization, KQML message validation, naming and lookup ser-
vices, broadcast facilities, subscription (clients can subscribe
in order to receive messages sent by other clients), and advertisement
(clients may advertise their capabilities).
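For concreteness, a component can build and register such messages with a few lines of string handling; the performatives shown (tell, subscribe) are standard KQML, but the exact parameter and pattern conventions of the TRIPS Facilitator are our own guess here:

    def kqml(performative, **params):
        # build a KQML message string such as (tell :sender im :content (...))
        fields = " ".join(":%s %s" % (key.replace("_", "-"), value) for key, value in params.items())
        return "(%s %s)" % (performative, fields)

    # announce a turn-taking event
    msg = kqml("tell", sender="im", content="(done (take-turn :who user))")

    # ask the Facilitator to forward matching 'tell' traffic (illustrative pattern syntax)
    subscription = kqml("subscribe", sender="gm", content="(tell . *)")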
The bottom line is that an architecture for conversational
systems such as the one we are proposing in this paper would
be impractical, if not impossible, without extensive infrastructure
support. While these may seem like "just implementation
details," in fact the power and flexibility of the
TRIPS infrastructure enables us to design the architecture
to meet the needs of realistic conversation and to make it
work.
3. EXAMPLE
An example will help clarify the relationships between the
various components of our architecture and the information
that flows between them, as well as the necessity for each.
Consider the situation in which the user asks "Where are
the ambulances?" First, the speech recognition components
notice that the user has started speaking. This is interpreted
by the Interpretation Manager as taking the turn, so
it indicates that a TAKE-TURN event has occurred:
(tell (done (take-turn :who user)))
The Generation Manager might use this information to cancel
or delay a planned response to a previous utterance. It
can also be used to generate various grounding behaviors
(e.g. changing a facial expression, if such a capability is sup-
ported). When the utterance is completed, the IM interprets
the user's having stopped speaking as releasing the turn:
(tell (done (release-turn :who user)))
At this point, the GM may start planning (or executing) an
appropriate response.
The Interpretation Manager also receives a logical form
describing the surface structure of this request for infor-
mation. It performs interpretation in context, interacting
with the Task Manager. In this case, it asks the Task Manager
if ambulances are considered resources in this domain.
With an affirmative response, it interprets this question as initiating the problem solving act of identifying relevant resources. Note that contextual interpretation is critical: the
user wants to know where the usable ambulances are, not
where all known ambulances might be. The IM then generates
1. A message to the Discourse Context recording the user's
utterance in the discourse history together with its
structural analysis from the parser.
2. A message to the Discourse Context that the system
now has an obligation to respond to the question:
(tell
(introduce-obligation
:id OBLIG1
:who system
:what (respond-to
(wh-question
:id
:who user
:what (at-loc (the-set ?x
(type ?x ambulance))
(wh-term ?l
(type ?l location)))
:why (initiate PS1)))))
This message includes the system's obligation, a representation
of the content of the question, and a connection
to the recognized problem solving act (defined in
the message described next). The IM does not specify
how the obligation to respond to the question should
be discharged.
3. A message to the Behavioral Agent that the user has
initiated a collaborative problem solving act, namely
attempting to identify a resource:
(tell
(done
(initiate
:who user
:what (identify-resource
:id PS1
:what (set-of ?x
(type ?x ambulance))))))
This message includes the problem solving act recognized
by the IM as the user's intention, and a representation
of the content of the question.
When the Discourse Context receives notification of the
new discourse obligation, this fact is broadcast to any subscribed
components, including the Generation Manager. The
GM cannot answer the question without getting a response
from the Behavioral Agent. So it adopts the goal of answer-
ing, and waits for information from the BA. While waiting,
it may plan and produce an acknowledgment of the question.
When the Behavioral Agent receives notification that the
user has initiated a problem solving act, one of four things
can happen depending on the situation. We will consider
each one in sequence.
Do the Right Thing It may decide to "do its part" and
try to complete (or at least further) the problem solv-
ing. In this case, it would communicate with other
components to answer the query about the location of
the ambulances, and then send the GM a message like:
(request
(identify-resource
:who system
:what (and
(at-loc amb-1 rochester)
...)
:why (complete :who system :what PS1)))
The BA expects that this will satisfy its problem solving
goal of completing the identify-resources act initiated
by the user, although it can't be sure until it
hears back from the IM that the user understood the
response.
Clarification The BA may try to identify the resource but
fail to do so. If a specific problem can be identified as
having caused the failure, then it could decide to initiate
a clarification to obtain the information needed.
For instance, say the dialogue has so far concerned a
particular subtask involving a particular type of am-
bulances. It might be that the BA cannot decide if
it should identify just the ambulances of the type for
this subtask, or whether the user wants to know where
all usable ambulances are. So it might choose to tell
the GM to request a clarification. In this case, the BA
retains its obligation to perform the identify-resources
act.
Failure On the other hand, the BA may simply fail to identify
the resources that the user needs. For instance, the
agents that it uses to answer may not be responding,
or it may be that the question cannot be answered. In
this case, it requests the GM to notify the user of fail-
ure, and abandons (at least temporarily) its problem
solving obligation.
Ignoring the Question Finally, the BA might decide that
some other information is more important, and send
that information to the GM (e.g. if a report from the
world indicates a new and more urgent task for the user
and system to respond to). In this case, the BA retains
the obligation to work on the pending problem solving
action, and will return to it when circumstances
permit.
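The four outcomes can be pictured as a single dispatch in the Behavioral Agent (a sketch only; perform and the result fields are hypothetical):

    def handle_initiated_act(ps_act, task_manager, urgent_events):
        if urgent_events:
            # some other information is judged more important: report it first and
            # keep the problem solving obligation for later
            return {"to_gm": ("inform", urgent_events[0]), "keep_ps_obligation": True}
        result = task_manager.perform(ps_act)   # may return an answer, a question, or None
        if result is None:
            # failure: ask the GM to notify the user, abandon the obligation for now
            return {"to_gm": ("report-failure", ps_act), "keep_ps_obligation": False}
        if result.get("needs_clarification"):
            # underspecified request: ask the GM to generate a clarification question
            return {"to_gm": ("request-clarification", result["question"]), "keep_ps_obligation": True}
        # "do the right thing": complete the collaborative act and report the answer
        return {"to_gm": ("identify-resource", result["answer"]),
                "why": ("complete", ps_act), "keep_ps_obligation": False}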
Whatever the situation, the Generation Manager receives
some abstract problem solving act to perform. It then needs
to reconcile this act with its discourse obligation OBLIG1.
Of course, it can satisfy OBLIG1 by answering the ques-
tion. It can also satisfy OBLIG1 by generating a clarification
request, since the clarification request is a satisfactory
response to the question. (Note that the obligation to answer
the original question is maintained as a problem solving
goal, not a discourse obligation). In the case of a failure,
OBLIG1 could be satisfied by generating an apology and a
description of the reason the request could not be satisfied.
If the BA ignores the question, the GM might apologize
and add a promise to address the issue later, before producing
the unrelated information. The apology would satisfy
OBLIG1. For a very urgent message (e.g. a time critical
warning), it might generate the warning immediately, leaving
the discourse obligation OBLIG1 unsatisfied, at least
temporarily.
The GM sends discourse acts with associated content to
the Response Planner, which produces prosodically-annotated
text for speech synthesis together with multimodal display
commands. When these have been successfully (or partially
in the case of a user interruption) produced, the GM is informed
and notifies the Discourse Context as to which discourse
obligations should have been met. It also gives the
Discourse Context any expected user obligations that result
from the system's utterances.
The Interpretation Manager uses knowledge of these expectations
to aid subsequent interpretation. For example,
if an answer to the user's question is successfully produced,
then the user has an obligation to acknowledge the answer.
Upon receiving an acknowledgment (or inferring an implicit
acknowledge), the IM notifies the Discourse Context that
the obligation to respond to the question has truly been
discharged, and might notify the BA that the collaborative
"Identify-Resource" act PS1 has been completed.
4. IMPLEMENTATION
The architecture described in this paper arose from a long-term
e#ort in building spoken dialogue systems. Because
we have been able to easily port most components from our
previous system into the new one, the system itself has a
wide range of capabilities that were already present in earlier
versions. Specifically, it handles robust, near real-time
spontaneous dialogue with untrained users as they solve simple
tasks such as trying to find routes on a train map and
planning evacuation of personnel from an island (see [1] for
an overview of the di#erent domains we have implemented).
The system supports cooperative, incremental development
of plans with clarifications, corrections, modifications and
comparison of di#erent options, using unrestricted, natural
language (as long as the user stays focussed on the task at
hand). The new architecture extends our capabilities to better
handle the incremental nature of interpretation, the fact
that interpretation and generation must be interleaved, and
the fact that realistic dialogue systems must also be part
of a broader outside world that is not static. A clean separation
between linguistic and discourse knowledge on one
hand, and task and domain knowledge on the other, both
clarifies the role of the individual components and improves
portability to new tasks.
We produced an initial demonstration of our new architecture
in August 2000, in which we provided the dialogue
capabilities for an emergency relief planning domain which
used simulation, scheduling, and planning components built
by research groups at other institutions. Current work involves
extending the capabilities of individual components
(the BA and GM in particular) and porting the system to
the TRIPS-911 domain.
5. RELATED WORK
Dialogue systems are now in use in many applications.
Due to space constraints, we have selected only some of these
for comparison to our work. They cover a range of domains,
modalities and dialogue management types:
. Information-seeking systems [2, 5, 8, 9, 13, 15, 16] and
planning systems [4, 14, 18];
. Speech systems [13, 14, 15, 16], multi-modal systems
[5, 8, 18] and embodied conversational agents [2, 9];
. Systems that use schemas or frames to manage the
dialogue [9, 13, 14, 16], ones that use planning [4],
ones that use models of rational interaction [15], and
ones that use dialogue grammars or finite state models
[5, 8, 18].
Most of the systems we looked at use a standard interpretation-
dialogue management-generation core, with the architecture
being either a pipeline or organized around a message-passing
hub with a pipeline-like information flow. Our architecture
uses a more fluid processing model, which enables
the differences we outline below.
5.1 Separation of domain/task reasoning from
discourse reasoning
Since many dialogue systems are information-retrieval sys-
tems, there may be fairly little task reasoning to perform.
For that reason, although many of these systems have domain
models or databases separate from the dialogue manager
[5, 8, 9, 13, 15, 16], they do not have separate task
models. By contrast, our system is designed to be used in
domains such as planning, monitoring, and design, where
task-level reasoning is crucial not just for performing the
task but also for interpreting the user's actions and utter-
ances. Separation of domain knowledge and task reasoning
from discourse reasoning - through the use of our Task
Manager, various world models, the abstract problem solving
model and the Behavioral Agent - allows us access to
this information without compromising portability and flexibility.
CommandTalk [18], because it is a thin layer over a stand-alone
planner-simulator, has little direct involvement in task
reasoning. However, the dialogue manager incorporates some
domain-dependent task reasoning, e.g. in the discourse states
for certain structured form-filling dialogues.
In the work of Cassell et al [2], the response planner performs
deliberative task and discourse reasoning to achieve
communicative and task-related goals. In our architecture,
there is a separation between task- and discourse-level plan-
ning, with the Behavioral Agent handling the first type of
goal and the Generation Manager the other.
Chu-Carroll and Carberry's CORE [4] is not a complete
system, but does have a specification for input to the response
planner that presumably would come from a dialogue
manager. The input specification allows for domain,
problem solving, belief and discourse-level intentions. Our
generation manager reasons over discourse-level intentions;
it obtains information about domain, problem solving and
belief intentions from other modules.
The CMU Communicator systems have a dialogue man-
ager, but use a set of domain agents to "handle all domain-specific
information access and interpretation, with the goal
of excluding such computation from the dialogue management
component" [14]. However, the dialogue manager uses
task or domain-dependent schemas to determine its behavior
5.2 Separation of interpretation from response-
planning
Almost all the systems we examined combine interpretation
with response planning in the dialogue manager. The
architecture outlined by Cassell et al [2], however, separates
the two. It includes an understanding module (performing
the same kinds of processing performed by our Interpretation
Manager); a response planner (performing deliberative
reasoning); and a reaction module (which performs action
coordination and handles reactive behaviors such as turn-
taking). We do not have a separate component to process
reactive behaviors; we get reactive behaviors because different
types of goals take different paths through our system. Cassell et al's "interactional" goals (e.g. turn-taking,
grounding) are handled completely by the discourse components
of our system (the discourse interpretation and response
planner); the handling of their "propositional" goals
may involve domain or task reasoning and therefore will involve
our Behavioral Agent and problem-solving modules.
Fujisaki et al [8] divide discourse processing into a user
model and a system model. As in other work [4, 15], this is
an attempt to model the beliefs and knowledge of the agents
participating in the discourse, rather than the discourse it-
self. However, interpretation must still be completed before
response planning begins. Furthermore, the models of
user and system are finite-state models; for general conversational
agents more flexible models may be necessary.
6. CONCLUSIONS
We have described an architecture for the design and implementation
of conversational systems that participate effectively
in realistic practical dialogues. We have emphasized
the fact that interpretation and generation must be
interleaved and the fact that dialogue systems in realistic
settings must be part of and respond to a broader "world
outside." These considerations have led us to an architecture
in which interpretation, generation, and system behavior are
functions of autonomous components that exchange information
about both the discourse and the task at hand. A
clean separation between linguistic and discourse knowledge
on the one hand, and task- and domain-specific information
on the other hand, both clarifies the roles of the individual
components and improves portability to new tasks and
domains.
7. ACKNOWLEDGMENTS
This work is supported by ONR grant no. N00014-95-1-
1088, DARPA grant no. F30602-98-2-0133, and NSF grant
no. IRI-9711009.
8.
--R
An architecture for a generic dialogue shell.
Requirements for an architecture for embodied conversational characters.
Initiative in collaborative interactions - its cues and effects
Collaborative response generation in planning dialogues.
An architecture for multi-modal natural dialogue systems
TRIPS: An integrated intelligent problem-solving assistant
The design and implementation of the TRAINS-96 system: A prototype mixed-initiative planning assistant
Principles and design of an intelligent system for information retrieval over the internet with a multimodal dialogue interface.
The August spoken dialogue system.
Incremental generation for real-time applications
A proposal for a new KQML specification.
Modelling grounding and discourse obligations using update rules.
Design strategies for spoken dialog systems.
Creating natural dialogs in the Carnegie Mellon Communicator system.
ARTIMIS: Natural dialogue meets rational agency.
The Monroe corpus.
The CommandTalk spoken dialogue system.
Discourse obligations in dialogue processing.
--TR
TRIPs
The Design and Implementation of the TRAINS-96 System: A Prototype Mixed-Initiative Planning Assistant
The Monroe Corpus
--CTR
Javier Calle-Gómez , Ana García-Serrano , Paloma Martínez, Intentional processing as a key for rational behaviour through Natural Interaction, Interacting with Computers, v.18 n.6, p.1419-1446, December, 2006
Emerson Cabrera Paraiso , Jean-Paul A. Barthès, Une interface conversationnelle pour les agents assistants appliqués à des activités professionnelles, Proceedings of the 16th conference on Association Francophone d'Interaction Homme-Machine, p.243-246, August 30-September 03, 2004, Namur, Belgium
Robert Porzel , Iryna Gurevych, Towards context-adaptive utterance interpretation, Proceedings of the 3rd SIGdial workshop on Discourse and dialogue, p.154-161, July 11-12, 2002, Philadelphia, Pennsylvania
James Allen , Nate Blaylock , George Ferguson, A problem solving model for collaborative agents, Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2, July 15-19, 2002, Bologna, Italy
Judith Hochberg , Nanda Kambhatla , Salim Roukos, A flexible framework for developing mixed-initiative dialog systems, Proceedings of the 3rd SIGdial workshop on Discourse and dialogue, p.60-63, July 11-12, 2002, Philadelphia, Pennsylvania
Kenneth D. Forbus , Thomas R. Hinrichs, Companion cognitive systems: a step toward human-level AI, AI Magazine, v.27 n.2, p.83-95, July 2006
Ana-Maria Popescu , Oren Etzioni , Henry Kautz, Towards a theory of natural language interfaces to databases, Proceedings of the 8th international conference on Intelligent user interfaces, January 12-15, 2003, Miami, Florida, USA
Meriam Horchani , Laurence Nigay , Franck Panaget, A platform for output dialogic strategies in natural multimodal dialogue systems, Proceedings of the 12th international conference on Intelligent user interfaces, January 28-31, 2007, Honolulu, Hawaii, USA
Sheila Garfield , Stefan Wermter, Call classification using recurrent neural networks, support vector machines and finite state automata, Knowledge and Information Systems, v.9 n.2, p.131-156, February 2006
Michelle X. Zhou , Keith Houck , Shimei Pan , James Shaw , Vikram Aggarwal , Zhen Wen, Enabling context-sensitive information seeking, Proceedings of the 11th international conference on Intelligent user interfaces, January 29-February 01, 2006, Sydney, Australia
Michelle X. Zhou , Vikram Aggarwal, An optimization-based approach to dynamic data content selection in intelligent multimedia interfaces, Proceedings of the 17th annual ACM symposium on User interface software and technology, October 24-27, 2004, Santa Fe, NM, USA
Iryna Gurevych , Rainer Malaka , Robert Porzel , Hans-Peter Zorn, Semantic coherence scoring using an ontology, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, p.9-16, May 27-June 01, 2003, Edmonton, Canada
Ryuichiro Higashinaka , Mikio Nakano , Kiyoaki Aikawa, Corpus-based discourse understanding in spoken dialogue systems, Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, p.240-247, July 07-12, 2003, Sapporo, Japan
G. Aist , J. Dowding , B. A. Hockey , M. Rayner , J. Hieronymus , D. Bohus , B. Boven , N. Blaylock , E. Campana , S. Early , G. Gorrell , S. Phan, Talking through procedures: an intelligent space station procedure assistant, Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics, April 12-17, 2003, Budapest, Hungary
Nate Blaylock , James Allen , George Ferguson, Synchronization in an asynchronous agent-based architecture for dialogue systems, Proceedings of the 3rd SIGdial workshop on Discourse and dialogue, p.1-10, July 11-12, 2002, Philadelphia, Pennsylvania
Ryuichiro Higashinaka , Noboru Miyazaki , Mikio Nakano , Kiyoaki Aikawa, Evaluating discourse understanding in spoken dialogue systems, ACM Transactions on Speech and Language Processing (TSLP), 1, p.1-20, 2004
Jeremy Goecks , Dan Cosley, NuggetMine: intelligent groupware for opportunistically sharing information nuggets, Proceedings of the 7th international conference on Intelligent user interfaces, January 13-16, 2002, San Francisco, California, USA
Phillipa Oaks , Arthur Hofstede, Guided interaction: A mechanism to enable ad hoc service interaction, Information Systems Frontiers, v.9 n.1, p.29-51, March 2007
Emerson Cabrera Paraiso , Jean-Paul A. Barths, An intelligent speech interface for personal assistants applied to knowledge management, Web Intelligence and Agent System, v.3 n.4, p.217-230, October 2005
Alexander Yates , Oren Etzioni , Daniel Weld, A reliable natural language interface to household appliances, Proceedings of the 8th international conference on Intelligent user interfaces, January 12-15, 2003, Miami, Florida, USA
M. Turunen , J. Hakulinen , K.-J. Rih , E.-P. Salonen , A. Kainulainen , P. Prusi, An architecture and applications for speech-based accessibility systems, IBM Systems Journal, v.44 n.3, p.485-504, August 2005
Dawn N. Jutla , Dimitri Kanevsky, Adding User-Level SPACe: Security, Privacy, and Context to Intelligent Multimedia Information Architectures, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.77-84, December 18-22, 2006
Jamal Bentahar , Karim Bouzoubaa , Bernard Moulin, A Computational Framework for Human/Agent Communication Using Argumentation, Implicit Information, and Social Influence, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.372-377, December 18-22, 2006
James Allen , George Ferguson , Nate Blaylock , Donna Byron , Nathanael Chambers , Myroslava Dzikovska , Lucian Galescu , Mary Swift, Chester: towards a personal medication advisor, Journal of Biomedical Informatics, v.39 n.5, p.500-513, October 2006
Martin Beveridge , John Fox, Automatic generation of spoken dialogue from medical plans and ontologies, Journal of Biomedical Informatics, v.39 n.5, p.482-499, October 2006 | architectures for intelligent;cooperative;distributed;conversational systems;multimodal interfaces |
360091 | Separation of Transparent Layers using Focus. | Consider situations where the depth at each point in the scene is multi-valued, due to the presence of a virtual image semi-reflected by a transparent surface. The semi-reflected image is linearly superimposed on the image of an object that is behind the transparent surface. A novel approach is proposed for the separation of the superimposed layers. Focusing on either of the layers yields initial separation, but crosstalk remains. The separation is enhanced by mutual blurring of the perturbing components in the images. However, this blurring requires the estimation of the defocus blur kernels. We thus propose a method for self calibration of the blur kernels, given the raw images. The kernels are sought to minimize the mutual information of the recovered layers. Autofocusing and depth estimation in the presence of semi-reflections are also considered. Experimental results are presented. | Introduction
The situation in which several (typically two) linearly superimposed contributions exist is often
encountered in real-world scenes. For example [12, 20], looking out of a car (or room) window,
we see both the outside world (termed real object [35, 36, 41, 42, 43, 45]), and a semi-reflection
of the objects inside, termed virtual objects. The treatment of such cases is important, since
the combination of several unrelated images is likely to degrade the ability to analyze and
understand them. The detection of the phenomenon is of importance itself, since it indicates
the presence of a clear, transparent surface in front of the camera, at a distance closer than the
imaged objects [35, 42, 45].
The term transparent layers has been used to describe situations in which a scene is semi-
reflected from a transparent surface [6, 12, 58]. It means that the image is decomposed into
depth ordered layers, each with an associated map describing its intensity (and, if applicable, its
motion [58]). We adopt this terminology, but stress the fact that this work does not deal with
imaging through an object with variable opacity. Approaches to recovering each of the layers
by nulling the others relied mainly on triangulation methods like motion [6, 12, 13, 22, 36, 49],
and stereo [7, 48]. Algorithms were developed to cope with multiple superimposed motion
fields [6, 49] and ambiguities in the solutions were discovered [47, 60]. Another approach to
the problem has been based on polarization cues [18, 20, 35, 41, 42, 43, 45]). However, that
approach needs a polarizing filter to be operated with the camera, may be unstable when the
angle of incidence is very low, and is difficult to generalize to cases in which more than two
layers exist.
In recent years, range imaging relying on the limited depth of field (DOF) of lenses has
been gaining popularity. An approach for depth estimation using a monocular system based on
focus sensing [14, 16, 25, 31, 32, 33, 34, 52, 53, 61] is termed Depth from Focus (DFF) in the
computer-vision literature. In that approach, the scene is imaged with different focus settings
(e.g., by axially moving the sensor, the object or the lens), thus obtaining image slices of the
scene. In each slice, a limited range of depth is in focus. Depth is extracted by a search for
the slice that maximizes some focus criterion [21, 25, 31, 32, 34, 52, 55, 62] (usually related to
the two dimensional intensity variations in the region), and corresponds to the plane of best
focus. DFF and image-based rendering based on focused slices has usually been performed on
opaque (and occluding) layers. In particular, just recently a method has been presented for
generating arbitrarily focused images and other special effects performed separately on each
occluding layer [5].
Physical modeling of DOF as applied to processing images of transparent objects has long
been considered in the field of microscopy [2, 3, 8, 10, 15, 17, 19, 23, 29, 30, 37, 51], where the
defocus effect is most pronounced. An algorithm for DFF was demonstrated [23] on a layered
microscopic object, but due to the very small depth of field used, the interfering layer was very
blurred so no reconstruction process was necessary. Note that microscopic specimens usually
contain detail in a continuum of depth, and there is correlation between adjacent layers, so
their crosstalk is not as disturbing as in semi-reflections. Fundamental consequences of the
imaging operation (e.g. the loss of biconic regions in the three dimensional frequency domain)
that pose limits on the reconstruction ability, and the relation to tomography, were discovered
[9, 29, 51, 50, 54]. Some of the three dimensional reconstruction methods used in microscopy
[2, 3, 10] may be applicable to the case of discrete layers as well.
We study the possibility of exploiting the limited depth of field to detect, separate and
recover the intensity distribution of transparent, multi-valued layers. Focusing yields an initial
separation, but crosstalk remains. The layers are separated based on the focused images, or by
changing the lens aperture. The crosstalk is attenuated by mutual blurring of the disturbing
components in the images (Section 2). Proper blurring requires the point spread functions (PSF)
in the images to be well estimated. A wrong PSF will leave each recovered layer contaminated
by its complementary. We therefore study the effect of error in the PSFs. Then, we propose
a method for estimating the PSFs from the raw images (Section 3). It is based on seeking
the minimum of the mutual information between the recovered layers. Recovery experiments
are described in Section 4. We also discuss the implication of semi-reflections on the focusing
process and the depth extracted from it (Section 5). Preliminary and partial results were
presented in [40, 44].
2 Recovery from focused slices
2.1 Using two focused slices
Consider a two-layered scene. Suppose that either manually or by some automatic procedure
(see Section 5), we acquire two images, such that in each image one of the layers is in focus.
Assume for the moment that we also have an estimate of the blur kernel operating on each layer,
when the camera is focused on the other one. This assumption may be satisfied if the imaging
system is of our design, or by calibration. Due to the change of focus settings, the images may
undergo a scale change. If a telecentric imaging system (Fig. 1) is used, this problem is avoided
[33, 59]. Otherwise, we assume that the scale change 1 is corrected during preprocessing [27].
Let layer f_1 be superimposed 2 on layer f_2. We consider only the slices g_a and g_b, in which
either layer f_1 or layer f_2, respectively, is in focus. The other layer is blurred. Modeling the
blur as convolution with blur kernels,

    g_a = f_1 + f_2 * h_2a ,    g_b = f_1 * h_1b + f_2 .    (1)

(The assumption of a space-invariant response to constant depth objects is very common in
analysis of defocused images, and is approximately true for paraxial systems or in systems
corrected for aberrations). If a telecentric system is used, h_1b = h_2a.
1 The depth dependence of the scale change can typically be neglected.
2 The superposition is linear, since the real/virtual layers are the images of the objects multiplied by the
transmission/reflection coe#cients of the semi-reflecting surface, and these coe#cients do not depend on the light
intensities. The physical processes in transparent/semi-reflected scenes are described in Refs. [41, 42, 43, 45].
Nonlinear transmission and reflection e#ects (as appear in photorefractive crystals) are negligible at intensities
and materials typical to imaging applications.
Figure 1: A telecentric imaging system [59]. An aperture D is situated at distance F (the focal
length) in front of the lens. An object point at distance u is at best focus if the sensor is at v. If the
sensor is at v̄, the image of the point is a blurred spot parameterized by its effective diameter d.
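To make the image-formation model concrete, the following minimal sketch (Python with NumPy/SciPy; the function name and the choice of Gaussian defocus kernels are our assumptions, not prescribed by the text) synthesizes the two focused slices of Eq. (1) from two given layer images:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def focused_slices(f1, f2, sigma_1b, sigma_2a):
        # Synthesize the two focused slices of Eq. (1).
        # sigma_2a: STD of h_2a, blurring layer f2 in slice g_a (where f1 is in focus).
        # sigma_1b: STD of h_1b, blurring layer f1 in slice g_b (where f2 is in focus).
        g_a = f1 + gaussian_filter(f2, sigma_2a)   # layer f1 in focus
        g_b = gaussian_filter(f1, sigma_1b) + f2   # layer f2 in focus
        return g_a, g_b
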
In the frequency domain Eqs. (1) take the form

    G_a = F_1 + H_2a F_2 ,    G_b = H_1b F_1 + F_2 .    (2)

Assuming that the kernels are symmetric, Im H_1b = Im H_2a = 0, so the real components
of G_a and G_b are respectively

    Re G_a = Re F_1 + H_2a Re F_2 ,    Re G_b = H_1b Re F_1 + Re F_2 ,    (3)

with similar expressions for the imaginary components of the images. These equations can be
visualized as two pairs of straight lines (see Fig. 2). The solution, which corresponds to the
line intersection, uniquely exists for H_2a H_1b ≠ 1. Since the imaging system cannot amplify any
frequency component (H_1b, H_2a ≤ 1), a unique intersection exists unless H_1b = H_2a = 1.
To gain insight, consider a telecentric system (the generalization is straightforward). In
this case, H_1b = H_2a ≡ H, and the slopes of the lines in Fig. 2 (representing the constraints)
are reciprocal to each other. As H → 1 the slopes of the two lines become similar, hence the
solution is more sensitive to noise in G_a and G_b.

Figure 2: Visualization of the constraints on reconstruction from focused slices and the convergence of
a suggested algorithm. For each frequency, the relations (3) between the real components of G_a, G_b, F_1
and F_2 take the form of two straight lines. The visualization of the imaginary parts is similar.

As the frequency decreases, H → 1, hence at low frequencies the recovery is ill conditioned.
Due to energy conservation, the average gray level (DC) is not affected by defocusing. Thus, at DC, H = 1. In the noiseless case the constraints
on the DC component coincide into a single line, implying an infinite number of solutions.
In the presence of noise the lines become parallel and there is no solution. The recovery of
the DC component is thus ill posed. This phenomenon is also seen in the three dimensional
frequency domain. The image space is band limited by a missing cone of frequencies [9, 29],
whose axis is in the axial frequency direction and whose apex is at the origin. Recovery of the
average intensity in each individual layer is impossible since the information about inter-layer
variations of the average transversal intensity is in the missing cone [46]. A similar conclusion
may be derived from observing the three dimensional frequency domain support that relies on
diffraction limited optics [54].
In order to obtain another point of view on these difficulties, consider the naive inverse
filtering approach to the problem given by Eq. (2). In the transversal spatial frequency domain,
the reconstruction is

    F_1 = B (G_a - H_2a G_b) ,    F_2 = B (G_b - H_1b G_a) ,    (4)

where

    B = 1 / (1 - H_1b H_2a) .    (5)

As H_1b H_2a → 1 (towards the low frequencies), B → ∞, hence the solution is unstable. Note, however, that the problem is well
posed and stable at the high frequencies. Since H is a low-pass filter, B ≈ 1 at high frequencies.
As seen in Eqs. (4), the high frequency contents of the slice in which a layer is in focus are
retained, while those of the other slice are diminished. Even if high frequency noise is added
during image acquisition, it is amplified only slightly in the reconstruction. This behavior is
quite opposite to typical reconstruction problems, in which instability and noise amplification
appear in the high frequencies.
Iterative solutions have been suggested to similar inversion problems in microscopy [2, 3, 15]
and in other fields. A similar approach was used in [5] to generate special effects on occluding
layers, when the inverse filtering needed special care in the low frequency components. The
method that we consider can be visualized as progression along vectors in alternating directions
parallel to the axes in Fig. 2. It converges to the solution from any initial hypothesis for |H| < 1.
As |H| decreases (roughly speaking, as the frequency increases), the constraint lines approach
orthogonality, thus convergence is faster. A single iteration is described in Fig. 3. This is
a version of the Van-Cittert restoration algorithm [24].

Figure 3: A step in the iterative process. Initial hypotheses for the layers serve as input images for a
processing step, based on Eq. (1). The new estimates are fed back as input for the next iteration.

With slices g_a and g_b as the initial hypotheses for f_1 and f_2, respectively, at the l'th iteration

    F̂_1^(l) = B(m) [G_a - H_2a G_b] ,    m = (l + 1)/2 ,    (6)

for odd l, where

    B(m) = 1 + (H_1b H_2a) + (H_1b H_2a)^2 + ... + (H_1b H_2a)^(m-1) ;    (7)

the analogous expression holds for F̂_2 with the roles of the slices interchanged.
B(m) has a major effect on the amplification of noise added to the raw images g_a and g_b (with
the noise of the unfocused slice attenuated by H). Again, we see that at high frequencies the
amplification of additive noise approaches 1. As the frequency decreases, noise amplification
increases. The additive DC error increases linearly with m.
Let us define the basic solution as the result of using m = 1. Eq. (6) indicates that we can
do the recovery directly, without iterations, by calculating the kernel (filter) beforehand. m is
a parameter that controls how close the filter
B(m) is to the inverse filter, and is analogous to
regularization parameters in typical inversion methods.
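The direct recovery can be sketched as follows. This is only an illustration under assumptions that are ours rather than the paper's: Gaussian defocus kernels, periodic (FFT) boundary handling instead of the mirror extrapolation used in the experiments below, and hypothetical function names. Setting m = 1 gives the basic solution; a larger m approaches the inverse filter of Eq. (5).

    import numpy as np

    def gaussian_otf(shape, sigma):
        # Frequency response of a Gaussian blur kernel with spatial STD sigma (equals 1 at DC).
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        return np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fy ** 2 + fx ** 2))

    def recover_layers(g_a, g_b, sigma_1b, sigma_2a, m=1):
        # Recovery F1_hat = B(m) [G_a - H_2a G_b] (and symmetrically for F2_hat), Eqs. (6)-(7).
        H_1b = gaussian_otf(g_a.shape, sigma_1b)
        H_2a = gaussian_otf(g_a.shape, sigma_2a)
        G_a, G_b = np.fft.fft2(g_a), np.fft.fft2(g_b)
        HH = H_1b * H_2a
        B_m = sum(HH ** k for k in range(m))       # truncated inverse filter, Eq. (7)
        f1 = np.real(np.fft.ifft2(B_m * (G_a - H_2a * G_b)))
        f2 = np.real(np.fft.ifft2(B_m * (G_b - H_1b * G_a)))
        return f1, f2
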
In the spatial domain, Eq. (7) turns into a convolution kernel

    b_m = δ + (h_1b * h_2a) + (h_1b * h_2a * h_1b * h_2a) + ... ,    (8)

where δ is the identity (delta) kernel, * denotes convolution, and the sum contains the pair
h_1b * h_2a convolved with itself once, twice, and so on, up to (m-1) times.
The spatial support of b_m is approximately 2dm pixels wide, where d is the blur diameter
(assuming for a moment that both kernels have a similar support). Here, the finite support
of the image has to be taken into account. The larger m is, the larger the disturbing effect
of the image boundaries. The unknown surroundings affect larger portions of the image. It is
therefore preferable to limit m even in the absence of noise.
This difficulty seems to indicate a basic limit to the ability to recover the layers. If the
blur diameter d is very large, only a small m can be used, and the initial layer estimation
achieved only by focusing cannot be improved much. In this case the initial slices already show
a good separation of the individual layers, since in each of the two slices, one layer is very
blurred and thus hardly disturbs the other one. On the other hand, if d is small, then in each
slice one layer is focused, while the other is nearly focused - creating confusing images. But
then, we are able to enhance the recovery using a larger m with only a small effect of the image
boundaries. Using a larger m leads, however, to noise amplification and to greater sensitivity
to errors in the assumed PSF (see subsection 2.4).
Example
A simulated scene consists of the image of Lena, as the close object, seen reflected through a
window out of which Mt. Shuksan 3 is seen. The original layers appear in the top of Fig. 4.
While any of the layers is focused, the other is blurred by a Gaussian kernel with standard
deviation (STD) of 2.5 pixels. The slices in which each of the layers is focused are shown in the
second row of Fig. 4 (all the images in this work are presented contrast-stretched).
During reconstruction, "mirror" [4] extrapolation was used for the surroundings of the image
in order to reduce the effect of the boundaries. The basic solution removes the crosstalk
between the images, but it lacks contrast due to the attenuation of the low frequencies. Using
a larger m (equivalent to 13 iterations) improves the balance between the low frequency
components and the high ones. With larger m's the results are similar.
3 Courtesy of Bonnie Lorimer
Figure 4: Simulation results (rows: originals, focused slices, basic solution, enhanced solution; columns: close and far layers). In the focused slices one of the original layers is focused while the
other is defocus blurred. The basic solution with the correct kernel removes the crosstalk, but the low
frequency content of the images is too low. Approximating the inverse filter with 6 terms
amplifies the low frequency components.
2.2 Similarity to motion-based separation
In separating transparent layers, the fact that the high frequencies can be easily recovered, while
the low ones are noisy or lost, is not unique to this approach. It also appears in results obtained
using motion. Note that, like focus changes, motion leaves the DC component unvaried. In [6],
the results of motion-based recovery of semi-reflected scenes are clearly highpass filtered versions
of the superimposing components. An algorithm presented in [22] was demonstrated in a setup
similar to [6]. In [22], one of the objects is "dominant". It can easily be seen there that even
as the dominant object is faded out in the recovery, considerable low-frequency contamination
remains.
Shizawa and Mase [49] have shown that, in regions of translational motion, the spatiotemporal
energy of each layer resides in a plane, which passes through the origin in the spatiotemporal
frequency domain. This idea was used [12, 13] to generate "nulling" filters to eliminate the contribution
of layers, thus isolating a single one. However, any two of these frequency planes have
a common frequency "line" passing through the origin (the DC), whose components are thus
generally inseparable.
These similarities are examples of the unification of triangulation and DOF approaches
discussed in [38]. In general, Ref. [38] shows that the depth from focus or defocus approaches
are manifestations of the geometric triangulation principle. For example, it was shown that
for the same system dimensions, the depth sensitivity of stereo, motion blur and defocus blur
systems are basically the same. Along these lines, the similarity of the inherent instabilities of
separation based on motion and focus is not surprising.
2.3 Using a focused slice and a pinhole image
Another approach to layer separation is based on using as input a pinhole image and a focused
slice, rather than two focused slices. Acquiring one image via a very small aperture ("pinhole
camera") leads to a simpler algorithm, since just a single slice with one of the layers in focus is
needed. The advantage is that the two images are taken without changing the axial positions
of the system components, hence no geometric distortions arise. Acquisition of such images is
practically impossible in microscopy (due to the significant diffraction effects associated with
small objects) but is possible in systems inspecting macroscopic objects.
The "pinhole" image is described by

    g_p = (f_1 + f_2) / a ,    (9)

where 1/a is the attenuation of the intensity due to contraction of the aperture. This image
is used in conjunction with one of the focused slices of Eq. (1), for example g_a. The inverse
filtering solution is

    F_2 = S (a G_p - G_a) ,    F_1 = G_a - H_2a F_2 ,    (10)

where

    S = 1 / (1 - H_2a) .    (11)

As in subsection 2.1, S can be approximated by

    S(m) = 1 + H_2a + H_2a^2 + ... + H_2a^(m-1) .    (12)
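A corresponding sketch for recovery from a focused slice and a "pinhole" image, under the same caveats as before (Gaussian kernel, FFT boundary handling, hypothetical names), reusing gaussian_otf from the previous sketch and assuming the attenuation a is known:

    import numpy as np

    def recover_from_pinhole(g_a, g_p, a, sigma_2a, m=1):
        # g_a: slice with layer f1 in focus; g_p: "pinhole" image; Eqs. (9)-(12).
        H_2a = gaussian_otf(g_a.shape, sigma_2a)   # gaussian_otf as defined in the earlier sketch
        G_a, G_p = np.fft.fft2(g_a), np.fft.fft2(g_p)
        S_m = sum(H_2a ** k for k in range(m))     # m-term approximation of 1/(1 - H_2a)
        F2 = S_m * (a * G_p - G_a)                 # defocused layer
        F1 = G_a - H_2a * F2                       # focused layer
        return np.real(np.fft.ifft2(F1)), np.real(np.fft.ifft2(F2))
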
2.4 Effect of error in the PSF
The algorithm suggested in subsection 2.1 computes F̂_1 = B(m)[G_a - G_b H_2a]. We normally
assume (Eq. (2)) that G_a = F_1 + H_2a F_2 and G_b = H_1b F_1 + F_2. If the assumption holds,

    F̂_1 = B(m) (1 - H_1b H_2a) F_1 = [1 - (H_1b H_2a)^m] F_1 .    (13)
Note that, regardless of the precise form of the PSFs, had the imaging PSFs and the PSFs used
in the recovery been equal, the reconstruction would have converged to F_1 as m → ∞ when
|H_1b|, |H_2a| < 1. In practice, the imaging PSFs are slightly different, i.e., G_a = F_1 + H̃_2a F_2
and G_b = H̃_1b F_1 + F_2, where H̃_1b and H̃_2a
are some functions of the spatial frequency. This difference may be due to inaccurate
prior modeling of the imaging PSFs or due to errors in depth estimation. The reconstruction
process leads to

    F̂_1 = B(m) (1 - H_2a H̃_1b) F_1 + B(m) E F_2 ,    where E ≡ H̃_2a - H_2a .    (14)

A similar relation is obtained for the other layer.
An error in the PSF leads to contamination of the recovered layer by its complementary. The
larger B(m) is, the stronger is the amplification of this disturbance. Note that B(m) monotonically
increases with m, within the support of the blur transfer function, if H_1b H_2a > 0, as is the case
when the recovery PSFs are Gaussians. Note that usually in the low frequencies (which is the
regime of the crosstalk) H_1b, H_2a > 0. Thus, we may expect that the best sense of separation
will be achieved using a small m; actually, one iteration should provide the least contamination.
This is so although the uncontaminated solution obeys F̂_1 → F_1 as m increases. In other words,
decreasing the reconstruction error does not necessarily lead to less crosstalk.
Both H and H̃ (of any layer) are low-pass filters that conserve the average value of the
images. Hence, E → 0 at the very low and at the very high frequencies, i.e., E is a bandpass
filter. However, B(m) amplifies the low frequencies. At the low frequencies, their combined
effect may have a finite or infinite limit as m → ∞, depending on the PSF models used.
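To illustrate the band-pass nature of the PSF error and its growth with m, the following sketch (our simplification: both kernels Gaussian and equal; gaussian_otf as defined in the earlier sketch) evaluates E and B(m)E on the frequency grid:

    import numpy as np

    def psf_error_terms(shape, sigma_true, sigma_used, m):
        # E vanishes at DC and at high frequencies (band-pass), while B(m) grows with m.
        H_used = gaussian_otf(shape, sigma_used)
        H_true = gaussian_otf(shape, sigma_true)
        E = H_true - H_used
        B_m = sum((H_used * H_used) ** k for k in range(m))   # assuming h_1b = h_2a here
        return E, B_m * E       # the second term drives the crosstalk contamination
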
Continuing with the example shown in Fig. 4, where the imaging PSF had an STD of
2.5 pixels, the effects of using a wrong PSF in the reconstruction are demonstrated in
Fig. 5. When the PSF used in the reconstruction has STD of 1.25 pixels, negative traces
remain (i.e., brighter areas in one image appear as darker areas in the other). When the PSF
used in the reconstruction has STD of 5 pixels, positive traces remain (i.e., brighter areas in
one image appear brighter in the other). The contamination is slight in the basic solution
but is more noticeable with larger m's, that is, when B(m) approaches the inverse filter B. So, the separation
seems worse, even though each of the images has a better balance (due to the enhancement of
the low frequencies).
Figure 5: Simulated images when using the wrong PSF in the reconstruction (panels: close and far layers, recovered with kernel STD r = 1.25 and r = 5 pixels). The original blur kernel
had a STD of r = 2.5 pixels. Crosstalk between the recovered layers is seen clearly if the STD of the
kernels used is 1.25 or 5 pixels. The contamination increases with m.
We can perform the same analysis for the method described in subsection 2.3. Now there
is only one filter involved, H_2a, since the layer f_1 is focused. Suppose that, in addition to using
H_2a in the reconstruction rather than the true imaging transfer function H̃_2a, we inaccurately
use the scalar a rather than the true value ā used in the imaging process. Let e denote the
relative error in this parameter, e = (a - ā)/ā. We obtain that F̂_1 and F̂_2 deviate from
F̂_1^0 and F̂_2^0, which are the results had the imaging defocus kernel been the same as
the one used in the reconstruction and had a = ā. Note the importance of the estimation of
a: for e = 0, F̂_2 (the defocused layer) is recovered uncontaminated by F_1. However, even in
this case F̂_1 (the focused layer) will have a contamination of F_2, amplified by the mismatch
between H_2a and H̃_2a.
3 Seeking the blur kernels
The recovery methods outlined in Section 2 are based on the use of known, or estimated blur
kernels. If the imaging system is of our design, or if it is calibrated, and in addition we have
depth estimates of the layers obtained during the focusing process (e.g., as will be described
in Section 5), we may know the kernels a-priori. Generally, however, the kernels are unknown.
Even a-priori knowledge is sometimes inaccurate. We thus wish to achieve self-calibration, i.e.,
to estimate the kernels out of the images themselves. This will enable blind separation and
restoration of the layers.
To do that, we need a criterion for layer separation. Note that the method for estimating
the blur kernels based on minimizing the fitting error in different layers as in [5] may fail in this
case as the layers are transparent and there is no unique blur kernel at each point. Moreover,
the fitting error is not a criterion for separation. Assume that the statistical dependence of
the real and virtual layers is small (even zero). This is reasonable since they usually originate
from unrelated scenes. The Kullback-Leibler distance measures how far the images are from
statistical independence, indicating their mutual information [11]. Let the probabilities for
certain values of f̂_1 and f̂_2 be P(f̂_1) and P(f̂_2), respectively. In practice these probabilities are
estimated by the histograms of the recovered images. The joint probability is P(f̂_1, f̂_2), which
is in practice estimated by the joint histogram of the images, that is, the relative number of
pixels in which f̂_1 has a certain value and f̂_2 has a certain value at corresponding pixels.
The mutual information is then

    I(f̂_1; f̂_2) = Σ P(f̂_1, f̂_2) log [ P(f̂_1, f̂_2) / ( P(f̂_1) P(f̂_2) ) ] ,    (18)

where the sum is over the values of f̂_1 and f̂_2.
In this approach we assume that if the layers are correctly separated, each of their estimates
contains minimum information about the other. Mutual information was suggested and used as
a criterion for alignment in [56, 57], where its maximum was sought. We use this measure to look
for the highest discrepancy between images, thus minimizing it. The distance (Eq. 18) depends
on the quantization of f̂_1 and f̂_2, and on their dynamic range, which in turn depends on the
brightness of the individual layers f_1 and f_2. To decrease the dependence on these parameters,
we performed two normalizations. First, each estimated layer was contrast-stretched to a
standard dynamic range. Then, I was normalized by the mean entropy of the estimated layers,
when treated as individual images. The self information [11] (entropy) of f̂_1 is

    ℋ(f̂_1) = - Σ P(f̂_1) log P(f̂_1) ,    (19)

and the expression for f̂_2 is similar. The measure we used is

    I_n = I(f̂_1; f̂_2) / { [ ℋ(f̂_1) + ℋ(f̂_2) ] / 2 } ,    (20)

indicating the ratio of mutual information to the self information of a layer.
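A minimal sketch of this normalized measure, estimated from image histograms as described above; the bin count and the contrast-stretching details are our choices, since the text does not fix them numerically:

    import numpy as np

    def normalized_mutual_information(f1, f2, bins=64):
        # I_n of Eqs. (18)-(20): mutual information of the two layer estimates,
        # normalized by their mean entropy; each layer is first contrast-stretched.
        def stretch(x):
            x = x - x.min()
            return x / max(x.max(), 1e-12)
        a, b = stretch(f1).ravel(), stretch(f2).ravel()
        joint, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, 1], [0, 1]])
        p12 = joint / joint.sum()
        p1, p2 = p12.sum(axis=1), p12.sum(axis=0)
        nz = p12 > 0
        mi = np.sum(p12[nz] * np.log2(p12[nz] / (p1[:, None] * p2[None, :])[nz]))
        entropy = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return mi / (0.5 * (entropy(p1) + entropy(p2)))
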
The recovered layers depend on the kernels used. Therefore, the problem of seeking the
kernels can be stated as a minimization problem:

    (ĥ_1b, ĥ_2a) = arg min I_n( f̂_1, f̂_2 ) .    (21)
According to subsection 2.4, errors in the kernels lead to crosstalk (contamination) of the
estimated layers, which is expected to increase their mutual information.
There are generally many degrees of freedom in the form of the kernels. On the other hand,
the kernels are constrained: they are non-negative, they conserve energy etc. To simplify the
problem, the kernels can be assumed to be Gaussians. Then, the kernels are parameterized
only by their standard deviations (proportional to the blur radii). This limitation may lead to
a solution that is suboptimal but easier to obtain.
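Under this Gaussian simplification the search of Eq. (21) can be run as a plain sweep over the assumed STD, reusing recover_layers and normalized_mutual_information from the earlier sketches (the grid and function names are ours; as noted in the footnote below, an efficient search method would be preferable in practice):

    import numpy as np

    def estimate_blur_std(g_a, g_b, candidate_stds, m=1):
        # Grid search for a single common Gaussian STD minimizing I_n of the recovery.
        best_std, best_In = None, np.inf
        for sigma in candidate_stds:
            f1, f2 = recover_layers(g_a, g_b, sigma, sigma, m=m)
            In = normalized_mutual_information(f1, f2)
            if In < best_In:
                best_std, best_In = sigma, In
        return best_std, best_In

    # e.g. estimate_blur_std(g_a, g_b, np.arange(0.5, 6.0, 0.25))
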
Another possible criterion for separation is decorrelation. Decorrelation was a necessary
condition for the recovery of semi-reflected layers by independent components analysis in [18],
and by polarization analysis in [42, 43]. Note that requiring decorrelation between the estimated
layers is based on the assumption that the original layers are decorrelated: that assumption is
usually only an approximation.
To illustrate the use of these criteria, we search for the optimal blur kernels to separate the
images shown in the second row of Fig. 4. Here we simplified the calculations by restricting
both kernels to be isotropic Gaussians of the same STD, as these were indeed the kernels used in
the synthesis. Hence, the correlation and mutual information are functions of a single variable 4 .
As seen in Fig. 6, using the correct kernel (with STD of 2.5 pixels) yields decorrelated basic
solutions (m = 1), with minimal mutual information (I_n is plotted). The positive correlation
for larger values of assumed STD, and the negative correlation for smaller values, is consistent
with the visual appearance of positive and negative traces in Fig. 5. Observe that, as expected
from the theory, in Fig. 5 the crosstalk was stronger for larger m. Indeed, in Fig. 6 the absolute
correlation and mutual information are greater for the larger value of m when the wrong
kernel is used.
In a different simulation, the focused slices corresponding to the original layers shown in the
top of Fig. 4 were created using an exponential imaging kernel rather than a Gaussian, but the
4 The STD was sampled on a grid in our demonstrations. A practical implementation will preferably use
efficient search algorithms [28] to optimize the mutual information [56, 57].
Figure 6: Correlation and normalized mutual information as a function of the assumed kernel STD. [Solid] At the assumed kernel STD of 2.5 pixels the basic solutions (m = 1) are decorrelated and
have minimal mutual information (shown normalized), in consistency with the true STD. [Dashed]
The absolute correlation and the mutual information are larger for a large value of m.
STD was still 2.5. The recovery was done with Gaussian kernels. The correlation and mutual
information curves (as a function of the assumed STD) were similar to those seen in Fig. 6.
The minimal mutual information was however at STD of r = 2.2 pixels. There was no visible
crosstalk in the resulting images.
The blurring along the sensor raster rows may be different than the blurring along the
columns. This is because blurring is caused not only by the optical processes, but also from
interpixel crosstalk in the sensors, and the raster reading process in the CCD. Moreover, the
inter-pixel spacing along the sensor rows is generally different than along the columns, thus
even the optical blur may affect them differently. We assigned a different blur "radius" to each
axis: r_row and r_column. When two slices are used, as in subsection 2.1, there are two kernels,
with a total of four parameters. Defining the parameter vector p ≡ (r_row^1b, r_column^1b, r_row^2a, r_column^2a),
the estimated vector p̂ is

    p̂ = arg min_p I_n( f̂_1(p), f̂_2(p) ) .    (22)
When a single focused slice is used in conjunction with a "pinhole" image, as described
in subsection 2.3, the problem is much simpler. There are three parameters to determine:
r_row^2a, r_column^2a, and a. The parameter a is easier to obtain as it indicates the ratio of the light
energy in the wide-aperture image relative to the pinhole image. Ideally, it is the square of the
reciprocal of the ratio of the f-numbers of the camera, in the two states. If, however, the optical
system is not calibrated, or if there is automatic gain control in the sensor, this ratio is not
an adequate estimator of a. a can then be estimated by the ratio of the average values of the
images, for example. Such an approximation may serve as a starting point for better estimates.
When using the decorrelation criterion in the multi-parameter case, there may be numerous
parameter combinations that lead to decorrelation, but will not all lead to the minimum mutual
information, or to good separation. If p is N-dimensional, the zero-correlation constraint defines
a dimensional hypersurface in the parameter space. It is possible to use this criterion
to obtain initial estimates of p, and search for minimal mutual information within a lower
dimensional manifold. For example, for each combination of r row and r column , a that leads to
decorrelation can be found (near the rough estimate based on intensity ratios). Then the search
for minimum mutual information can be limited to a subspace of only two parameters.
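A sketch of this two-stage strategy for the pinhole case, simplified to a single isotropic radius per kernel (our simplification) and reusing the helpers defined in the earlier sketches:

    import numpy as np

    def search_with_decorrelation(g_a, g_p, radii, a_grid):
        # Stage 1: for each blur radius, pick the a whose basic solution is closest to
        # decorrelation. Stage 2: among those, keep the pair with minimal I_n.
        def correlation(x, y):
            return np.corrcoef(x.ravel(), y.ravel())[0, 1]
        best = (None, None, np.inf)
        for r in radii:
            a_dec = min(a_grid, key=lambda a: abs(correlation(
                *recover_from_pinhole(g_a, g_p, a, r, m=1))))
            In = normalized_mutual_information(*recover_from_pinhole(g_a, g_p, a_dec, r, m=1))
            if In < best[2]:
                best = (r, a_dec, In)
        return best   # (radius, a, I_n)
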
4 Recovery experiments
4.1 Recovery from two focused slices
A print of the "Portrait of Doctor Gachet" (by van-Gogh) was positioned closely behind a glass
window. The window partly reflected a more distant picture, a part of a print of the "Parasol"
(by Goya). The f# was 5.6. The two focused slices 5 are shown at the top of Fig. 7. The cross
correlation between the raw (focused) images is 0.98. The normalized mutual information is
I_n ≈ 0.5, indicating that significant separation is achieved by the focusing process, but that
substantial crosstalk remains.
The optimal parameter vector p̂ in the sense of minimum mutual information is [1.9, 1.5,
1.5, 1.9] pixels, where r_1b corresponds to the blur of the close layer, and r_2a corresponds to the
blur of the far layer. With these parameters, the basic solution shown at the middle
row of Fig. 7 has I_n ≈ 0.006 (two orders of magnitude better than the raw images). Using
a larger m gives a better balance between the low and high frequency components, but I_n increased
to about 0.02. We believe that this is due to the error in the PSF model, as discussed above.
In another example, a print of the "Portrait of Armand Roulin" (by van-Gogh) was positioned
closely behind a glass window. The window partly reflected a more distant picture, a
print of a part of the "Miracle of San Antonio" (by Goya). As seen in Fig. 8, the "Portrait"
is hardly visible in the raw images. The cross correlation between the raw (focused) images
is 0.99, and the normalized mutual information is I_n ≈ 0.6. The optimal parameter vector p̂
5 The system was not telecentric, so there was slight magnification with change of focus settings. This was
compensated for manually by resizing one of the images.
Figure 7: [Top] The slices in which either of the transparent layers is focused (close layer, far layer). [Middle row] The basic
solution (m = 1). [Bottom row] Recovery with a larger m.
Figure 8: [Top] The slices in which either of the transparent layers is focused (close layer, far layer). [Middle row]
The basic solution. [Bottom row] Recovery with a larger m.
here is [1.7, 2.4, 1.9, 2.1] pixels. With these parameters I_n ≈ 0.004 at the basic solution, rising for larger m.
to about
In a third example, the scene consisted of a distant "vase" picture that was partly-reflected
from the glass-cover of a closer "crab" picture. The imaging system was telecentric [33, 59],
so no magnification corrections were needed. The focused slices and the recovered layers are
shown in Fig. 9. For the focused slices I_n ≈ 0.4, and the cross correlation is 0.95. The
optimal parameter vector p̂ in the sense of minimum mutual information is [4, 4, 11, 1] pixels.
The basic recovery, using B(1), is shown in the bottom of Fig. 9. The crosstalk is significantly
reduced. The mutual information I_n and correlation decreased dramatically to 0.009 and 0.01,
respectively.

Figure 9: [Top] The slices in which either of the transparent layers is focused (close layer, far layer). [Bottom] The basic
solution for the recovery of the "crab" (left) and "vase" (right) layers.
4.2 Recovery from a focused slice and a pinhole image
The scene consisted of a print of the "Portrait of Armand Roulin" as the close layer and a
print of a part of the "Miracle of San Antonio" as the far layer. The imaging system was not
telecentric, leading to magnification changes during focusing. Thus, in such a system it may be
preferable to use a fixed focus setting, and change the aperture between image acquisitions. The
"pinhole" image was acquired using the state corresponding to the f# = 11 mark on the lens,
while the wide aperture image was acquired using the state corresponding to the f# = 4 mark.

Figure 10: [Top left] The slice in which the far layer is focused, when viewed with the wide aperture. [Top right]
The "pinhole" image. [Middle row] The basic recovery. [Bottom row] Recovery with a larger m. (Columns: focused and defocused layers.)
We stress that we have not calibrated the lens, so these marks do not necessarily correspond to
the true values. The slice in which the far layer is focused (using the wide aperture) is shown
in the top left of Fig. 10. In the "pinhole" image (top right), the presence of the "Portrait"
layer is more noticeable.
According to the ratio of the f#'s, the wide aperture image should have been brighter than
the "pinhole" image by (11/4)^2 ≈ 7.6. However, the ratio between the mean intensity of the
wide aperture image to that of the pinhole image was 4.17, not 7.6. This could be due to
poor calibration of the lens by its manufacturer, or because of some automatic gain control in
the sensor. We added a to the set of parameters to be searched in the optimization process.
In order to get additional cues for a, we calculated ratios of other statistical measures: the
ratios of the STD, median, and mean absolute deviation were 4.07, 4.35 and 4.22, respectively.
We thus let a assume values between 4.07 and 4.95. In this example we demonstrate the
possibility of using decorrelation to limit the minimum mutual information search. First, for
each hypothesized pair of blur diameters, the parameter a that led to decorrelation of the basic
solution was sought. Then, the mutual information was calculated over the parameters that
cause decorrelation. The blur diameters (in pixels) that led to minimal mutual information
were thus determined, with the best parameter a being 4.28. The reconstruction results
are shown in the middle row of Fig. 10. Their mutual information (normalized) is 0.004.
Using a larger m with these parameters increased the mutual information, so we looked
for a better estimate, minimizing the mutual information after the application of B(m). For
the larger m the resulting parameters were different, with r_row = r_column and a = 4.24. The
recovered layers are shown in the bottom row of Fig. 10. Their mutual information (normalized)
is 0.04. As discussed before, the increase is probably due to inaccurate modeling of the blur
kernel.
5 Obtaining the focused slices
5.1 Using a standard focusing technique
We have so far assumed that the focused slices are known. We now consider their acquisition
using focusing as in Depth from Focus (DFF) algorithms. Depth is sampled by changing the
focus settings, particularly the sensor plane. According to Refs. [1, 26, 38, 39], the sampling
should be at depth of field intervals, for which d ≈ Δx, where Δx is the inter-pixel period
(similar to stereo [38]). An imaging system telecentric on the image side [33, 59] is a preferred
configuration, since it ensures constant magnification as the sensor is put out of focus. For such
a system it is easy to show that the geometrical-optics blur-kernel diameter is

    d = D Δv / F ,

where D is the aperture width, F is the focal length (see Fig. 1), and Δv is the distance of the
sensor plane from the plane of best focus. The axial sampling period is therefore Δv ≈ F Δx / D.
The sampling period requirement can also be analyzed in the frequency domain, as in [54].
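For instance, the sensor positions can be laid out directly from this relation; the numbers in the usage comment are placeholders rather than values from the experiments:

    import numpy as np

    def sensor_sampling_positions(v_near, v_far, F, D, dx):
        # Telecentric blur diameter is d = D * dv / F, so requiring d of about one
        # inter-pixel period dx gives an axial sampling period dv = F * dx / D.
        dv = F * dx / D
        return np.arange(v_near, v_far + dv, dv)

    # e.g. F = 50e-3, D = F / 5.6, dx = 10e-6  ->  dv = 5.6e-5 m (56 micrometers)
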
Focus calculations are applied to the image slices acquired. The basic requirement from the
focus criterion is that it will reach a maximum when the slice is in focus. Most criteria suggested
in the literature [23, 25, 32, 34, 52, 55, 62] are sensitive to two dimensional variations in the
slice 6. Local focus operators yield "slices of local focus-measure", FOCUS(x, y, v̄), where v̄ is
the axial position of the sensor (see Fig. 1). If we want to find the depth at a certain region
(patch) [31], and the scene is composed of a single layer, we can average FOCUS(x, y, v̄) over
the region, to obtain FOCUS(v̄), from which a single valued depth can be estimated. This
approach is inadequate in the presence of multiple layers. Ideally, each of them alone would
lead to a main peak 7 in FOCUS(v̄). But, due to mutual interference, the peaks can move
from their original positions, or even merge into a single peak in some "average" position, thus
spoiling focus detection.
This phenomenon can be observed in experimental results. The scene, the focused slices of
which are shown in Fig. 9, had the "crab" and the "vase" objects at distances of 2.8m and 5.3m
from the lens, respectively. The details of the experimental imaging system are described in
[40]. Depth variations within these objects were negligible with respect to the depth of field.
6 It is interesting to note that a mathematical proof exists [21] for the validity of a focus criterion that is
completely based on local calculations which do not depend on transversal neighbors: As a function of axial
position, the intensity at each transversal point has an extremum at the plane of best focus.
7 There are secondary maxima, though, due to the unmonotonicity of the frequency response of the blur
operator, and due to edge bleeding. However, the misleading maxima are usually much smaller than the
maximum associated with the focusing on feature-dense regions, as edges.
Extension of the STD of the PSF by about 0.5 pixels was accomplished by moving the sensor
array 0.338mm from the plane of best focus 8. This extended the effective width of the kernel by
about 1 pixel (Δd ≈ 1 pixel), and was also consistent with our subjective sensation of DOF. The
results of the focus search, shown by the dashed-dotted line in Fig. 11, indicate that the focus
measure failed to detect the layers, as it yielded a single (merged) peak, somewhere between
the focused states of the individual layers. This demonstrates the confusion of conventional
autofocusing devices when applied to transparent scenes.
5.2 A voting scheme
Towards solving the merging problem, observe that the layers are generally unrelated and that
edges are usually sparse. Thus, the positions of brightness edges in the two layers will only
sporadically coincide. Since edges (and other feature-dense regions) are dominant contributors
to the focus criterion, it would be wise not to mix them by brute averaging of the local focus
measurements over the entire region. If point (x, y) is on an edge in one layer, but on a smooth
region in the other layer, then the peak in FOCUS(x, y, v̄) corresponding to the edge will not
be greatly affected by the contribution of the other layer.
The following approach is proposed. For each pixel (x, y) in the slices, the focus measure
FOCUS(x, y, v̄) is analyzed as a function of v̄, to find its local maxima. The result is expressed
as a binary vector of local maxima positions. Then, a vote table analogous to a histogram of
maxima locations over all pixels is formed by summing all the "hits" in each slice-index. Each
vote is given a weight that depends on the corresponding value of FOCUS(x, y, v̄), to enhance
the contribution of high focus-measure values, such as those arising from edges, while reducing
the random contribution of featureless areas. The results of the voting method are shown
as a solid line in Fig. 11, and demonstrate its success in creating significant, separate peaks
8 Near the plane of best focus, the measured rate of increase of the STD as a function of defocus was much
lower than expected from geometric considerations. We believe that this is due to noticeable diffraction and
spherical aberration e#ects in that regime.
Figure 11: Experimental results. [Dashed-dotted line]: The conventional focus measure as a function
of the slice index. It mistakenly detects a single focused state at the 6th slice. [Solid line]: The
locations histogram of detected local maxima of the focus measure (the same scene). The highest
numbers of votes (positions of local maxima) are correctly accumulated at the 4th and 7th slices - the
true focused slices.
corresponding to the focused layers. Additional details can be found in [40]. The estimated
depths were correct, within the uncertainty imposed by the depth of field of the system. Optimal
design and rigorous performance evaluation of DFF methods in the presence of transparencies
remains an open research problem.
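A sketch of the weighted voting itself, assuming the local focus measure has already been evaluated into a stack FOCUS(x, y, slice); the choice of the local focus operator and of the peak-detection details is left open, as in the text:

    import numpy as np

    def weighted_vote_histogram(focus_stack):
        # focus_stack: array of shape (num_slices, H, W) holding FOCUS(x, y, slice).
        # Returns, per slice index, the sum of focus-measure values at pixels where
        # that slice is a local maximum along the axial direction.
        f = np.asarray(focus_stack, dtype=float)
        is_max = np.zeros(f.shape, dtype=bool)
        is_max[1:-1] = (f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])
        votes = np.where(is_max, f, 0.0)
        return votes.reshape(f.shape[0], -1).sum(axis=1)

    # The dominant, well-separated peaks of the returned histogram indicate the
    # slice indices at which the individual layers are in focus.
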
6 Conclusions
This paper presents an approach based on focusing to separate transparent layers, such as those that appear
in semi-reflected scenes. This approach is more stable with respect to perturbations [38] and
occlusions than separation methods that rely on stereo or motion. We also presented a method
for self calibration of the defocus blur kernels given the raw images. It is based on minimizing
the mutual information of the recovered layers. Note that defocus blur, motion blur, and
stereo disparity have similar origins [38] and differ mainly in the scale and shape of the kernels.
Therefore, the method described here could possibly be adapted to finding the motion PSFs or
stereo disparities in transparent scenes.
In some cases the methods presented here are also applicable to multiplicative layers [49]: If
the opacity variations within the close layer are small (a "weak" object), the transparency effect
may be approximated as a linear superposition of the layers, as done in microscopy [2, 10, 29, 37].
In microscopy and in tomography, the suggested method for self calibration of the PSF can
improve the removal of crosstalk between adjacent slices.
In the analysis and experiments, depth variations within each layer have been neglected.
This approximation holds as long as these depth variations are small with respect to the depth
of field. Extending our analysis and recovery methods to deal with space-varying depth and
blur is an interesting topic for future research. A simplified interim approach could be based
on application of the filtering to small domains in which the depth variations are sufficiently
small. Note that the mutual information recovery criterion can still be applied globally, leading
to a higher-dimensional optimization problem. We believe that fundamental properties, such
as the inability to recover the DC of each layer, will hold in the general case. Other obvious
improvements in the performance of the approach can be achieved by incorporating efficient
search algorithms to solve the optimization problem [28], with efficient ways to estimate the
mutual information [56, 57].
Semi-reflections can also be separated using polarization cues [18, 41, 42, 43, 45]. It is
interesting to note that polarization based recovery is typically sensitive to high frequency
noise at low angles of incidence [45]. On the other hand, DC recovery is generally possible
and there are no particular difficulties at the low frequencies. This nicely complements the
characteristics of focus-based layer separation, where the recovery of the high frequencies is
stable but problems arise in the low frequencies. Fusion of focus and polarization cues for
separating semi-reflections is thus a promising research direction.
The ability to separate transparent layers can be utilized to generate special effects. For
example, in Ref. [5] images were rendered with each of the occluding (opaque) layers defocused,
moved and enhanced arbitrarily. The same effects, and possibly other interesting ones, can now
be generated in scenes containing semireflections.
Acknowledgments
This work was conducted while Yoav Y. Schechner was at the Department of Electrical Engi-
neering, Technion - Israel Institute of Technology, Haifa. We thank Joseph Shamir and Alex
Bekker for their advice, support, and significant help, and for making the facilities of the Elec-
trooptics Laboratory of the Electrical Engineering Department, Technion, available to us. We
thank Bonnie Lorimer for the permission to use her photograph of Mt. Shuksan. This research
was supported in part by the Eshkol Fellowship of the Israeli Ministry of Science, by the
Ollendorff Center of the Department of Electrical Engineering, Technion, and by the Tel-Aviv
University Research Fund. The vision group at the Weizmann Institute is supported in part
by the Israeli Ministry of Science, Grant No. 8504. Ronen Basri is an incumbent of Arye
Dissentshik Career Development Chair at the Weizmann Institute.
--R
Active stereo: integrating disparity
Optical sectioning microscopy: Cellular architecture in three dimensions.
Reduction of boundary artifacts in image restoration.
Producing object-based special e#ects by fusing multiple di#erently focused images
A three-frame algorithm for estimating two-component image motion
Estimating multiple depths in semi-transparent stereo images
Digital image processing.
Three dimensional radiographic imaging with a restricted view angle.
Enhanced 3-D reconstruction from confocal scanning microscope images
Elements of information theory.
Separation of transparent motion into layers using velocity-tuned mechanisms
'Nulling' filters and the separation of transparent mo- tions
Pyramid based depth from focus.
3D representation of biostructures imaged with an optical microscope.
Acquisition of 3-D data by focus sensing
Reconstructing 3-D light-microscopic images by digital image processing
Separating reflections from images by use of independent components analysis.
Distribution of actinin in single isolated smooth muscle cells
Simple focusing criterion.
Computing occluding and transparent motions.
Digitized optical microscopy with extended depth of field.
Resolution enhancement of spectra.
A perspective on range-finding techniques for computer vision
Panoramic image acquisition.
Registration and blur estimation methods for multiple differently focused images
Linear and nonlinear programming
The missing cone problem and low-pass distortion in optical serial sectioning microscopy
Artifacts in computational optical-sectioning microscopy
Robust focus ranging.
Shape from focus system.
Real time focus range sensor.
Microscopic shape from focus using active illumination.
A theory of specular surface geometry.
Regularized linear method for reconstruction of three-dimensional microscopic objects from optical sections
Depth from defocus vs. stereo: How different really are they?
The optimal axial interval in estimating depth from defocus.
Separation of transparent layers using focus.
Separation of transparent layers by polarization analysis.
Vision through semireflecting media: Polarization analysis.
Blind recovery of transparent and semireflected scenes.
Polarization and statistical analysis of scenes containing a semireflector.
Three dimensional optical transfer function for an annular lens.
On visual ambiguities due to transparency in motion and stereo.
Direct estimation of multiple disparities for transparent multiple surfaces in binocular stereo.
Simultaneous multiple optical flow estimation.
Fundamental restrictions for 3-D light distributions
The optimal focus measure for passive autofocusing and depth from focus.
Digital composition of images with increased depth of focus considering depth information.
Are textureless scenes recoverable?
Defocus detection using a visibility criterion.
Alignment by maximization of mutual information.
Layered representation for motion analysis.
"Telecentric optics for computational vision"
Perception of multiple transparent planes in stereo vision.
Depth from focusing and defocusing.
Jayasooriah and Sinniah
--TR
--CTR
Thanda Oo , Hiroshi Kawasaki , Yutaka Ohsawa , Katsushi Ikeuchi, The separation of reflected and transparent layers from real-world image sequence, Machine Vision and Applications, v.18 n.1, p.17-24, January 2007
Javier Toro , Frank Owens , Rubén Medina, Using known motion fields for image separation in transparency, Pattern Recognition Letters, v.24 n.1-3, p.597-605, January
D.-M. Tsai , C.-C. Chou, A fast focus measure for video display inspection, Machine Vision and Applications, v.14 n.3, p.192-196, July
Amit Agrawal , Ramesh Raskar , Shree K. Nayar , Yuanzhen Li, Removing photography artifacts using gradient projection and flash-exposure sampling, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Morgan McGuire , Wojciech Matusik , Hanspeter Pfister , John F. Hughes , Frédo Durand, Defocus video matting, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Sarit Shwartz , Michael Zibulevsky , Yoav Y. Schechner, Fast kernel entropy estimation and optimization, Signal Processing, v.85 n.5, p.1045-1058, May 2005
Zhang , Shree Nayar, Projection defocus analysis for scene capture and image display, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Marc Levoy , Ren Ng , Andrew Adams , Matthew Footer , Mark Horowitz, Light field microscopy, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006 | blur estimation;semireflections;enhancement;optical sectioning;blind deconvolution;depth from focus;inverse problems;signal separation;image reconstruction and recovery;decorrelation |
360328 | The effect of reconfigurable units in superscalar processors. | This paper describes OneChip, a third generation reconfigurable processor architecture that integrates a Reconfigurable Functional Unit (RFU) into a superscalar Reduced Instruction Set Computer (RISC) processor's pipeline. The architecture allows dynamic scheduling and dynamic reconfiguration. It also provides support for pre-loading configurations and for Least Recently Used (LRU) configuration management.To evaluate the performance of the OneChip architecture, several off-the-shelf software applications were compiled and executed on Sim-OneChip, an architecture simulator for OneChip that includes a software environment for programming the system. The architecture is compared to a similar one but without dynamic scheduling and without an RFU. OneChip achieves a performance improvement and shows a speedup range from 2.16 up to 32 for the different applications and data sizes used. The results show that dynamic scheduling helps performance the most on average, and that the RFU will always improve performance the best when most of the execution is in the RFU. | INTRODUCTION
Recently, the idea of using reconfigurable resources along
with a conventional processor has led to research in the area
of reconfigurable computing. The main goal is to take advantage
of the capabilities and features of both resources.
While the processor takes care of all the general-purpose
computation, the reconfigurable hardware acts as a specialized
coprocessor that takes care of specialized applica-
tions. With such platforms, specific properties of applica-
tions, such as parallelism, regularity of computation, and
data granularity can be exploited by creating custom oper-
ators, pipelines, and interconnection pathways.
There has been research done in the Department of
Electrical and Computer Engineering at the University of
Toronto on such reconfigurable processors, namely, the
OneChip processor model has been developed. At first, this
model tightly integrated reconfigurable logic resources and
memory into a fixed-logic processor core. By using the re-configurable
units of this architecture, the execution time
of specialized applications was reduced. The model was
mapped into the Transmogrifier-1 field-programmable sys-
tem. This work was done by Ralph Wittig [22].
A follow-on model, called OneChip-98, then integrated a
memory-consistent interface. It is a hardware implementation
that allows the processor and the reconfigurable array
to operate concurrently. It also provides a scheme for specifying
reconfigurable instructions that are suitable for typical
programming models. This model was partially mapped into
the Transmogrifier-2 field-programmable system. This work
was done by Jeff Jacob [11].
OneChip's architecture has now been extended to a super-scalar
processor that allows multiple instructions to issue
simultaneously and perform out-of-order execution. This
leads to much better performance, since the processor and
the reconfigurable logic can execute several instructions in
parallel. Most of the performance improvement that this
architecture shows comes from memory streaming applica-
tions, that is, those applications that read in a block of data
from memory, perform some computation on it, and write
it back to memory. Multimedia applications have this characteristic
and are used to evaluate the architecture.
Previous subsets of the OneChip architecture 1 have been
modeled by implementing them in hardware. The purpose
of this work is to properly determine the feasibility of the
architecture by building a full software model capable of
simulating the execution of real applications.
We will use the term OneChip from now on to refer to the
latest version of the OneChip architecture.
1.1 Related Work
In general, a system that combines a general-purpose
processor with reconfigurable logic is known as a Field-Programmable
Custom Computing Machine (FCCM). Re-search
on FCCMs done by other groups [2, 5, 7, 14, 16, 18,
19] has reported speedup obtained by combining these two
techniques; however, most of the research in these groups
is focused on aspects of the reconfigurable fabric and the
compilation system. Much of the OneChip work is focused
toward the interface between the two technologies. As a re-
sult, the applications are modified by hand; no modification
was done to the compiler; and our simulations model only
the functionality and latency of the reconfigurable fabric,
not the specifics of the fabric architecture.
In our work, we study the effect of combining reconfigurability
with an advanced technique to speed up processors, a
superscalar pipeline, by focusing on the interplay between
them. With the use of out-of-order issue and execution,
one can further exploit instruction-level parallelism in ap-
plications, without incurring the overheads involved in re-configuring
a specialized hardware. Previously, performance
reports by other groups were done using application kernels
such as the DCT, FIR filters, or some small kernel-oriented
applications. Only recently have some groups [2, 18, 23]
reported on the performance using complete applications,
which give more meaningful results. In this work, we are
focused on the architecture's performance with full applications.
2. ONECHIP ARCHITECTURE
In this section we give a brief overview of the OneChip
architecture, including the more recently added features.
The processor's main features, as proposed in [22, 11] are:
. MIPS-like RISC architecture - simple instruction encoding
and pipelining.
. Dynamic scheduling - allows out-of-order issue and
completion.
. Dynamic reconfiguration - can be reconfigured at
run-time.
. Reconfigurable Functional Unit (RFU) integration -
programmable logic in the processor's pipeline.
In addition, OneChip has now been extended to include:
. a Superscalar pipeline - allows multiple instructions
to issue per cycle.
. Configuration pre-loading support - allows loading
configurations ahead of time.
. Configuration compression support - reduces configuration
size.
. LRU configuration management support - reduces
number of reconfigurations.
2.1 Processor pipeline
The original OneChip pipeline described in [11] is based on
the DLX RISC processor described by Hennessy & Patterson
[9]. It consists of five stages: Instruction Fetch (IF), Instruction
Decode (ID), Execute (EX), Memory Access (MEM)
Figure 1: OneChip's Pipeline
Figure 2: RFU Architecture
and Writeback (WB). A diagram of the pipeline is shown
in Figure 1. The RFU is integrated in parallel with the
EX and MEM stages. It performs computations as the EX
stage does and has direct access to memory as the MEM
stage does. The RFU contains structures such as the Memory
Interface, an Instruction Buffer, a Reconfiguration Bits Table (RBT) and Reservation Stations.
OneChip is now capable of executing multiple instructions
in parallel. The EX stage consists of multiple functional
units of different types, such as integer units, floating point
units and a reconfigurable unit. Due to the flexibility of
the reconfigurable unit to implement a custom instruction,
a programmer or a compiler can generate a configuration
for the reconfigurable unit to be internally pipelined, parallelized
or both. Dynamic scheduling of RFU instructions is
implemented in OneChip. Data dependencies between RFU
and CPU instructions are handled using RFU Reservation
Stations.
2.2 RFU Architecture
The RFU in OneChip contains one or more FPGAs and
an FPGA Controller as shown in Figure 2. The FPGAs have
multiple contexts and are capable of holding more than one
configuration for the programmable logic [4]. These configurations
are stored in the Context Memory, which makes
the FPGA capable of rapidly switching among configura-
tions. Each context of the FPGAs is configured independently
from the others and acts as a cache for configurations.
Only one context may be active at any given time.
Instructions that target the RFU in OneChip are forwarded
to the FPGA Controller, which contains the reservation
stations and a Reconfiguration Bits Table (RBT).
The FPGA Controller is responsible for programming the
FPGAs, the context switching and selecting configurations
to be replaced when necessary. The FPGA Controller also
contains a buffer for instructions and the memory interface.
The RBT acts as the configuration manager that will keep
track of where the FPGA configurations are located. The
memory interface in the FPGA Controller consists of a DMA
controller that is responsible for transferring configurations
from memory into the context memory according to the values
in the RBT. It also transfers the data that an FPGA will
operate on into the local storage. The local storage may be
considered as the FPGA data cache memory. The multiple
FPGAs in the RFU share the same FPGA Controller and
each FPGA has its own context memory and local storage.
OneChip has been enhanced to support configuration
compression and reduce the overhead involved in configuring
the FPGA. An algorithm for compressing configurations
is proposed by Hauck et al. [8]. This feature has not been
modeled in our simulator for these results since the internal
architecture of the FPGA fabric is not yet defined, therefore
the actual size of the configuration bitstreams is unknown.
Furthermore, our benchmarks only use one configuration and
the e#ect of the overhead can be easily managed by pre-loading
the configuration.
The architecture has also been extended to support configuration
management. Although the FPGAs can hold multiple
configurations, there is a hardware limit on the number
of configurations it can hold. OneChip uses the Least Recently
Used (LRU) algorithm as a mechanism for swapping
configurations in and out of the FPGA. LRU is implemented
in OneChip by using a table of configuration reference bits.
The approach is similar to the Additional-Reference-Bits Algorithm
described by Silberschatz & Galvin in [20]. A fixed-width
shift register is used to keep track of each loaded
configuration's history. On every context switch, all shift
registers are shifted 1 bit to the right. On the high-order
bit of each register, a 0 is placed for all inactive configurations
and a 1 for the active one. If the shift register contains
00000000, it means that it hasn't been used in a long time.
If it contains 10101010, it means that it has been used every
other context switch. A configuration with a history register
value of 01010000 has been used more recently than another
with the value of 00101010, and this latter one was used more
recently than one with a value of 00000100. Therefore, the
configuration that should be selected for replacement is the
one that has the smallest value in the history register. Notice
that the overall behavior of these registers is to keep
track of the location of configurations in a queue, where a
recently used configuration will come to the front and the
last one will be the one to be replaced. Our simulator does
not have this feature implemented at this time as it was not
required in the benchmarks.
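As a rough sketch of the Additional-Reference-Bits bookkeeping described above, the following C fragment keeps one 8-bit history register per context and selects as victim the configuration with the smallest register value. The names, the 8-bit width and the fixed context count are our assumptions for illustration, not part of the OneChip or Sim-OneChip sources.
#include <stdint.h>
#define NUM_CONTEXTS 8   /* assumed number of FPGA contexts */
/* One 8-bit history register per context: bit 7 records the most recent epoch. */
static uint8_t history[NUM_CONTEXTS];
/* Called on every context switch: shift every history right by one and
   set the high-order bit of the configuration that was just active. */
void lru_update(int active_context)
{
    int c;
    for (c = 0; c < NUM_CONTEXTS; c++) {
        history[c] >>= 1;
        if (c == active_context)
            history[c] |= 0x80;
    }
}
/* The replacement victim is the configuration whose history register
   holds the smallest value, i.e. the least recently used one. */
int lru_victim(void)
{
    int c, victim = 0;
    for (c = 1; c < NUM_CONTEXTS; c++)
        if (history[c] < history[victim])
            victim = c;
    return victim;
}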
The Reconfiguration Bits Table (RBT) acts as the configuration
manager that will keep track of where the FPGA
configurations are located. The information in this table
includes the address of each configuration and flags that
keep track of loaded and active configurations. The RBT
described in [11] has been enhanced to support the algorithm
for configuration management [3]. The history of each
configuration is also stored in the table to allow LRU configuration
management and to select configurations to be
replaced.
Table 1: Memory Consistency Scheme
Hazard  Hazard Type           Actions Taken
1       RFU rd after CPU wr   1. Flush RFU source addresses from CPU cache when instruction issues.
                              2. Prevent RFU reads while pending CPU store instructions are outstanding.
2       CPU rd after RFU wr   3. Invalidate RFU destination addresses in CPU cache when RFU instruction issues.
                              4. Prevent CPU reads from RFU destination addresses until RFU writes its destination block.
3       RFU wr after CPU rd   5. Prevent RFU writes while pending CPU load instructions are outstanding.
4       CPU wr after RFU rd   6. Prevent CPU writes to RFU source addresses until RFU reads its source block.
5       RFU wr after CPU wr   7. Prevent RFU writes while pending CPU store instructions are outstanding.
6       CPU wr after RFU wr   8. Prevent CPU writes to RFU destination addresses until RFU writes its destination block.
7       RFU rd after RFU wr   9. Prevent RFU reads from locked RFU destination addresses.
8       RFU wr after RFU rd   10. Prevent RFU writes to locked RFU source addresses.
9       RFU wr after RFU wr   11. Prevent RFU writes to locked RFU destination addresses.
2.3 Instruction specification
OneChip is designed to obtain speedup mainly from memory
streaming applications in the same way vector coprocessors
do. In general, RFU instructions take a block of data
that is stored in memory, perform a custom operation on
the data and store it back to memory.
Previously, OneChip supported only a two-operand RFU
instruction. To have more flexibility for a wider range of
applications, it has now been extended to support a three-
operand RFU instruction. In the two-operand instruction,
one can specify the opcode, the FPGA function, one source
and one destination register that hold the respective memory
addresses, and the block sizes. In this instruction, the source
and destination block sizes can be different. In the three-
operand instruction, one of the block sizes is replaced by
another source register. This allows the RFU to get source
data from two different memory locations, which need not
be contiguous. In this instruction, all three blocks should
be the same size.
In OneChip, there are two configuration instructions. One
of them is the Configure Address instruction, which is used
for assigning memory addresses in the RBT. The other configuration
instruction is the Pre-load instruction, which is
used for pre-fetching instructions into the FPGA and reducing
configuration overhead. Some compiler prefetching
techniques have been previously published for other reconfigurable
systems [6, 21].
2.4 Memory controller
OneChip allows superscalar dynamic scheduling, hence instructions
with different latencies may be executed in paral-
lel. The RFU in OneChip has direct access to memory and
is also allowed to execute in parallel with the CPU. When
there are no data dependencies between the RFU and the
CPU, the system will act as a multiprocessor system, providing
speed up. However, when data dependencies exist
between them, there is a potential for memory inconsistency
that must be prevented.
The memory consistency scheme previously proposed for
OneChip, as described in [11], allows parallel execution between
one FPGA and the CPU. The scheme has now been
extended to support more than one FPGA in the RFU. The
nine possible hazards that OneChip may experience along
with the actions taken to prevent them, are listed in Table 1.
This scheme preserves memory consistency when the CPU
and an FPGA, or when two or more FPGAs, are allowed to
execute concurrently.
OneChip implements the memory consistency scheme by
using a Block Lock Table (BLT). The BLT is a structure that
contains four fields for each entry and locks memory blocks
to prevent undesired accesses. The information stored in the
table includes the block address, block size, instruction tag
and a src/dst flag.
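A minimal sketch of a BLT entry and of a conflict test is given below. Only the four fields themselves (block address, block size, instruction tag, src/dst flag) come from the text; the field names, the valid flag, the 32-entry size taken from Section 5.1 and the single overlap test are our simplifications, since the real scheme distinguishes the nine hazard cases of Table 1.
#include <stdint.h>
#include <stdbool.h>
/* Illustrative layout of one Block Lock Table entry. */
typedef struct {
    uint32_t block_addr;  /* start address of the locked memory block */
    uint32_t block_size;  /* size of the locked block in bytes */
    uint32_t inst_tag;    /* tag of the RFU instruction holding the lock */
    bool     is_dst;      /* src/dst flag: true for a destination block */
    bool     valid;
} blt_entry_t;
#define BLT_ENTRIES 32
/* Simplified conflict test: an access of len bytes at addr conflicts if it
   overlaps any locked block. */
bool blt_conflicts(const blt_entry_t *blt, uint32_t addr, uint32_t len)
{
    int i;
    for (i = 0; i < BLT_ENTRIES; i++) {
        uint32_t lo, hi;
        if (!blt[i].valid)
            continue;
        lo = blt[i].block_addr;
        hi = lo + blt[i].block_size;
        if (addr < hi && addr + len > lo)  /* address ranges overlap */
            return true;
    }
    return false;
}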
3. SIM-ONECHIP
This section will describe the implementation of sim-
onechip, the simulator that models the architecture of
OneChip. It is a functional, execution-driven simulator derived
from sim-outorder from the simplescalar tool set [1].
To model the behaviour of OneChip, we needed an already
existing simulator capable of doing out-of-order execution
and that was easily customizable to be used as a basis to add
OneChip's features. Two existing architecture simulators [1,
were considered for modification and SimpleScalar was
the chosen platform. Besides being a complete set of tools,
the annotations capability was an attractive feature, since
it would allow the addition of new instructions in a very
simple manner.
3.1 Modifications to sim-outorder
Modifications were done to sim-outorder to model
OneChip's reservation stations, reconfiguration bits table,
block lock table and the reconfigurable unit. The overall
functionality of sim-outorder was preserved.
The reservation stations for Sim-OneChip were implemented
as a queue. Besides the already existing scheduler
queues for Basic Functional Unit (BFU) instructions and for
Memory (MEM) instructions, a third scheduler queue was
implemented to hold RFU instructions. This queue is referred
to as the Reconfigurable Instructions Queue (RecQ).
The dispatch stage detects instructions that target the RFU
and places them in the RecQ for future issuing.
The RBT is implemented as a linked list. The RBT models
the FPGA controller by performing dynamic reconfiguration
and configuration management. Functions are provided
for assigning configuration addresses, loading configurations
and performing context switching.
The BLT is implemented as a linked list. Each entry holds
the fields for the two sources and the destination memory
blocks for each RFU instruction. It ensures the OneChip
memory consistency scheme by modeling the actions taken
for each of the hazards presented. By keeping track of the
memory locations currently blocked, conflicting instructions
are properly stalled.
The RFU was included with the rest of the functional
units and in the resource pool in the functional unit resource
configuration.
3.2 Pipeline description
To be able to adapt OneChip to the SimpleScalar ar-
chitecture, several modifications were done to the original
pipeline. Sim-OneChip's pipeline, as in sim-outorder, consists
of six stages: fetch, dispatch, issue, execute, writeback
and commit. This section will describe the modifications
Figure 3: Sim-OneChip's Pipeline
done to each stage in sim-outorder and the places where
each of OneChip's structures were included. Sim-OneChip's
pipeline is shown in Figure 3.
The fetch stage remained unmodified and fetches instructions
from the I-cache into the dispatch queue.
The dispatch stage decodes instructions and performs register
renaming. It moves instructions from the dispatch
queue into the reservation stations in the three scheduler
queues: the Register Update Unit (RUU), the Load Store
Queue (LSQ) and the Reconfigurable Instructions Queue
(RecQ). This stage adds entries in the BLT to lock memory
blocks when RFU instructions are dispatched.
The issue stage identifies ready instructions from the
scheduler queues (RUU, LSQ and RecQ) and allows them
to proceed in the pipeline. This stage also checks the BLT
to keep memory consistency and stalls the corresponding
instructions.
The execute stage is where instructions are executed in
the corresponding functional units. Completed instructions
are scheduled on the event queue as writeback events. This
stage is divided into three parallel stages: BFU stage, MEM
stage and RFU stage. The BFU stage is where all operations
that require basic functional units, such as integer
and floating point are executed; the MEM stage is where all
memory access operations are executed and has access to
the D-cache, and; the RFU stage is where RFU instructions
are executed.
The writeback stage remained unmodified and moves completed
operation results from the functional units to the
RUU. Dependency chains of completing instructions are also
scanned to wake up any dependent instructions.
The commit stage retires instructions in-order and frees
up the resources used by the instructions. It commits the
results of completed instructions in the RUU to the register
file and stores in the LSQ will commit their result data to
the data cache. This stage clears BLT entries to remove
memory locks once the corresponding RFU instruction is
committed.
The BLT is accessed by the dispatch, issue and commit
stages. The memory consistency scheme requires that
instructions are entered in the BLT and removed from it
in program order. In the pipeline, the issue, execute and
writeback stages do not necessarily follow program order
since out-of order issue, execution and completion is allowed.
Hence, memory block locks and the corresponding entries in
the BLT need to be entered when an RFU instruction is dispatched, since dispatching is done in program order.
Figure 4: Sim-OneChip's Simulation Process
Likewise, entries from the BLT need to be removed when RFU
instructions commit, since committing is also performed in
program order.
All actions in the memory consistency scheme are taken in
the issue stage. The issue stage is allowed to probe the BLT
for memory locks. Instructions that conflict with locked
memory blocks are prevented from issuing at this point. All
others are allowed to proceed provided there are no dependencies.
3.3 RFU instructions
Annotations of instructions in SimpleScalar are useful for
creating new instructions. They are attached to the opcode
in assembly files for the assembler to translate them and append
them in the annotation field of assembled instructions.
Taking advantage of this feature, new instructions can
be created without the need to modify the assembler.
OneChip's RFU instructions will be disguised as already
existing, but annotated, instructions that the simulator will
recognize as RFU instructions and model the corresponding
operations. Without the annotation, instructions are
treated as regular ones; with the annotation they become
instructions that target the reconfigurable unit.
The four instructions defined for OneChip (i.e. two RFU
operation instructions and two configuration instructions)
were created for Sim-OneChip. Macros are used to translate
from a C specification to the corresponding annotated
assembly instruction.
3.4 Programming model
Currently, the programming model for OneChip is the use
of circuit libraries. Programming for Sim-OneChip is done
in C. The user may use existing configurations from a library
of configurations, or create custom ones. Configurations are
defined in C and several macros are available for accessing
memory or instruction fields.
The complete simulation process is shown in Figure 4. A
C program that includes calls to RFU instructions is compiled
by the simplescalar gcc compiler ssgcc along with the
OneChip Library oc-lib.h. This will produce a binary file
that can be executed by the simulator sim-onechip. All the
program configurations specified in fpga.conf must be previously
compiled by gcc along with the simulator source code
to produce the simulator. Once both binaries are ready, the
simulator can simulate the execution of the binary and produce
the corresponding statistics.
Sim-OneChip's processor specification can be defined as
command-line arguments. One can specify the processor
core parameters, such as fetch and decode bandwidth, internal
queue sizes and number of execution units. The memory
hierarchy and the branch predictor can also be modified.
3.4.1 OneChip library
The library defines the following five macros:
oc_configAddress(func, addr) is used for specifying the configuration address for a specified function. It will associate the function func with the address addr where the FPGA configuration bits will be taken from and will enter the corresponding entry in the RBT.
oc_preLoad(func) is used for pre-loading the configuration associated with the specified function func into an available FPGA context.
rec_2addr(func, src_addr, dst_addr, src_size, dst_size) is the two-operand reconfigurable instruction. func is the FPGA function number, src_addr and dst_addr are the source and destination block addresses, and src_size and dst_size are the encoded source and destination block sizes.
rec_3addr(func, src1_addr, src2_addr, dst_addr, blk_size) is the three-operand reconfigurable instruction. func is the FPGA function number, src1_addr, src2_addr and dst_addr are the source-1, source-2 and destination block addresses, and blk_size is the encoded block size.
Both reconfigurable instructions will perform the context
switch to activate function func and will execute the corresponding
operation associated with it. They will also lock
their respective source and destination blocks of memory
by entering the corresponding fields in the BLT for as long
as the function takes to execute. When finished, the BLT
entries corresponding to the instruction will be cleared.
oc_encodeSize(size) is a macro used for encoding the size of memory blocks. It obtains the encoded value from a table that is defined by the function log2(size) - 1. This macro should be used to encode block sizes in reconfigurable instructions.
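A small stand-in helper illustrates the encoding; the helper name and the power-of-two assumption are ours, and the real macro presumably reads the value from a precomputed table.
/* Equivalent stand-in for the encoding: log2(size) - 1, assuming size is a
   power of two. */
static inline int encode_block_size(unsigned int size)
{
    int log2size = 0;
    while (size > 1) {
        size >>= 1;
        log2size++;
    }
    return log2size - 1;   /* e.g. encode_block_size(16) == 3 */
}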
For example,
rec_3addr(2, a, b, c, oc_encodeSize(16));
where a, b and c are defined as
unsigned char a[16], b[16], c[16];
will activate function 2 and perform the operation with arrays a and b as source data and array c as destination data. The encoded size passed to the reconfigurable instruction will be log2(16) - 1 = 3.
3.4.2 Configuration definition
The behavior of the RFU is modeled with a high-level
functional simulation. It is given some inputs, and a function
produces the corresponding outputs without performing
a detailed micro-architecture simulation of the programmable
logic.
Configurations are defined as follows,
DEFCONF(configuration address, operation latency, issue latency,
{
expression
})
where
the configuration address is the location of the configuration bits in memory;
the operation latency is the number of cycles until the result is ready for use;
the issue latency is the number of cycles before another operation can be issued on the same resource;
and the expression describes the reconfigurable function.
The separation of the instruction latency into operation
and issue latencies, allows the specification of pipelined con-
figurations. For example, assume one configuration takes 20
cycles to complete one instruction, but the configuration is
pipelined and one instruction can be started every 4 cycles.
In this case, the operation latency will be 20 and the issue
latency will be 4. Hence, the configuration can have 20/4 = 5 instructions executing at a time, and the throughput for the configuration is implied as 20/5 = 4 cycles per instruction.
The expression field is where the semantics of the configuration
will be specified. It is a C expression that implements
the configuration being defined; the expression must modify
all the processor state affected by the instruction's execution.
All memory accesses in the DEFCONF() expression must
be done through the memory interface. There are macros
available for doing memory reads and writes; for accessing
general purpose registers, floating point and miscellaneous
registers; for accessing the RFU instruction
operand field values; and for creating a block mask and
decoding a block size. Some configuration examples are included
in [3].
4. PROGRAMMING FOR SIM-ONECHIP
This section will present an example of how to port an application
so that it uses the RFU in OneChip to get speedup.
The application to be implemented is an 8-tap FIR filter.
Consider that you have this C code in a file called fir.c.
1: /* FILE: fir.c */
2:
3: #include <stdio.h>
4:
5: #define TAPS 8
6: #define MAX_INPUTS 1024
7:
8: int coef[TAPS];
9: int inputs[MAX_INPUTS];
10:
11: void main(){
12: int i, j;
13: int *x;
14: int y[MAX_INPUTS];
15:
16: /* Set the inputs to some random numbers */
17: for
23: /* FIR Filter kernel */
24: for
25: for (j = 0; j < TAPS; j++) {
26: y[i] += x[j] * coef[j];
27: }
28: x++;
30:
The inner loop in the FIR filter kernel on lines 25-27 of
fir.c can be ported to be executed entirely on the OneChip
RFU. For that, we need to do some modifications to the C
code. The file fir.oc.c that reflects these changes is shown
below.
1: /* FILE: fir.oc.c */
2:
3: #include <stdio.h>
4: #include "oc-lib.h"
5:
6: #define TAPS 8
7: #define MAX_INPUTS 1024
8:
9: int coef[TAPS];
10: int inputs[MAX_INPUTS];
12: void main(){
13: int i, j;
14: int *x;
15: int y[MAX_INPUTS];
17: oc_configAddress(0, 0x7FFFC000);
18: oc_preLoad(0);
19:
20: /* Set the inputs to some random numbers */
21: for
27: /* FIR Filter kernel */
28: for
29: rec_3addr(0, x, coef, &y[i], oc_encodeSize(8));
30: x++;
33: printf("\nFIR filter done!\n");
34: }
The first step was including the OneChip library in the
code as shown on line 4 in fir.oc.c. The second step was
defining the address of the configuration bitstream for the
FIR filter. In this case, we are using configuration #0 and
the memory address is 0x7FFFC000, as shown on line 17. As
a third step, notice that lines 25-27 on fir.c have been
removed and replaced by a 3-operand RFU instruction in
line 29 on fir.oc.c. This instruction is using configuration
#0 and is passing the address of the two source memory
blocks, x and coef, which are pointers, as well as the address
of destination memory block, which for each iteration will
be &y[i]. The block size, 8, is passed using the function
oc_encodeSize.
The previous three changes are necessary. Furthermore,
if we want to reduce configuration overhead, we would introduce
a pre-load instruction as in line 18. This instruction
tells the processor that configuration #0 will be used soon.
This way, by the time it gets to execute the RFU instruc-
tion, the configuration is already loaded and no time is spent
waiting for the configuration to be loaded. This instruction
is not necessary, because if the configuration is not loaded
in the FPGA, the processor will automatically load it.
Now that the C code has been modified to use the RFU,
we need to define the FPGA configuration that will perform
the FIR filter. Configurations are defined in fpga.conf. The
fir filter definition used is shown below.
1: /* This configuration is for a 3-operand instruction.
2: It is used for a fir filter program. */
3:
4: DEFCONF(0x7FFFC000, 17, 17,
5: {
6: int oc_index; /* for indexing */
7: unsigned int oc_word; /* for storing words */
8: unsigned int oc_result; /* for storing result */
9:
11: for (oc_index = 0;
12: oc_index <= OC_MASK(OC_3A_BS);
13: oc_index++)
14: {
17: oc_result += oc_word;
18: }
19: WRITE_WORD(oc_result, GPR(OC_3A_DR));
This configuration is the equivalent of the inner loop in
the FIR filter kernel on lines 25-27 in fir.c. Note that
in the configuration, each memory access is done through
the memory interface. Line 4 defines the configuration address
0x7FFFC000 and the operation and issue latencies of
17. Lines 11-13 define the iteration loop for the FIR fil-
ter. Line 15 reads a word from the memory location defined
by the address stored in the general purpose register that
contains one source address plus the corresponding memory
offset. In the same way, line 16 reads a word from the other
source block and multiplies it with the data previously read
and stored in the oc word variable. Line 17 simply accumulates
the multiplied values across loop iterations. When
the loop is finished, line 19 writes the result into the memory
location defined by the address stored in the general purpose
register that has the destination block address.
The simulator will generate statistics for the number of instructions
executed in each program. The speedup obtained
with Sim-OneChip can be verified.
5. APPLICATIONS
To evaluate the performance of the OneChip architecture,
several benchmark applications were compiled and executed
on Sim-OneChip.
5.1 Experimental Setup
To do the experiments, four steps were performed for each
application. Step one is the identification of which parts of
each application are suitable for implementation in hard-
ware. Step two is modeling the hardware implementation
of the identified parts of the code. Step three is the replacement
of the identified code in the application, with
the corresponding hardware function call. And step four
is the execution and verification of both the original and the
ported versions of the application.
The pipeline configuration used for both simulations was
the default used in SimpleScalar. Among the most relevant
characteristics are an instruction fetch queue size of
4 instructions; instruction decode, issue and commit bandwidths
of 4 instructions per cycle; a 16-entry register update
unit (RUU) and an 8-entry load/store queue (LSQ). The
number of execution units available in the pipeline are 4 integer
ALU's, 1 integer multiplier/divider, 2 memory system
ports available (to CPU), 4 floating-point ALU's, 1 floating-point
multiplier/divider. Also, in the case of sim-onechip, 1
reconfigurable functional unit (RFU), an 8-entry RBT and
a 32-entry BLT were used. The branch predictor and cache
configuration remained unmodified as well.
5.2 Benchmark applications
There is currently no standard benchmark suite for reconfigurable
processors. C. Lee et al. [13] from the University
of California at Los Angeles have proposed a set of benchmarks
for evaluating multimedia and communication sys-
tems, which is called MediaBench. Since current reconfigurable
processors available are used mostly for communications
applications, MediaBench was taken as the suite for
evaluating OneChip. Not all of the applications were used
for the evaluation. Some of them could not be ported to
SimpleScalar, due to the complexity of the makefiles or due
to some missing libraries. However, the rest of the applications
can provide good feedback on the architecture's performance.
5.3 Profiles
Profiling the execution of an application helps to identify
the parts of the application that take a lot of time to execute
and hence are candidates for rewriting to make it execute
faster. Profiling of the applications was performed
using GNU's profiler gprof included in GNU's binutils 2.9.1
package.
From the profiling information, we identified specific functions
in each application that are worth improving by executing
them in specialized hardware implemented in the
OneChip reconfigurable unit. To port an application to
OneChip, a piece of code must have a long execution time
and perform memory accesses in a regular manner, as in
applications suitable for vector processors. In general, any
application that can be sped up by a vector processor will
also be suitable for OneChip.
5.4 Analysis and modifications
Four applications met our requirements and were ported
to OneChip [3]. JPEG Image compression, ADPCM Audio
coding, PEGWIT Data encryption and MPEG2 Video
encoding. The encoder and the decoder for each one was
ported. The modifications to the applications are done by
hand (i.e. no compiler technologies are used). For the RFU
timing in each of the applications, we assume that memory
accesses dominate the computational logic and that our
bottleneck is the memory bandwidth. If we also assume
that one memory access is performed in one cycle, the latency
of an operation will be obtained from counting the
total number of memory accesses performed by the opera-
tion. This timing approach may not be precise for highly
compute intensive operations, but it is not the case on these
applications.
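Under this accounting, the FIR configuration of Section 4 is consistent: eight reads from each of the two source blocks plus one write of the accumulated result give 8 + 8 + 1 = 17 memory accesses, which appears to be where the operation and issue latencies of 17 in its DEFCONF come from.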
6. RESULTS
The original and the modified versions of the eight chosen
applications were executed on the simulator. Each application
was tested with three different sizes of data, one
small, one medium and one large. Four experiments were
done for each application. Our first experiment was executing
the original applications with in-order issue (A) to
verify how many cycles each one takes to execute. As a second
experiment, we executed the OneChip version of each
Table 2: Speedup
Application  Data size  OneChip inorder (A/B)  OneChip outorder (C/D)  Outorder original (A/C)  Outorder OneChip (B/D)  Total (A/D)
JPEG encode Small 1.37X 1.34X 2.29X 2.25X 3.08X
Medium 1.36X 1.33X 2.29X 2.24X 3.05X
Large 1.38X 1.35X 2.33X 2.29X 3.15X
JPEG decode Small 1.29X 1.20X 2.47X 2.29X 2.96X
Medium 1.29X 1.19X 2.52X 2.34X 3.01X
Large 1.25X 1.16X 2.53X 2.35X 2.93X
ADPCM encode Small 22.38X 17.04X 1.54X 1.18X 26.31X
Medium 26.25X 17.85X 1.62X 1.10X 28.94X
Large 29.92X 20.57X 1.56X 1.07X 32.02X
ADPCM decode Large 24.43X 16.27X 1.61X 1.07X 26.13X
PEGWIT encrypt Small 1.46X 1.43X 2.09X 2.06X 3.00X
Medium 1.33X 1.36X 2.20X 2.26X 3.00X
Large 1.16X 1.24X 2.48X 2.65X 3.07X
PEGWIT decrypt Small 1.40X 1.42X 2.08X 2.11X 2.95X
Medium 1.28X 1.32X 2.27X 2.35X 3.00X
Large 1.13X 1.18X 2.62X 2.72X 3.08X
MPEG2 decode Medium 5.07X 5.70X 2.07X 2.33X 11.82X
Large 5.23X 5.91X 2.08X 2.36X 12.33X
MPEG2 encode Small 1.16X 1.14X 1.90X 1.87X 2.16X
Large 1.28X 1.24X 1.87X 1.81X 2.32X
application also with in-order issue (B). The third experiment
was executing the original version of the applications
with out-of-order issue (C). And the fourth and last experiment
was executing again the OneChip version, but now
with out-of-order issue (D). This way we could verify the
speedup obtained by using both features, the reconfigurable
unit and the out-of order issue, in the OneChip pipeline. All
output files were verified to have the correct data after being
encoded and decoded with the simulator.
The speedup obtained from the experiments is shown in Table 2. The first column (A/B) shows the speedup obtained
by only using the reconfigurable unit. The second
column (C/D) shows the speedup obtained by introducing
a reconfigurable unit to an out-of-order issue pipeline. The
third column (A/C) shows the speedup obtained by only
using out-of-order issue. The fourth column (B/D) shows
the speedup obtained by introducing out-of-order issue to a
pipeline with a reconfigurable unit as OneChip. The fifth
column (A/D) shows the total speedup obtained by using
the reconfigurable unit and out-of order issue at the same
time.
Further analyzing the simulation statistics, we note that
there are no BLT instruction stalls (i.e. instructions stalled
due to memory locks) in the applications, except for
JPEG. This means that either the RFU is fast enough to
keep up with the program execution or there are no memory
accesses performed in the proximity of the RFU instruction
execution. The second one is the actual case. It is important
not to confuse BLT stalls, which prevent data hazards,
with stalls due to unavailable resources, which are structural
hazards. If there are two consecutive RFU instructions with
no reads or writes in between, there will most likely be a
structural hazard. Since there is only one RFU, the trailing
RFU instruction will be stalled until the RFU is available.
This is not considered a BLT stall.
In the case of JPEG, there are CPU reads and writes
performed in the proximity of RFU instructions. These are
shown in Table 3. RFU instructions shows the total dynamic
Table 3: JPEG RFU instructions
Application  Data size  RFU instructions  BLT instruction stalls  Stalls per RFU instruction  RFU overlapping
JPEG encode Small 851 99531 116.96 11.04
Large 18432 2156508 117.00 11.00
JPEG decode Large 18432 2264375 122.85 5.15
count of RFU instructions in the program, BLT instruction
stalls is the number of CPU reads and writes stalled after
an RFU write is executing (this was the only type of hazard
present). The next column shows the Stalls per RFU
instruction and the last one shows the average RFU instruction
overlap with CPU execution. Note that 128 is the operation
latency for JPEG. We can see that for the JPEG encoder
there is an overlap of approximately 11 instructions,
and for the JPEG decoder an overlap of approximately 5
instructions.
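The overlap figures appear to be computed as the operation latency minus the stalls per RFU instruction: 128 - 116.96 = 11.04 for the encoder and 128 - 122.85 = 5.15 for the decoder.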
6.1 Discussion
In Table 3 we can see that there is an approximate overlap of 11 instructions for the JPEG encoder. This means that
when an RFU instruction is issued, 11 following instructions
are also allowed to issue out-of-order because there are no
data dependencies. Then, even if the RFU issue and operation
latencies are improved (i.e. reduced) by new hardware
technologies, the maximum improvement for this application
will be observed if the configuration has a latency of 11
cycles. That is, any latency lower than 11 will not improve
performance because the other 11 overlapping instructions
will still need to be executed and the RFU will need to wait
for them. The same will be observed for the JPEG decoder
with a latency of 5 cycles. For the rest of the applications
there is no overlap, so any improvement in the RFU latency
will be reflected in the overall performance.
ADPCM shows a fairly large speedup from OneChip. This
is because the application does not perform any data validation
or other operations besides calling the encoder kernel.
The data is simply read from standard input, encoded on
blocks of 1000 bytes at a time, and written to standard out-
put, so the behaviour of the application is more like that of a
kernel. It is expected that ADPCM is the application with
the most speedup due to Amdahl's Law [9], which states
that the performance improvement to be gained from using
a faster mode of execution is limited by the fraction of time
the faster mode can be used. ADPCM's performance clearly
depends on the size of the data. The larger the data, the
less time the application reads and writes data, and most of
the time the RFU executes instructions. There are no BLT
instruction stalls in this application.
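For reference, Amdahl's Law can be written as overall speedup = 1 / ((1 - f) + f/s), where f is the fraction of execution time that can use the faster mode and s is the speedup of that mode. As f approaches 1, as it effectively does for ADPCM with large inputs, the overall speedup approaches s itself.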
PEGWIT's performance also shows a dependence on the
data size. However, different behavior is observed for the
RFU and the out-of-order issue features. The RFU shows
better performance with small data, while out-of-order issue
shows a better performance improvement with larger
data. The overall speedup with both features is greater as
the input data is larger. This is because for the decoder,
the application makes a number of RFU instruction calls
independent of the data size, and even if the data size for
each call is different, the latency is the same for every call.
With the encoder, almost the same thing happens. No BLT
instruction stalls were originated in this application.
MPEG2's performance is also data size dependent. In
the case of the decoder, the larger the data, the greater the
performance improvement. This applies for both, the RFU
feature and the out-of-order issue feature. In the case of
the encoder, the performance improvement is shown to be
larger, as the frame sizes get larger. There is a higher
performance improvement between the tests with small and
medium input data, which have a different frame size and
almost the same number of frames, than between the tests
with medium and large data, which have the same frame size
and a different number of frames. In the application, there are
no BLT instruction stalls.
For all the applications, we can see that out-of-order issue
by itself produces a big gain (A/C). Using an RFU still adds
more speedup to the application. Speedup obtained from
dynamic scheduling ranges from 1.60 up to 2.53. Speedup
obtained from an RFU (A/B) ranges from 1.13 up to 29.
When using both at the same time, even when each technique
limits the potential gain that the other can produce,
the overall speedup is increased. Dynamic scheduling seems
to be more effective with the applications, except for AD-
PCM, where the biggest gain comes from using the RFU.
This leads us to think that for kernel-oriented applications,
it is better to use an RFU without the complexity of out-of-
order issue and for the other applications it is better to use
dynamic scheduling possibly augmented with an RFU.
7. CONCLUSION AND FUTURE WORK
In this work, the behavior of the OneChip architecture
model was studied. Its performance was measured by executing
several off-the-shelf software applications on a software
model of the system. The results obtained confirm the
performance improvement by the architecture on DSP-type
applications.
From the work, a question arises whether the additional
hardware cost of a complex structure, such as the Block
Lock Table, is really necessary in reconfigurable processors.
It has been shown that the concept of the BLT does accomplish
its purpose, which is maintaining memory consistency
when closely linking reconfigurable logic with memory and
when parallel execution is desired between the CPU and the
PFU. However, considering that only one of the four applications
(i.e. JPEG) used in this research actually uses the BLT
and takes advantage of it, we conclude that removing it
and simply making the CPU stall when any memory access
occurs while the RFU is executing will not degrade performance
significantly on the types of benchmarks studied.
In JPEG there is an average of 11 overlapping instructions,
which is only 8.6% of the configuration operation latency
slot of 128. If the RFU is used approximately 20% of the
time in the JPEG encoder, the performance improvement by
the overlapping is only 1.72%. This is a small amount compared
to the performance improvement of dynamic schedul-
ing, which is approximately 56% (i.e. 2.29 speedup). Hence,
dynamic scheduling improves performance significantly only
when used with relatively short operation delay instructions,
as opposed to OneChip's RFU instructions, which have large
operation delays.
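These figures appear to follow from simple proportions: 11/128 is about 0.086, and 0.086 x 0.20 is about 0.017, i.e. roughly 1.7% of execution time recovered by overlapping, while the 56% quoted for dynamic scheduling matches 1 - 1/2.29, which is approximately 0.56.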
Based on the four applications in this work, it appears
that the number of contexts does not need to be large
to achieve good performance improvement with an RFU.
In these applications, only one context was used for each
application and a considerable speedup was obtained. For
some applications, a second context could have provided an
increase in this improvement, but not as much as for the
first context. This is because, based on our profiles, we could
implement a different routine in a second context in the same
way the first one was, but it would not be so frequently used.
Another question that arises from this work is whether
the configurations are small enough to fit on today's re-configurable
hardware, or if they can be even implemented.
Hardware implementations of DSP structures done in our
and other groups [22, 11, 12], which have even been
shown to outperform digital signal processors, have been
proven to fit on existing Altera and Xilinx devices with a
maximum of 36,000 logic gates. Today's FPGAs have more
than 1 million system gates available.
To estimate the silicon area of this version of OneChip,
we can start with the area of the processor that is required.
It will be much larger than the simple processor used in
the previous version of OneChip. A similar processor is the
MIPS R10000 processor core [15], which is a 4-way super-scalar
processor that supports out-of-order execution and includes
a 32KB instruction cache and a 32KB data cache. Using
a CMOS 0.35-µm process, the die area is approximately 298mm^2. We can estimate that as fabrication technology approaches a 0.13-µm process, the size of the processor core would be approximately 41mm^2. The OneChip-98 processor [10] includes a small processor core of insignificant area, an eight-context FPGA structure with about 85K gates of logic, and 8 MBytes of SRAM. In a 0.18-µm process, this was estimated to take about 550mm^2. Scaling to 0.13-µm brings this to 287mm^2, to which we can add the 41mm^2 for the processor. The complete OneChip device would be about 328mm^2, which is quite manufacturable. Obviously,
it would be desirable to add more gates of FPGA logic and
as the process technology continues to shrink, this would be
easy to do.
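The scaled figures are consistent with the usual assumption that die area shrinks with the square of the feature size: 298 x (0.13/0.35)^2 is approximately 41 mm^2 for the processor core, 550 x (0.13/0.18)^2 is approximately 287 mm^2 for the OneChip-98 logic and memory, and 287 + 41 = 328 mm^2 for the complete device.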
We also conclude that dynamic scheduling is important
to achieve good performance. By itself it produces a big
gain for a number of applications. With kernel oriented
applications, the gain obtained by an RFU is bigger, but
with complete applications, the biggest gain is obtained from
out-of-order issue and execution.
Further investigation is necessary in the area of compilers
for reconfigurable processors. Specifically, a compiler
designed for the OneChip architecture is needed to fully exploit
it and to better estimate the advantages and disadvantages
of the architecture's features. Developing a compilation
system that allows automatic detection of structures
suitable for the OneChip RFU, as well as generating the
corresponding configuration and replacing the structure in
the program, will allow further investigation of the optimal
number of contexts for the RFU. The compiler should be
able to pre-load configurations to reduce delays, and should
also make optimal use of the BLT by scheduling as many
instructions as possible in the RFU delay slot.
Also, future work should investigate the architecture of
the RFU. In our work we have assumed an optimal RFU. No
work has been done on what type of logic blocks or interconnection
resources should be used in the FPGA. The simulator
should be extended to properly simulate the FPGA fabric
and any configuration latency issues. Ye et al. [23] have
modeled RFU execution latencies using simple instruction-level
and transistor-level models. However, their architecture
targets fine-grain instructions, while OneChip targets
coarse-grain instructions.
At this point, it becomes difficult to make a detailed comparison
between OneChip's performance and other current
reconfigurable systems. This is because there are no standard
application benchmarks available for reconfigurable
processors. However, other groups have reported performance
improvement results similar to the ones presented
in this paper, using Mediabench and SPEC benchmarks [2,
23]. Although OneChip shares certain similarities with other
systems [2, 19] that target memory-streaming applications
and focus on loop-level code optimizations, a standardized
set of benchmarks and metrics for reconfigurable processors
is needed to properly evaluate the differences between them.
8.
ACKNOWLEDGEMENTS
We would like to acknowledge Chameleon Systems Inc. for
financially supporting the OneChip project. Jorge Carrillo
was also supported by a UofT Open Fellowship. We would
also like to thank the reviewers for their helpful comments.
9.
--R
The SimpleScalar tool set
The Garp architecture and C compiler.
A reconfigurable architecture and compiler.
Configuration prefetch for single context reconfigurable coprocessors.
The Chimaera reconfigurable functional unit.
Configuration compression for the Xilinx XC6200 FPGA.
Computer Architecture A Quantitative Approach.
Memory interfacing for the OneChip reconfigurable processor.
Memory interfacing and instruction specification for reconfigurable processors.
A tool for evaluating and synthesizing multimedia and communications systems.
The MorphoSys parallel reconfigurable system.
A quantitative analysis of reconfigurable coprocessors for multimedia applications.
RSIM: An execution-driven simulator for ILP-based shared-memory multiprocessors and uniprocessors
The NAPA adaptive processing architecture.
Operating System Concepts.
A compiler directed approach to hiding configuration latency in chameleon processors.
OneChip: An FPGA processor with reconfigurable logic.
--TR
A high-performance microarchitecture with hardware-programmable functional units
MediaBench
Configuration prefetch for single context reconfigurable coprocessors
Computer architecture (2nd ed.)
Memory interfacing and instruction specification for reconfigurable processors
CHIMAERA
The Garp Architecture and C Compiler
PipeRench
The MorphoSys Parallel Reconfigurable System
A Compiler Directed Approach to Hiding Configuration Latency in Chameleon Processors
The Chimaera reconfigurable functional unit
Configuration Compression for the Xilinx XC6200 FPGA
The NAPA Adaptive Processing Architecture
A Quantitative Analysis of Reconfigurable Coprocessors for Multimedia Applications
FPGA-Based Structures for On-Line FFT and DCT
--CTR
Hamid Noori , Farhad Mehdipour , Kazuaki Murakami , Koji Inoue , Maziar Goudarzi, Interactive presentation: Generating and executing multi-exit custom instructions for an adaptive extensible processor, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Scott Hauck , Thomas W. Fry , Matthew M. Hosler , Jeffrey P. Kao, The chimaera reconfigurable functional unit, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.12 n.2, p.206-217, February 2004
Paul Beckett , Andrew Jennings, Towards nanocomputer architecture, Australian Computer Science Communications, v.24 n.3, p.141-150, January-February 2002
Nathan Clark , Manjunath Kudlur , Hyunchul Park , Scott Mahlke , Krisztian Flautner, Application-Specific Processing on a General-Purpose Core via Transparent Instruction Set Customization, Proceedings of the 37th annual IEEE/ACM International Symposium on Microarchitecture, p.30-40, December 04-08, 2004, Portland, Oregon
Exploring the design space of LUT-based transparent accelerators, Proceedings of the 2005 international conference on Compilers, architectures and synthesis for embedded systems, September 24-27, 2005, San Francisco, California, USA
Shobana Padmanabhan , Phillip Jones , David V. Schuehler , Scott J. Friedman , Praveen Krishnamurthy , Huakai Zhang , Roger Chamberlain , Ron K. Cytron , Jason Fritts , John W. Lockwood, Extracting and improving microarchitecture performance on reconfigurable architectures, International Journal of Parallel Programming, v.33 n.2, p.115-136, June 2005
Philip Garcia , Katherine Compton , Michael Schulte , Emily Blem , Wenyin Fu, An overview of reconfigurable hardware in embedded systems, EURASIP Journal on Embedded Systems, v.2006 n.1, p.13-13, January 2006 | reconfigurable processors;onechip;superscalar processors |
360393 | Validation of an Augmented Lagrangian Algorithm with a Gauss-Newton Hessian Approximation Using a Set of Hard-Spheres Problems. | An Augmented Lagrangian algorithm that uses Gauss-Newton approximations of the Hessian at each inner iteration is introduced and tested using a family of Hard-Spheres problems. The Gauss-Newton model convexifies the quadratic approximations of the Augmented Lagrangian function thus increasing the efficiency of the iterative quadratic solver. The resulting method is considerably more efficient than the corresponding algorithm that uses true Hessians. A comparative study using the well-known package LANCELOT is presented. | Introduction
In recent years we have been involved with the development of algorithms based
on sequential quadratic programming [11] and inexact restoration [16, 18] for minimization
problems with nonlinear equality constraints and bounded variables.
The validation of these algorithms requires their comparison with well-established
computer methods for the same type of problems, which include methods of the
same family (as other SQP methods in the first case and GRG like methods in
the second) as well as methods that adopt a completely di#erent point of view,
as is the case of Penalty and Augmented Lagrangian algorithms. The most consolidated
practical Augmented Lagrangian method currently available is the one
implemented in the package LANCELOT, described in [4]. This was the method
used, for example, in [11], to test the reliability of a new large-scale sequential
quadratic programming algorithm.
used, for example, in [11], to test the reliability of a new large-scale sequential quadratic programming algorithm.
* This author was supported by FAPESP (Grant 96/8163-9).
* These authors were supported by PRONEX, FAPESP (Grant 90-3724-6), CNPq and FAEP-UNICAMP.
one permitted to users of LANCELOT. As a result of this practical necessity, we
became involved with the development of a di#erent Augmented Lagrangian code,
which preserves most of the principles of the LANCELOT philosophy, but also has
some important di#erences.
Following the lines of [4], a modern Augmented Lagrangian method is essentially
composed of three nested algorithms:
. The external algorithm updates the Lagrange multipliers and the penalty pa-
rameters, decides stopping criteria for the internal algorithm and the rules for
declaring convergence or failure of the overall procedure.
. An internal algorithm minimizes the augmented Lagrangian function with
bounds on the variables. Trust region methods, where the subproblem consists
of the minimization of a quadratic model on the intersection of two boxes,
the one that defines the problem and the trust-region box, are used both in [4]
and in our implementation.
. A third algorithm deals with the resolution of the quadratic subproblem. While
LANCELOT restricts its search to the face determined by an approximate
Cauchy point, our code explores the domain of the subproblem as a whole.
The second item, specifically where it deals with the formulation of the quadratic
subproblem, is the one in which we felt more strongly the desire to intervene. On
one hand, we tried many alternative sparse quasi-Newton schemes (without success,
up to now). On the other hand, we used a surprisingly e#ective simplification of the
true Hessian of the Lagrangian, called, in this paper, "the Gauss-Newton Hessian
approximation" by analogy with the Gauss-Newton method for nonlinear least-
squares, which can be interpreted as the result of excluding from the Hessian of a
sum of squares those terms involving Hessians of individual components.
In order to validate our augmented Lagrangian implementation we selected a
family of problems in which we have particular interest, known as the family of
Hard-Spheres problems.
The Hard-Spheres Problem belongs to a family of sphere packing problems, a
class of challenging problems dating from the beginning of the seventeenth cen-
tury. In the tradition of famous problems in mathematics, the statements of these
problems are elusively simple, and have withstood the attacks of many worthy
mathematicians (e.g. Newton, Hilbert, Gregory), while most of its instances remain
open problems. Furthermore, it is related to practical problems in chemistry,
biology and physics, see, for instance, the list of examples in [19], concerning mainly
three-dimensional problems, or peruse the 1550-item-long bibliography in [5]. The
Hard-Spheres Problem is to maximize the minimum pairwise distance between p
points on a sphere in R n
. This problem may be reduced to a nonlinear optimization
problem that turns out, as might be expected from the mentioned history,
to be a particularly hard, nonconvex problem, with a potentially large number of
(nonoptimal) points satisfying KKT conditions. We have thus a class of problems
indexed by the parameters n and p, that provides a suitable set of test problems
for evaluating Nonlinear Programming codes.
Very convenient is the fact that the Hard-Spheres Problem may be regarded as
the feasibility problem associated with another famous problem in the area, the
Kissing Number Problem, which seeks to determine the maximum number K n of
nonoverlapping spheres of given radius in R n
that can simultaneously touch (kiss)
a central sphere of same radius. Thus, if the distance obtained in the solution of
the Hard-Spheres Problem, for given n and p, is greater than or equal to the radius
of the sphere on which the points lie, one may conclude that K n >= p. We use the
known solution of the three-dimensional Kissing Number Problem to calibrate our
code, described below, and choose for testing the code values of n, p that might
bring forth new knowledge about the problem, or strengthen existing conjectures
about the true (but, alas, not rigorously established) value of K n , from the following
table of known values/bounds of K n given in [5]:
Table 1. Known values and bounds for K n (from [5]).
n = 9: 306-380
This paper is organized as follows. In Section 2 we formulate the Hard-Spheres
Problem as a nonlinear programming problem and we relate the main characteristics
of ALBOX, our Augmented Lagrangian Algorithm. In Section 3 we explain how the
main algorithmic parameters of ALBOX were chosen. (Here we follow a previous
study in [15].) In Section 4 we introduce the Gauss-Newton Hessian approximation
and discuss the effect of its use in comparison with the use of true Hessians of
the Lagrangians. In Section 5 we describe the parameters used with LANCELOT.
The numerical experiments, obtained by running ALBOX and LANCELOT for a
large number of Hard-Spheres problems, are presented in Section 6. Finally, some
conclusions are drawn in Section 7.
2. ALBOX
The straightforward formulation of the Hard-Spheres Problem leads to the following
maxmin problem, where r is the radius of the sphere, centered at the origin, on
which the points lie:

    max  min_{i < j} ‖y_i − y_j‖
    s.t. ‖y_k‖ = r,  k = 1, ..., p.                                   (1)
The vectors y_k belong to R^n and ‖·‖ is the Euclidean norm. Since the answer to
the problem is invariant under the choice of positive r, we let r = 1 and,
using the definition of ⟨·,·⟩, the standard inner product in R^n, and the constraints,
it is easy to see that (1) is equivalent to

    min  max_{i < j} ⟨y_i, y_j⟩
    s.t. ‖y_k‖ = 1,  k = 1, ..., p.                                   (2)
Applying the classical trick for transforming minimax problems into constrained
minimization problems, we reduce (2) to the nonlinear program

    min  z
    s.t. z ≥ ⟨y_i, y_j⟩,  for all i < j,
         ‖y_k‖ = 1,       k = 1, ..., p.
Adding slack variables to the first set of constraints and squaring the second set
of equations in order to avoid nonsmoothness in the first derivatives, we obtain

    min  z
    s.t. ⟨y_i, y_j⟩ − z + w_ij = 0,  w_ij ≥ 0,  for all i < j,
         ‖y_k‖^2 = 1,                k = 1, ..., p,                   (4)
which is of the general form

    min  f(x)
    s.t. h(x) = 0,
         ℓ ≤ x ≤ u.                                                   (5)
ALBOX, the augmented Lagrangian code developed, approximately solves

    min  L(x, λ, ρ)   s.t.  ℓ ≤ x ≤ u                                 (6)

at each Outer Iteration, where

    L(x, λ, ρ) = f(x) + Σ_i [ λ_i h_i(x) + (ρ_i / 2) h_i(x)^2 ]

is the augmented Lagrangian function associated with (5), λ is the current approximation
to the Lagrange multipliers and ρ (> 0) is the current vector of penalty
parameters. These are updated at the end of the Outer Iteration.
Subproblem (6) is solved using BOX, the box-constrained solver described in [10].
This iterative method minimizes a quadratic approximation to the objective function
on the intersection of the original feasible set, the box ℓ ≤ x ≤ u, and the
trust region (also a box), at each iteration. If the original objective function is sufficiently
reduced at the approximate minimizer of the quadratic, the corresponding
trial point is accepted as the new iterate. Otherwise, the trust region is reduced.
The main algorithmic difference between BOX and the method used in [2] is that in
BOX the quadratic is explored on the whole intersection of the original box and the
trust region whereas in [2] only the face determined by an "approximate Cauchy
point" is examined.
ALBOX is a Double Precision FORTRAN 77 code that aims to cope with large-scale
problems. For this reason, factorization of matrices is not used at all. The
quadratic solver used to solve the subproblems of the box-constraint algorithm,
QUACAN, visits the different faces of its domain using conjugate gradients on the
interior of each face and "chopped gradients" as search directions to leave the faces.
We refer the reader to [1], [9] and [10], for details on the actual implementation of
QUACAN. In most iterations of this quadratic solver, a matrix-vector product of
the Hessian approximation and a vector is computed. Occasionally, an additional
matrix-vector product may be necessary.
The performance of ALBOX, and, in fact, of most sophisticated algorithms, depends
on the choice of many parameters. The most sensitive parameters were
adjusted using the Kissing Problem with n = 3 and p = 12 (the Icosahedron Problem).
We discuss these choices in the next section. A similar analysis was carried out for
LANCELOT, and is described in section 5.
3. Choice of parameters for ALBOX
3.1. Penalty parameters and Lagrange multipliers
The vector ρ of penalty parameters associated with the equality constraints
is updated after each Outer Iteration. We considered two possibilities: to update
each component according to the decrease of the corresponding component of h(x)
or using a global criterion based on h(x). The specific alternatives contemplated
were, assuming x to be the initial point at some Outer Iteration and x̄ the final
one:
1. increase ρ_i only if |h_i(x̄)| is not sufficiently smaller than |h_i(x)|;
2. increase ρ_i only if ‖h(x̄)‖ is not sufficiently smaller than ‖h(x)‖.
Preliminary experiments revealed, perhaps surprisingly, that the "global strat-
egy" 2 is better than the first. In fact, when ρ_i is not updated, but the other
components of ρ are, the feasibility level |h_i(x̄)| tends to deteriorate at the next
iteration and, consequently, a large number of Outer Iterations becomes necessary.
In other words, it seems that a strategy based on 1 encourages a zigzagging be-
havior, with successive iterates alternatingly satisfying one constraint or another.
Thus, although the original formulation allows for one penalty parameter for each
equality constraint, in practice it is as if we worked with one parameter for all of
them, since they are all initialized at the same value (tests indicate that 10 is an
adequate initial value) and are all updated according to the same rule (once again
based on tests, they are increased by a factor of 10 when sufficient improvement of
feasibility is not detected). Here we considered that "a sufficiently smaller than b"
means that a ≤ 0.01b.
It must be pointed out that the behavior of penalty parameters is not independent
of the strategy for updating the Lagrange multipliers. With algorithmic simplicity
in mind, we adopted a "first order formula". Letting λ̄ be the Lagrange multipliers
at the start of a new Outer Iteration and λ, ρ be the Lagrange multipliers and
penalty parameters at the previous iteration, we set

    λ̄_i = λ_i + ρ_i h_i(x̄),   for all i.
3.2. Stopping criteria for box-constraint solver
Each Outer Iteration ends when one of the several stopping criteria for the algorithm
that solves the augmented Lagrangian box-constrained minimization problem (6)
is reached. There is the usual maximum number of iterations safeguard, which is
set at 100 for QUACAN calls.
Other than that, we consider that the box-constraint algorithm BOX converges
when

    ‖g_P(x)‖ ≤ ε,

where g_P(x) is the "continuous projected gradient" of the objective function of (6)
at the point x. This vector is defined as the difference between the projection of
x − ∇L(x, λ, ρ) on the box and the point x. The tolerance ε may change at each Outer
Iteration. We tested two strategies for ε: one that defines ε dynamically depending
on the degree of feasibility of the current iterate and another that fixes ε at 10^-5.
Although not conclusive, results for the Icosahedron Problem were better when the
constant ε strategy was used. This was, therefore, the strategy adopted for further
tests. Incidentally, the opposite was adopted in [8], where a similar Augmented
Lagrangian Algorithm was used to solve linearly constrained problems derived from
physical applications. Theoretical justifications for the inexact minimization of
subproblems in the augmented Lagrangian context can also be found in [12, 13].
The box-constraint code admits other stopping criteria. For instance, execution
may stop if the progress during some number of consecutive iterations is not good
enough or if the radius of the trust region becomes too small. Nevertheless,
best results were obtained inhibiting these alternative stopping criteria.
3.3. Parameters for the quadratic solver
QUACAN is the code called to minimize quadratic functions (augmented Lagrangians
in this case) subject to box constraints. Its efficiency, or lack thereof, plays a crucial
role in the overall behavior of the Augmented Lagrangian Algorithm. Its parameters
must therefore be carefully chosen.
Firstly we examine the convergence criterion. If the projected gradient of the
quadratic is null, the corresponding point is stationary. Accordingly, convergence
is considered achieved when the norm of this projected gradient is less than a
fraction of the corresponding norm at the initial point. In this case, we use "non-
continuous projected gradients," in which the projections are not computed on the
feasible box but on the active constraints. Fractions 1/10, 1/100 and 1/100000 were
tested on the Icosahedron Problem, and the first choice provided the best behavior,
being the one employed subsequently.
The maximum number of iterations allowed is also an important parameter, since
otherwise we may invest too much effort solving problems only distantly related to
the original one. We found that the number of variables of the problem, np + p̄,
is a suitable delimiter in this case. Other non-convergence stopping criteria were
inhibited.
The radius of the trust region determines the size of the auxiliary box used in
QUACAN. The nonlinear programming algorithm is sensitive to the choice of
the first trust region radius. After testing different values, an appropriate choice
was selected.
Another important parameter is a constant in (0, 1) that determines whether
the next iterate must belong to the same face as the current one, or not. Roughly
speaking, if this parameter is small, the algorithm tends to leave the current face as soon as
a mild decrease of the quadratic is detected. On the other hand, if it is close to 1, the
algorithm only abandons the current face when the current point is close to a stationary
point of the quadratic on that face. A rather surprising result was that, for
the Icosahedron Problem, the conservative value performed better than smaller
values.
Finally, when the quadratic solver hits the boundary of its feasible region, an
extrapolation step may be tried, depending on the value of the extrapolation parameter,
which is at least 1. If this parameter is large, new points will be tried at which the
number of active bounds may be considerably increased. No extrapolation is tried
when the parameter equals 1; tests indicated a convenient choice for the Hard-Spheres Problem.
4. Approximate Hessian
The nonlinear optimization problem (4) obtained in section 2 is the version of the
Hard-Spheres Problem that was chosen for our tests. It was pointed out that (4)
is of the general form

    min  f(x)   s.t.  h(x) = 0,   ℓ ≤ x ≤ u,

whose associated augmented Lagrangian is

    L(x, λ, ρ) = f(x) + Σ_i [ λ_i h_i(x) + (ρ_i / 2) h_i(x)^2 ].

Thus

    ∇L(x, λ, ρ) = ∇f(x) + Σ_i [ λ_i + ρ_i h_i(x) ] ∇h_i(x)

and

    ∇²L(x, λ, ρ) = ∇²f(x) + Σ_i [ λ_i + ρ_i h_i(x) ] ∇²h_i(x) + Σ_i ρ_i ∇h_i(x) ∇h_i(x)^T.
Although ∇²L(x, λ, ρ) tends to be positive definite when ρ is large, λ is close to
the correct Lagrange multipliers and x is close to a solution, this is not the case
at the early stages of augmented Lagrangian calculations. On the other hand, the
simplified matrix obtained by neglecting the term involving second order derivatives
of the constraint functions,

    B(x, λ, ρ) = ∇²f(x) + Σ_i ρ_i ∇h_i(x) ∇h_i(x)^T,

is always positive semidefinite in our case, independently of λ and x. Of course,
this is always the case when f is a convex function.
Another insight into B(x, λ, ρ) is provided by examining the problem

    min  f(x)
    s.t. h(z) + ∇h(z)(x − z) = 0,   ℓ ≤ x ≤ u,                        (8)

where z is the current point being used in a BOX iteration. Problem (8) is obtained
by replacing the original constraints h(x) = 0 with their first order (linear) approximation.
But B(z, λ, ρ) happens to be the Hessian of the augmented Lagrangian
associated with (8) at z! Furthermore, both the augmented Lagrangian associated
with (8) and its gradient evaluated at z coincide with their counterparts associated
with the original problem (4), evaluated at z.
The matrix-vector products ∇²L(x, λ, ρ)v and B(x, λ, ρ)v seem cumbersome to compute
at first glance, but taking advantage of their structure enables the computation
to be done in O(np) time.
In principle, using the true Hessian of the Lagrangian should be the best possible
choice, since it represents better the structure of the true problem. However, available
algorithms for minimizing quadratics in convex sets are much more efficient
when the quadratic is convex than otherwise. QUACAN is not an exception to
this rule. Therefore, in the interest of improving the overall performance of the
augmented Lagrangian algorithm, we decided to use B(x, λ, ρ) as the approximation
to the Hessian of the Lagrangian.
The results were indeed impressive. Table 2 lists the average statistics obtained
for four of the eighteen test sets, where each (n, p) pair was run for fifty random
starting points. The average number of Outer iterations, BOX iterations, Function
evaluations, Matrix Vector Products, CPU time in seconds and minimum distance
are given for the runs using the exact Hessian (first row of each set) and the ones
using the approximate Hessian (second row). The minimum distances obtained
were very close and on some instances the minimum distance obtained using the
approximate Hessian was smaller than the one obtained using the exact Hessian.
While the number of Outer iterations does not differ very much from one choice to
the other, the number of BOX iterations and, consequently, the number of Matrix
Vector Products decrease noticeably. The overall result is a marked decrease in
CPU time. In Figure 1 we plot the average CPU times, for all eighteen tests, using
the exact Hessian versus times using the approximate Hessian. Also shown is the
line that gives the best fit of the data by a linear (not affine) function; its slope
indicates that the approximate Hessian option implies a decrease of almost two
thirds in CPU times.
Table 2. Running ALBOX with exact (first row) and approximate Hessian (second row).
Problem size | Outer it. | BOX it. | Funct. eval. | MVP | CPU time | Min. dist.
4.86 | 37.06 | 52.14 | 1564.36 | 0.765 | 1.086487225412
22
4.56 | 160.02 | 193.14 | 67020.22 | 373.141 | 0.998675348042
Figure 1. CPU times using exact Hessian (x-axis) versus using approximate Hessian (y-axis).
5. Choice of parameters for LANCELOT
LANCELOT allows for the choice of exact or approximate first and second order
derivatives. However, LANCELOT's manual [3] (p.111) "strongly recommends the
use of exact second derivatives whenever they are available", and, on the other hand,
there is no provision for a user supplied Hessian approximation. In fact we ran a few
tests with the default approximation (SR1) but the results were worse than those
obtained using exact second derivatives, and thus this was the option adopted for
all further tests. In the light of the experiments described in the previous section,
this provides corroborating evidence to the effect that general purpose, consolidated
packages, designed to provide a good performance with little interference from the
user, may be more convenient to use than open ended, low-level interface codes,
such as ALBOX; but, for the user willing to "get his hands dirty" the latter rawer
code might not only prove to be competitive, it may actually outperform the former
code, with its more polished though restrictive finish.
We also experimented with several different options for the linear equation
solver, namely, without preconditioner, with diagonal preconditioner and with
a band matrix preconditioner. The best results were obtained with the first option
(no preconditioner). Another choice that slowed the algorithm, without noticeably
improving the quality of the solution, was requiring that the exact Cauchy point be
computed. We settled on using the inexact Cauchy point option. The maximum number
of iterations allowed was 1000. Finally, the gradient and constraint tolerances
were the same as those chosen for ALBOX, namely 10^-8. The FORTRAN compiler option
adopted for LANCELOT and ALBOX was "-O".
6. Numerical experiments
Tests were run on a Sun SparcStation 20, with the following main characteristics:
128Mbytes of RAM, 70MHz, 204.7 mips, 44.4 Mflops. Results for the fifty runs for
each (n, p) pair are summarized in the following tables. Table 3 summarizes the
statistics that are "machine independent," typically involving number of iterations,
number of function evaluations, with the exception of the optimal distances found.
Quotes are needed because this is not completely accurate, since these numbers will
in fact depend on factors such as machine precision, compiler manufacturer, and so
on. Nevertheless, they certainly provide more independent grounds for comparison
than CPU times, presented in Table 4, along with optimal distances.
Table 3 presents the minimum, maximum and average amounts (the number
triple in each box) of Outer and BOX iterations, function evaluations, Quacan
iterations and matrix-vector products/conjugate-gradient iterations (for Box and
LANCELOT, respectively). First row of each set corresponds to ALBOX and
second to LANCELOT. Unfortunately the only statistics available for both are the
number of function evaluations and BOX iterations/Derivative evaluations. We
paired the number of matrix-vector products (MVP) output by ALBOX with the
number of conjugate-gradient iterations (CGI) produced by LANCELOT, since
each conjugate-gradient iteration involves a matrix-vector-product.
Table 3. ALBOX - LANCELOT test results.
Problem size | Outer | BOX iter. / Derivative eval. | Function eval. | Quacan iter. | MVP / CGI
4,5,4.6 21, 55, 34.7 25, 77, 45.5 309, 2343, 1064 340, 2702, 1195
15, 61, 38.1 16, 71, 43.3 377, 1949, 992
20, 62, 38.0 21, 80, 43.4 511, 2709, 1032
22, 58, 39.7 24, 66, 45.3 553, 1776, 1069
4,5,4.8 27, 75, 47.9 34,104, 64.2 933, 4923, 2708 1017, 5382, 2963
27, 84, 52.3 29, 96, 60.1 967, 4248, 2313
4,5,4.6 32,110, 60.2 41,140, 77.8 1625, 8385, 4130 1751, 8742, 4444
30, 91, 56.8 33,112, 65.1 1107, 5652, 3015
22
4,5,4.3 52,115, 78.0 62,148, 97.4 5688, 16767, 10502 6097, 17871, 11222
45,225, 104.1 49,262, 120.0 5122, 37546, 12381
37,176, 108.9 39,208, 124.6 4799, 29367, 14607
2,5,4.1 45,141, 86.4 58,183, 107.9 6282, 28049, 14077 6769, 29825, 15009
4,5,4.2 63,180, 97.8 75,226, 120.6 10492, 35660, 17105 11034, 37639, 18143
54,225, 119.7 60,262, 137.5 6870, 38419, 18736
26
4,6,4.2 51,176, 95.4 63,216, 117.2 6765, 38932, 17185 7317, 40932, 18237
53,266, 131.4 59,311, 150.5 5094, 77233, 21796
4,5,4.3 62,206, 99.5 76,254, 122.1 11480, 45129, 19490 12169, 47121, 20616
62,215, 128.9 68,253, 147.6 9420, 41799, 21534
4,8,4.6 80,800, 160.0 102,984, 193.1 27836,471751, 63778 29476,497038, 67020
85,334,190.42 95,381, 218.6 9119, 96036, 56899
4,6,4.6 89,600, 166.2 107,717, 200.0 29224,326333, 67969 30804,340424, 71261
4,7,4.9 78,700, 195.8 89,815, 231.3 26692,448509, 88566 27892,472730, 92422
91,385,231.24 99,453,263.44 24178,160611, 85972
4,7,4.9 90,700, 202.9 106,880, 242.9 34936,463883, 98266 36614,485784,102816
4,8,4.9 93,800, 225.1 117,954, 271.6 36194,547421,117417 38311,577662,122924
4,7,4.6 109,700, 212.3 132,887, 256.1 47402,502810,109630 49993,529036,114749
102,440, 246.4 115,499, 281.3 34730,200558,
Although the algorithms behave very differently timewise, as we will shortly see,
this is not a direct consequence of the number of function evaluations each performs.
The best least-squares fit by a first degree polynomial gives
where y is the number of function evaluations of ALBOX and x is the corresponding
amount for LANCELOT, whereas a similar fit involving CPU times will give a
coefficient of less than a third. In Figure 2 we plot the function evaluation pairs
for all eighteen instances along with the best fit obtained.
Figure 2. Number of function evaluations of LANCELOT versus ALBOX.
Further still from providing an explanation for the higher efficiency of ALBOX is
the comparison of MVP versus CGI. In this case the best fit gives y =
1.10655x, where y is the number of MVP and x is the number of CGI. This suggests
that, although both iterations involve a matrix-vector-product, a CGI is substantially
costlier, timewise, than the MVP performed in ALBOX. A main factor for
this is that the matrix-vector-product in LANCELOT's conjugate gradient iteration
deals with the true Hessian, whereas the one in ALBOX involves the approximate
(and simpler) Hessian. Figure 3 contains the line corresponding to the best linear
fit and the position of the (CGI, MVP) pairs.
Next we have Table 4, that presents similar statistics involving the optimal distances
encountered and the CPU times, in seconds. The first (resp., second) row
for each (n, p) pair gives the numbers obtained by ALBOX (resp., LANCELOT).
Figure 3. Number of CGIs of LANCELOT versus number of MVPs of ALBOX.
Table 4. Minimum distances and CPU times for tests.
Problem size | minimum distance between 2 points (min, max, average) | CPU time in seconds (min, max, average)
1.0514622 1.0914262 1.08323633 0.170 1.010 0.476
1.0514622 1.0514622 1.05146223 0.170 1.420 0.636
0.9463817 1.0514622 1.04515739 0.290 1.870 0.906
22
0.9529038 0.9619429 0.95809771 25.150 86.570 41.290
26
0.9606935 0.9779378 0.96928704 420.170 4527.652 1000.608
0.9599791 0.9798367 0.97025160 807.570 4664.
The information contained in Table 4 is depicted graphically below. The intervals
(min., max) of distances/CPU times are represented by vertical segments,
the averages are indicated with a diamond symbol for ALBOX and a bullet for
LANCELOT. Graphs on the left refer to distances whereas graphs on the right
refer to CPU times.
Figure 4. ALBOX (⋄) and LANCELOT (•) results.
Figure 5. ALBOX (⋄) and LANCELOT (•) results.
The graphs in Figures 4-6 evidence the qualitative relative behavior of both codes.
Notice that the diamonds and bullets are always close together in the graphs on
the left, indicating that the quality of the optimal solutions obtained by both codes
is similar. On the other hand, the bullets rise faster than the diamonds on the
graphs on the right, which means that the CPU times for LANCELOT tend to
be higher than those for ALBOX. The linear fit of ALBOX CPU times versus
those of LANCELOT, whose coefficient is less than one third and which is plotted
in Figure 7, confirms this.
Figure 6. ALBOX (⋄) and LANCELOT (•) results.
Figure 7. CPU times of LANCELOT versus those of ALBOX.
Finally, it should be noted that CPU times increase sharply as a function of
problem size (represented, for instance, by the number of constraints). We tried
several fits (linear, quadratic, exponential) and, though none seemed to provide a
very good model for the data, the quadratic fit was the best one.
7. Conclusions
The main aspects of the Augmented Lagrangian methodology for solving large-scale
nonlinear programming problems have been consolidated after the works of Conn,
Gould and Toint which gave origin to the LANCELOT package. This algorithmic
framework has been very useful in the last ten years for solving practical problems
and for comparison purposes with innovative nonlinear programming methods.
Very likely, this tendency will be maintained in the near future.
The present research was born as a result of our need to have more freedom
in the formulation and resolution of the quadratic subproblems that arise in the
LANCELOT-like approach to the Augmented Lagrangian philosophy. On one hand,
we decided to exploit more deeply the whole trust region by means of the use of a
box-constraint quadratic solver. On the other hand, perhaps more importantly, we
tested a Gauss-Newton convex simplification of the quadratic model which turned
out to be much more efficient than the straight Newton-like version of this model.
Behind this gain of efficiency is the fact that the quadratic solver, though able to
deal with nonconvex models, is far more efficient when the underlying quadratic
has a positive semidefinite Hessian. It is usual, in Numerical Analysis, that a
decision on the implementation of a high level algorithm depends on the current
technology for solving low-level subproblems. It must only be warned that such a
decision could change if new, more efficient algorithms for solving the subproblems
(nonconvex quadratic programming in our case) are found.
Our main objective is to use ALBOX, not only for solving real-life problems, but
also for testing alternative nonlinear programming methods against it. We feel
that having a deep knowledge of the implementation details of the code will enable
us to be much more exacting when testing new codes, since it will be possible to
fine tune the standard against which the new code is tested. The present study,
apart from calling the reader's attention to convex simplified Gauss-Newton like
subproblems, had the objective of validating our code, by means of its comparison
with LANCELOT, using a set of problems that have an independent interest.
The result of this comparison seems to indicate that ALBOX can be used as a
competitive tool for nonlinear programming calculations.
--R
"An adaptive algorithm for bound constrained quadratic minimization,"
"A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds,"
"Global convergence of a class of trust region algorithms for optimization with simple bounds,"
Lattices and Groups
Mathematics: The Science of Patterns
"Comparing the numerical performance of two trust-region algorithms for large-scale bound-constrained minimization,"
"Augmented Lagrangians with adaptive precision control for quadratic programming with equality constraints,"
"On the maximization of a concave quadratic function with box constraints,"
"A new trust-region algorithm for bound constrained minimization,"
"Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters,"
"Analysis and implementation of a dual algorithm for constraint optimization,"
"Dual techniques for constraint optimization,"
"Bounds on the kissing numbers in R n : mathematical programming formulations,"
"Augmented Lagrangians and the resolution of packing problems,"
"A two-phase model algorithm with global convergence for nonlinear pro- gramming,"
"Preconditioning of truncated-newton methods,"
"Linearly constrained spectral gradient methods and inexact restoration sub-problems for nonlinear programming,"
"Distributing many points on a sphere,"
--TR
Dual techniques for constrained optimization
Global convergence of a class of trust region algorithms for optimization with simple bounds
A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds
Analysis and implementation of a dual algorithm for constrained optimization
Two-phase model algorithm with global convergence for nonlinear programming
Lancelot
Augmented Lagrangians with Adaptive Precision Control for Quadratic Programming with Equality Constraints
--CTR
Graciela M. Croceri , Graciela N. Sottosanto , Mara Cristina Maciel, Augmented penalty algorithms based on BFGS secant approximations and trust regions, Applied Numerical Mathematics, v.57 n.3, p.320-334, March, 2007
R. Andreani , A. Friedlander , M. P. Mello , S. A. Santos, Box-constrained minimization reformulations of complementarity problems in second-order cones, Journal of Global Optimization, v.40 n.4, p.505-527, April 2008
G. Birgin , Jos Mario Martnez, Large-Scale Active-Set Box-Constrained Optimization Method with Spectral Projected Gradients, Computational Optimization and Applications, v.23 n.1, p.101-125, October 2002
Nikhil Arora , Lorenz T. Biegler, A Trust Region SQP Algorithm for Equality Constrained Parameter Estimation with Simple Parameter Bounds, Computational Optimization and Applications, v.28 n.1, p.51-86, April 2004
G. Birgin , J. M. Martnez, Structured minimal-memory inexact quasi-Newton method and secant preconditioners for augmented Lagrangian optimization, Computational Optimization and Applications, v.39 n.1, p.1-16, January 2008 | numerical methods;augmented Lagrangians;nonlinear programming |
360547 | Selecting Examples for Partial Memory Learning. | This paper describes a method for selecting training examples for a partial memory learning system. The method selects extreme examples that lie at the boundaries of concept descriptions and uses these examples with new training examples to induce new concept descriptions. Forgetting mechanisms also may be active to remove examples from partial memory that are irrelevant or outdated for the learning task. Using an implementation of the method, we conducted a lesion study and a direct comparison to examine the effects of partial memory learning on predictive accuracy and on the number of training examples maintained during learning. These experiments involved the STAGGER Concepts, a synthetic problem, and two real-world problems: a blasting cap detection problem and a computer intrusion detection problem. Experimental results suggest that the partial memory learner notably reduced memory requirements at the slight expense of predictive accuracy, and tracked concept drift as well as other learners designed for this task. | Introduction
Partial memory learners are on-line systems that select and maintain a portion of
the past training examples, which they use together with new examples in subsequent
training episodes. Such systems can learn by memorizing selected new facts,
or by using selected facts to improve the current concept descriptions or to derive
new concept descriptions. Researchers have developed partial memory systems
because they can be less susceptible to overtraining when learning concepts that
change or drift, as compared to learners that use other memory models (Salganicoff,
1993; Maloof, 1996; Widmer & Kubat, 1996; Widmer, 1997).
The key issues for partial memory learning systems are how they select the most
relevant examples from the input stream, maintain them, and use them in future
learning episodes. These decisions affect the system's predictive accuracy, memory
requirements, and ability to cope with changing concepts. A selection policy might
keep each training example that arrives, while the maintenance policy forgets examples
after a fixed period of time. These policies strongly bias the learner toward
recent events, and, as a consequence, the system may forget about important but
rarely occurring events. Alternatively, the system may attempt to select proto-
typical examples and keep these indefinitely. In this case, the learner is strongly
anchored to the past and may perform poorly if concepts change or drift.
This paper presents a method for selecting training examples for a partial memory
learner. Our approach extends previous work by using induced concept descriptions
to select training examples that lie at the extremities of concept boundaries, thus
enforcing these boundaries. The system retains and uses these examples during
subsequent learning episodes. This approach stores a nonconsecutive collection of
past training examples, which is needed for situations in which important training
events occur but do not necessarily reoccur in the input stream. Forgetting
mechanisms may be active to remove examples from partial memory that no longer
enforce concept boundaries or that become irrelevant for the learning task. As
new training examples arrive, the boundaries of the current concept descriptions
may change, in which case the training examples that lie on those boundaries will
change. As a result, the contents of partial memory will change. This continues
throughout the learning process.
After surveying relevant work, we present a general framework for partial memory
learning and describe an implementation of such a learner, called AQ-Partial Memory
(AQ-PM), which is based on the AQ-15c inductive learning system (Wnek,
Kaufman, Bloedorn, & Michalski, 1995). We then present results from a lesion
study (Kibler & Langley, 1990) that examined the effects of partial memory learning
on predictive accuracy and on memory requirements. We also make a direct
comparison to IB2 (Aha, Kibler, & Albert, 1991), since it is similar in spirit to AQ-
PM. In applying the method to the STAGGER Concepts (Schlimmer & Granger,
1986), a synthetic problem, and two real-world problems-the problems of blasting
cap detection in X-ray images (Maloof & Michalski, 1997) and computer intrusion
detection (Maloof & Michalski, 1995)-experimental results showed a significant
reduction in the number of examples maintained during learning at the expense
of predictive accuracy on unseen test cases. AQ-PM also tracks drifting concepts
comparably to STAGGER (Schlimmer & Granger, 1986) and the FLORA systems
(Widmer & Kubat, 1996).
2. Partial Memory Learning
On-line learning systems must have a memory model that dictates how to treat
past training examples. Three possibilities exist (Reinke & Michalski, 1988):
1. full instance memory, in which the learner retains all past training examples
2. partial instance memory, in which it retains some of the past training
examples, and
3. no instance memory, in which it retains none.
Researchers have studied and described learning systems with each type of memory
model. For example, STAGGER (Schlimmer & Granger, 1986) and Winnow (Lit-
tlestone, 1991) are learning systems with no instance memory, while GEM (Reinke
& Michalski, 1988) and IB1 (Aha et al., 1991) are learners with full instance mem-
ory. Systems with partial instance memory appear to be the least studied, but
examples include LAIR (Elio & Watanabe, 1991), HILLARY (Iba, Woogulis, &
Langley, 1988), IB2 (Aha et al., 1991), DARLING (Salganicoff, 1993), AQ-PM
(Maloof & Michalski, 1995), FLORA2 (Widmer & Kubat, 1996), and MetaL(B)
(Widmer, 1997).
On-line learning systems must also have policies that deal with concept memory,
which refers to the store of concept descriptions. Researchers have investigated a
variety of strategies in conjunction with different models of instance memory. For
example, IB1 (Aha et al., 1991) maintains all past training examples but does not
store generalized concept descriptions. In contrast, GEM (Reinke & Michalski,
1988) keeps all past training examples in addition to a set of concept descriptions
in the form of rules. ID5 (Utgoff, 1988) and ITI (Utgoff, Berkman, & Clouse,
1997) store training examples at the leaf nodes of decision trees, so they are also
examples of systems with full instance memory. Actually, ID5 stores a subset of
an example's attribute values at the leaves and is an interesting special case of full
instance memory. Finally, as an example of a system with no instance memory, ID4
(Schlimmer & Fisher, 1986) uses a new training example to incrementally refine a
decision tree before discarding the instance.
For systems with concept memory, learning can occur either in an incremental
mode or in a temporal batch mode. Incremental learners modify or adjust their
current concept descriptions using new examples in the input stream. If the learner
also maintains instance memory, then it uses these examples to augment those
arriving from the environment. FLORA2, FLORA3, and FLORA4 (Widmer &
Kubat, 1996) are examples of systems that learn incrementally with the aid of
partial instance memory.
Temporal batch learners, on the other hand, replace concept descriptions with
new ones induced from new training examples in the input stream and any held in
instance memory. DARLING (Salganicoff, 1993) and AQ-PM are examples of temporal
batch learners with partial instance memory. Any batch learning algorithm,
such as C4.5 (Quinlan, 1993) or CN2 (Clark & Niblett, 1989), can be used in conjunction
with full or no instance memory. However, this choice depends greatly on
the problem at hand. Figure 1 displays a classification of selected learning systems
based on concept memory and the various types of instance memory.
Having described the role of instance and concept memory in learning, we will
now discuss partial instance memory learning systems that have appeared in the
literature. In the following sections, we will focus on learning systems with instance
memory. Thus, for the sake of brevity, we will drop the term instance when referring
to such systems. For example, we will use the term partial memory to mean "partial
instance memory."
2.1. A Survey of Partial Memory Learning Systems
LAIR (Elio & Watanabe, 1991) appears to be one of the
first partial memory systems. In some sense, it has a minimal partial memory model
because the system keeps only the first positive example. Consequently, it always
learns from the one positive example in partial memory and the arriving training
examples.
Figure 1. Learning systems classified by concept and instance memory.
HILLARY (Iba et al., 1988) maintains a collection of recent negative examples in
partial instance memory. Positive examples may be added to a concept description
as disjuncts but are generalized in subsequent learning steps. HILLARY retains
negative examples if no concept description covers them; otherwise, it specializes
the concept description. Negative examples that are retained may be forgotten
later if they are covered by a positive concept description.
IB2 (Aha et al., 1991), an instance-based learning method, uses a scheme that,
like AQ-PM, keeps a nonconsecutive sequence of training examples in memory.
When IB2 receives a new instance, it classifies the instance using the examples
currently held in memory. If the classification is correct, the instance is discarded.
Conversely, if the classification is incorrect, the instance is retained. The intuition
behind this is that if an instance is correctly classified, then we gain nothing by
keeping it. This scheme tends to retain training examples that lie at the boundaries
of concepts. IB3 extends IB2 by adding mechanisms that cope with noise.
DARLING (Salganicoff, 1993) uses a proximity-based forgetting function, as opposed
to a time-based or frequency-based function, in which the algorithm initializes
the weight of a new example to one and decays the weights of examples within a
neighborhood of the new example. When an example's weight falls below a thresh-
old, it is removed. DARLING is also an example of a partial memory learning
system, since it forgets examples and maintains only a portion of the past training
examples.
The FLORA2 system (Widmer & Kubat, 1996) selects a consecutive sequence
of training examples from the input stream and uses a time-based scheme to
forget those examples in partial memory that are older than a threshold, which
is set adaptively. This system was designed to track drifting concepts, so during
periods when the system is performing well, it increases the size of the window
and keeps more examples. If there is a change in performance, presumably due to
some change in the target concepts, the system reduces the size of the window and
forgets the old examples to accommodate the new examples from the new target
concept. As the system's concept descriptions begin to converge toward the target
concepts, the size of the window increases, as does the number of training examples
maintained in partial memory.
FLORA3 extends FLORA2 by adding mechanisms to cope with changes in con-
text. The change of seasons, for instance, is a changing context, and the concept of
warm is different for summer and for winter. Temperature is the contextual variable
that governs which warm concept is appropriate. FLORA4 extends FLORA3 by
adding mechanisms for coping with noise, similar to those used in IB3 (Aha et al.,
1991).
Finally, the MetaL(B) and MetaL(IB) systems (Widmer, 1997) are based on the
naive Bayes and instance-based learning algorithms, respectively. These systems,
like FLORA3 (Widmer & Kubat, 1996), can cope with changes in context and use
partial memory mechanisms that maintain a linear sequence of training examples,
but over a fixed window of time. When the algorithm identifies the context, it
uses only those examples in the window relevant for that context. MetaL(IB)
uses additional mechanisms, such as exemplar selection and exemplar weighting, to
further concentrate on the relevant examples in the window.
The FAVORIT system (Krizakova & Kubat, 1992; Kubat & Krizakova, 1992),
which extends UNIMEM (Lebowitz, 1987), uses mechanisms for aging and forgetting
nodes in a decision tree. Although FAVORIT has no instance memory, we
include this discussion because aging and forgetting mechanisms are important for
partial memory learners, and because this system uses a third type of forgetting:
frequency-based forgetting.
FAVORIT uses incoming training examples to add nodes and to strengthen existing
nodes in a decision tree. Over time, aging mechanisms gradually weaken the
strengths of nodes. If incoming training examples do not reinforce a node's presence
in the tree, then the node's score decays until it falls below a threshold. At this
point, the algorithm forgets, or removes, the node. Conversely, if incoming training
examples continue to reinforce and revise the node, its score increases. If the score
surpasses an upper threshold, then the node's score is fixed and remains so.
2.2. A General Framework for Partial Memory Learning
Based on an analysis of these systems and on our design of AQ-PM, we developed a
general algorithm for inductive learning with partial instance memory, presented in
1. Learn-Partial-Memory(Data_t, for t = 1, ..., n)
2.   Concepts_0 = ∅;
3.   PartialMemory_0 = ∅;
4.   for t = 1 to n do
5.     Missed_t = Find-Missed-Examples(Concepts_{t-1}, Data_t);
6.     TrainingSet_t = PartialMemory_{t-1} ∪ Missed_t;
7.     Concepts_t = Learn(TrainingSet_t, Concepts_{t-1});
8.     PartialMemory_t = Select-Examples(TrainingSet_t, Concepts_t);
9.     PartialMemory_t = Maintain-Examples(PartialMemory_t, Concepts_t);
10.    end;
11. end. /* Learn-Partial-Memory */
Table 1. Algorithm for partial memory learning.
table 1. The algorithm begins with a data source that supplies training examples
distributed over time, represented by Data t , where t is a temporal counter. We
generalize the usual assumption that a single instance arrives at a time by placing
no restrictions on the cardinality of Data t , allowing it to consist of zero or more
training examples. This criterion is important because it ultimately determines the
structure of time for the learner. 1 By allowing Data t to be empty, the learner can
track the passage of time, since the passage of time is no longer associated with the
explicit arrival of training examples. By allowing Data t to consist of one or more
training examples, the learner can model arbitrary periods of time (e.g., days and
weeks) without requiring that a specific number of training examples arrive during
that interval. Intuitively, there may be a day when the system learns one thing, but
simply because it learns something else does not mean that another day passed.
Initially, the learner begins with no concepts and no training examples in partial
memory (steps 2 and 3), although it may possess an arbitrary amount of background
knowledge. For the first learning step (t = 1), the partial memory learner behaves
like a batch learning system. Since it has no concepts and no examples in partial
memory, the training set consists of all examples in Data 1 . It uses this set to induce
the initial concept descriptions (step 7). Subsequently, the system must determine
which of the training examples to retain in partial memory (steps 8 and 9).
In subsequent time steps (t > 1), the system begins by determining which of the
new training examples it misclassifies (step 5). The system uses these examples and
the examples in partial instance memory to learn new concept descriptions (step 7).
As we have seen in the review of related systems, there are several ways to accomplish
this. The system could simply memorize the new examples in the training set.
It could also induce new concept descriptions from these examples. And finally, it
could use the examples in the training set to modify or alter its existing concept
descriptions to form new concept descriptions.
The precise way in which a particular learner determines misclassified examples
(step 5), learns (step 7), selects examples to retain (step 8), and maintains those
examples (step depends on the concept description language, the learning meth-
ods employed, and the task at hand. Therefore, to ground further discussion, we
will describe the AQ-PM learning system.
3. Description of the AQ-PM Learning System
AQ-PM is an on-line learning system that maintains a partial memory of past training
examples. To implement AQ-PM, we extended the AQ-15c inductive learning
system (Wnek et al., 1995), so we will begin by describing this system before delving
into the details of AQ-PM.
AQ-15c represents training examples using a restricted version of the attributional
language VL 1 (Michalski, 1980). Rule conditions are of the form
'[' ⟨attribute⟩ '=' ⟨reference⟩ ']',
where ⟨attribute⟩ is an attribute used to represent domain objects, and ⟨reference⟩
is a list of attribute values. A rule condition is true if the attribute value of the
instance to which the condition is matched is in the ⟨reference⟩. Decision rules
are of the form D ⇐ C,
where D is an expression in the form of a rule condition that assigns a decision to
the decision variable, ⇐ is an implication operator, and C is a conjunction of rule
conditions. If all of the conditions in the conjunction are true, then the expression D
is evaluated and returned as the decision. We can also represent training instances
in VL 1 by restricting the cardinality of each reference to one. That is, we can view
training instances as VL 1 rules in which all conditions have references consisting of
single values.
The performance element of AQ-15c consists of a routine that flexibly matches
instances with VL 1 decision rules. Decision rules carve out regions in the representation
space, leaving some of the space uncovered. If an instance falls into an
uncovered region of the space, then, using strict matching technique, the system
would classify the instance as unknown, which is important for some applications.
Flexible matching involves computing the degree of match between the instance and
each decision rule. We can compute this metric in a variety of ways, but, for the
experiments discussed here, we computed the degree of match as follows. 2 For each
decision class D_i, consisting of n conjunctions C_j, the degree of match for an
instance e is given by

    DM(e, D_i) = max_{j = 1, ..., n} ( α_ij / β_ij ),                 (1)

where α_ij is the number of conditions in C_j satisfied by the instance, and β_ij is the
total number of conditions in C_j.
This measure yields a number in the range [0, 1] and expresses the proportion of the
conditions of a rule an instance matches. A value of zero means there is no match,
and a value of one means there is a complete match. The flexible matching routine
returns as the decision the label of the class with the highest degree of match. If
the degree of match falls below a certain threshold, then the routine may report
"unknown" or "no match".
To learn a set of decision rules, AQ-15c uses the AQ algorithm (Michalski, 1969),
a covering algorithm. Briefly, AQ randomly selects a positive training example,
known as the seed. The algorithm generalizes the seed as much as possible, given
the constraints imposed by the negative examples, producing a decision rule. In the
default mode of operation, the positive training examples covered by the rule are
removed from further consideration, and this process repeats using the remaining
positive examples until all are covered.
To implement AQ-PM, we extended AQ-15c by incorporating the features outlined
in the partial memory algorithm in table 1. AQ-PM finds misclassified training
examples by flexibly matching the current set of decision rules with the examples in
Data t (step 5). These "missed" examples are grouped with the examples currently
held in partial memory (step 6) and passed to the learning algorithm (step 7).
Like AQ-15c, AQ-PM uses the AQ algorithm to induce a set of decision rules from
training examples, meaning that AQ-PM operates in a temporal batch mode. To
form the new contents of partial memory (step 8), AQ-PM selects examples from
the current training set using syntactically modified characteristic decision rules
derived from the new concept descriptions, which we discuss further in section 3.1.
Finally, AQ-PM may use a variety of maintenance policies (step 9), like time-based
forgetting, aging, and inductive support, which are activated by setting parameters.
3.1. Selecting Examples
One of the key issues for partial memory learners is deciding which of the new
training examples to select and retain. Mechanisms that maintain these examples
are also important because some of the examples held in partial memory may no
longer be useful. This could be due to the fact that concepts changed or drifted,
or that what the system initially thought was crucial about a concept is no longer
important to represent explicitly, since the current concept descriptions implicitly
capture this information.
Returning to AQ-PM, we used a scheme that selects the training examples that lie
on the boundaries of generalized concept descriptions. We will call these examples
extreme examples. Each AQ-PM decision rule is an axis-parallel hyper-rectangle in
discrete n-dimensional space, where n is the number of attributes used to represent
domain objects. Therefore, the extreme examples could be those that lie on the
surfaces, the edges, or the corners of the hyper-rectangle covering them. For this
study, we chose the middle ground and retained those examples that lay on the
edges of the hyper-rectangle, although we have considered and implemented the
other schemes for retaining examples (Maloof, 1996).
Referring to figure 2, we see a portion of a discrete version of the iris data set
(Fisher, 1936). We took the original data set from the UCI Machine Learning
Repository (Blake, Keogh, & Merz, 1998) and produced a discrete version using
the SCALE implementation (Bloedorn, Wnek, Michalski, & Kaufman, 1993) of
the ChiMerge algorithm (Kerber, 1992).
Figure 2. Visualization of the setosa and versicolor training examples.
Figure 3. Visualization of the setosa and versicolor concept descriptions with overlain training examples.
Shown are examples of the versicolor and
setosa classes with each example represented by four attributes: petal length (pl),
petal width (pw), sepal length (sl), and sepal width (sw).
To find extreme or boundary training examples, AQ-PM uses characteristic decision
rules, which specify the common attributes of domain objects from the same
class (Michalski, 1980). These rules consist of all the domain attributes and their
values for the objects represented in the training set, and form the tightest possible
hyper-rectangle around a cluster of examples. Returning to our example, figure 3
shows the characteristic rules induced from the training examples pictured in figure 2.
Figure 4. Visualization of the setosa and versicolor extreme examples.
AQ-PM syntactically modifies the set of characteristic rules so they will match
examples that lie on their boundaries and then uses a strict matching technique to
select the extreme examples. Although AQ-PM uses characteristic rules to select
extreme examples, it can use other types of decision rules (e.g., discriminant rules)
for classification. Figure 4 shows the examples retained by the selection algorithm,
which are those examples that lie on the edges of the hyper-rectangles expressed
by characteristic decision rules.
Theorem 1 states the upper bound for the number of examples retained by AQ-
PM and its lesioned counterpart. The lesioned version of AQ-PM, which we describe
formally in the next section, is equivalent to a temporal batch learning system with
full instance memory. For the best case, the partial memory learner will retain
fewer training examples than the lesioned counterpart by a multiplicative factor.
For the worst case, or the lower bound, the number of examples maintained by the
partial memory learner will be equal to that of the lesioned learner. This follows
from the proof of Theorem 1 and occurs when the training set consists only of
examples that lie on the edges of a characteristic concept description.
Theorem 1 For the characteristic decision rule D ⇐ C induced from training
examples drawn from an n-dimensional discrete representation space, the number
of training examples retained by the partial memory learner is

    2^{n-1} Σ_{k=1}^{n} ( |reference_k| − 2 ) + 2^n,

while its lesioned counterpart will retain

    Π_{k=1}^{n} |reference_k|.
Proof: Let D ⇐ C be a characteristic decision rule induced from training examples
drawn from an n-dimensional discrete representation space Ω. Let c_k be the kth
condition in C. By definition, the following three are numerically equivalent:
1. The dimensionality n of Ω.
2. The number of conditions c ∈ C.
3. The number of attributes forming Ω.
For the partial memory learner, c_k expresses the kth dimension in the hyper-rectangle
and will match |reference_k| training examples along each edge of the kth
dimension. Furthermore, c_k corresponds to 2^{n-1} edges in the kth dimension of the
hyper-rectangle realized by D ⇐ C. Therefore, the number of training examples
matched by c_k is

    2^{n-1} |reference_k|.

If we were to compute this number for k = 1, ..., n, we would overcount the
training examples that lie at the corners of the hyper-rectangle. Therefore, we must
subtract the two training examples that lie at the endpoints of each edge of the
hyper-rectangle, yielding

    2^{n-1} ( |reference_k| − 2 ).

But now this undercounts the number of training examples because it excludes all
of the training examples that lie at the corners. Since there are 2^n corners in an
n-dimensional hyper-rectangle, the total number of examples matched is

    2^{n-1} Σ_{k=1}^{n} ( |reference_k| − 2 ) + 2^n.

For the lesioned learner, each attribute value of a training example will map to
a corresponding value in a condition c_k, by definition of a characteristic concept
description. For a set of training examples, each attribute will result in a condition
c_k such that the number of attribute values in the condition's reference is equal to
the number of unique values the attribute takes. Therefore, the number of training
examples maintained by the lesioned learner is equal to

    Π_{k=1}^{n} |reference_k|.
1. AQ-BL(Data_t, for t = 1, ..., n)
2.   Concepts_0 = ∅;
3.   TrainingSet_0 = ∅;
4.   for t = 1 to n do
5.     TrainingSet_t = TrainingSet_{t-1} ∪ Data_t;
6.     Concepts_t = Learn(TrainingSet_t);
7.   end;
8. end. /* AQ-BL */
Table 2. Algorithm for the lesioned version of AQ-PM, AQ-Baseline (AQ-BL).
3.2. Forgetting Mechanisms
Forgetting mechanisms are important for partial memory learners for two reasons.
First, if the learner selects examples that lie on the boundaries of concept descrip-
tions, as AQ-PM does, and these boundaries change, then there is no reason to
retain the old boundary examples. The new extreme examples do the important
work of enforcing the concept boundary, so the learner can forget the old ones.
Second, if the learner must deal with concept drift, then forgetting mechanisms
are crucial for removing irrelevant and outdated examples held in partial memory.
As we will see in the experimental section, when concepts change suddenly, the
learner must cope with the examples held in partial memory from the previous
target concept. In the context of the new target concept, many of these examples
will be contradictory, and forgetting them is imperative.
In AQ-PM, there are two types of forgetting: explicit and implicit. Explicit
forgetting occurs when examples in partial memory meet specific, user-defined cri-
teria. In the current implementation, AQ-PM uses a time-based forgetting function
to remove examples from partial memory that are older than a certain age.
Implicit forgetting occurs when examples in partial memory are evaluated and
deemed useless because they no longer enforce concept boundaries. When computing
partial memory, the basic algorithm (table 1) evaluates the training examples
currently held in partial memory and those misclassified by the current concept
descriptions (step 8). Consequently, it repeatedly evaluates the extreme examples
and determines if they still fall on a concept boundary, which gives rise to an implicit
forgetting process. That is, if the learning algorithm generalizes a concept
description such that a particular extreme example no longer lies on the concept
boundary, then it forgets the example. We call this an implicit forgetting process
because there is no explicit criterion for removing examples (e.g., remove examples
older than fifty time steps).
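Both mechanisms can be expressed in a single maintenance step, sketched below in Python (our illustration; on_boundary and the fifty-step age threshold stand in for AQ-PM's actual criteria).

def update_partial_memory(partial_memory, misclassified, concepts,
                          now, on_boundary, max_age=50):
    # implicit forgetting: drop examples that no longer lie on a
    # concept boundary; explicit forgetting: drop examples older
    # than max_age time steps
    kept = []
    for ex in partial_memory + misclassified:
        if not on_boundary(ex, concepts):
            continue
        if now - ex["time"] > max_age:
            continue
        kept.append(ex)
    return kept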
Figure 5. Visualization of the STAGGER Concepts in the Size x Color x Shape representation space: (a) target concept for time steps 1-39; (b) target concept for time steps 40-79; (c) target concept for time steps 80-120.
4. Experimental Results
In this section, we present a series of experimental results from a lesion study (Ki-
bler & Langley, 1990), in which we used AQ-PM for three problems. To produce
the lesioned version of AQ-PM, we simply disabled its partial memory mechanisms,
resulting in a system equivalent to a temporal batch learner with full instance mem-
ory. We present this learner formally in table 2 and will refer to it as AQ-Baseline
(AQ-BL). For the sake of comparison, we also included IB2 (Aha et al., 1991), which
is an instance-based learner with a partial memory model.
The first problem, a synthetic problem, is referred to as the "STAGGER Con-
cepts" (Schlimmer & Granger, 1986). It has become a standard benchmark for
testing learning algorithms that track concept drift. We derived the remaining two
data sets from real-world problems. The first of these entails detecting blasting
caps in X-ray images of airport luggage (Maloof & Michalski, 1997), and the second
involves using learned profiles of computing behavior for intrusion detection
(Maloof & Michalski, 1995). We chose these real-world problems because they
require on-line learning and likely involve concepts that change over time. For ex-
ample, computing behavior changes as individuals move from project to project
or from semester to semester. The appearance of visual objects can also change
due to deformations of the objects or to changes in the environment. For these ex-
periments, the independent variable is the learning algorithm, and the dependent
variables are predictive accuracy and the number of training examples maintained.
For both of these measures, we computed 95% confidence intervals, which are also
presented. Detailed results for learning time and concept complexity for these and
other problems can be found elsewhere (Maloof, 1996).
Figure 6. Predictive accuracy for AQ-PM, AQ-BL, and IB2 for the STAGGER Concepts (predictive accuracy vs. time step).
4.1. The STAGGER Concepts
The STAGGER Concepts (Schlimmer & Granger, 1986) is a synthetic problem in
which the target concept changes over time. Three attributes describe domain ob-
jects: size, taking on values small, medium, and large; color, taking on values red,
green, and blue; and, shape, taking on values circle, triangle, and rectangle. Conse-
quently, there are 27 possible object descriptions (i.e., events) in the representation
space. The presentation of training examples lasted for 120 time steps with the
target concept changing every 40 steps. The target concept for the first 39 steps
was [size = small] and [color = red]. For the next 40 time steps, the target concept
was [color = green] or [shape = circle]. And for the final 40 time steps, the target
concept was [size = medium or large]. The visualization of these target concepts
appears in figure 5.
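For concreteness, the three target concepts can be written as predicates over the attribute triple; the sketch below (ours, following the commonly used STAGGER benchmark definition) also generates random labeled examples.

import random

SIZES = ["small", "medium", "large"]
COLORS = ["red", "green", "blue"]
SHAPES = ["circle", "triangle", "rectangle"]

def target(step, size, color, shape):
    if step <= 39:                                    # first target concept
        return size == "small" and color == "red"
    if step <= 79:                                    # second target concept
        return color == "green" or shape == "circle"
    return size in ("medium", "large")                # third target concept

def random_example(step):
    size, color, shape = (random.choice(SIZES), random.choice(COLORS),
                          random.choice(SHAPES))
    return (size, color, shape), target(step, size, color, shape)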
At each time step, a single training example and 100 testing examples were generated
randomly. 3 For the results presented, we conducted 60 learning runs using
IB2, AQ-PM, and AQ-BL, the lesioned version of AQ-PM.
Referring to figure 6, we see the predictive accuracy results for IB2, AQ-PM, and
AQ-BL for the STAGGER Concepts. IB2 performed poorly on the first target concept
and worse on the final two (53±2.7% and 62±4.0%, respectively).
Conversely, AQ-PM and AQ-BL achieved high predictive accuracies for the first
target concept (99±1.0% and 100±0.0%, respectively).
concept changed at time step 40, AQ-BL was never able to match the partial memory
learner's predictive accuracy because the former was burdened with examples
irrelevant to the new target concept. This experiment illustrates the importance
of forgetting mechanisms. AQ-PM was less burdened by past examples because
it kept fewer examples in memory and forgot those held in memory after a fixed
period of time. AQ-PM's predictive accuracy on the second target concept was
89±3.38%, while AQ-BL's was 69±3.0%. For the third target concept, AQ-PM
achieved 96±1.8% predictive accuracy, while AQ-BL achieved 71±3.82%.
Figure 7. Memory requirements for AQ-PM, AQ-BL, and IB2 for the STAGGER Concepts (number of examples maintained vs. time step).
The predictive accuracy results for AQ-PM are comparable to those of STAGGER
(Schlimmer & Granger, 1986) and of the FLORA systems (Widmer & Kubat,
1996) with the following exceptions. On the first target concept, AQ-PM did not
converge as quickly as the FLORA systems but ultimately achieved similar predictive
accuracy. On the second target concept, AQ-PM's convergence was like
that of the FLORA systems, but it performed about 10% worse on the test cases.
Performance (i.e., slope and asymptote) on the third target concept was similar.
Turning to memory requirements, shown in figure 7, we see that the partial memory
learners, AQ-PM and IB2, maintained far fewer training examples than AQ-BL.
Without the partial memory mechanisms, the baseline learner, AQ-BL, simply accumulated
more and more examples. Intuitively, this is an inefficient and inadequate
policy when learning changing concepts. Yet, as IB2's predictive accuracy showed,
selection mechanisms alone are not enough.
Taking a closer look at the memory requirements for AQ-PM and IB2 (figure 8),
we see that the number of examples each learner maintained increases because of
example selection mechanisms. Overall, IB2 maintained fewer training examples
than AQ-PM, but this savings cannot mitigate its poor predictive accuracy. During
the first 40 time steps, for instance, both learners accumulated examples. As
each achieved acceptable predictive accuracies, the number of examples maintained
stabilized. Once the concept changed at time step 40, both learners increased the
number of examples held in partial memory to retain more information about the
new concept. The increases in IB2's memory requirements occurred because it adds
new examples only if they are misclassified by the examples currently held in mem-
ory. When the target concept changed, most of the new examples were misclassified
and, consequently, added to memory. Because IB2 kept all of the examples related
to the previous target concept, predictive accuracy suffered on this and the final
target concept.
Figure 8. Memory requirements for AQ-PM and IB2 for the STAGGER Concepts (number of examples maintained vs. time step).
Figure 9. Example of an X-ray image of airport luggage used for experimentation, with blasting caps indicated.
Although AQ-PM also increased the number of examples held in
memory, it used an explicit forgetting process to remove outdated and irrelevant
training examples after a fixed period of fifty time steps, which proved crucial for
learning these concepts.
We cannot compare AQ-PM's memory requirements to STAGGER's, since the
latter does not maintain past training examples, but we can indirectly compare it
to one run of FLORA2 (Widmer & Kubat, 1996). Recall that the size of the representation
space for the STAGGER problem is only 27 examples. At time step 50,
FLORA2 maintained about 24 examples, which is 89% of the representation space.
At the same time step, AQ-PM maintained only 10.11 examples, on average, which
is only 37% of the representation space. Over the entire learning run, FLORA2
kept an average of 15 examples, which is 56% of representation space. AQ-PM, on
the other hand, maintained only 6.6 examples, on average, which is only 24% of
the space.
4.2. Blasting Cap Detection Problem
The blasting cap detection problem involves detecting blasting caps in X-ray images
of airport luggage (Maloof & Michalski, 1997). The 66 training examples for this
experiment were derived from 5 images that varied in the amount of clutter in
the luggage and in the position of the bag relative to the X-ray source. Figure 9
shows a typical X-ray image from the collection. Positive and negative examples of
blasting caps were represented using 27 intensity, shape, and positional attributes
(Maloof, Duric, Michalski, & Rosenfeld, 1996). We computed eleven attributes for
the blob produced by the heavy metal explosive near the center of the blasting cap.
We also computed these same eleven attributes for the rectangular region produced
by the blasting cap's metal tube. Finally, we used five attributes to capture the
spatial relationship between the blob and the rectangular region. These real-valued
attributes were scaled and discretized using the SCALE implementation (Bloedorn
et al., 1993) of the ChiMerge algorithm (Kerber, 1992). 4 The 15 most relevant
attributes were then selected using the PROMISE measure (Baim, 1988). The
resulting attributes for the blob were maximum intensity, average intensity, length
of a bounding rectangle, and three measures of compactness. For the rectangular
region, the selected attributes were length, width, area, standard deviation of the
intensity, and three measures of compactness. And finally, the remaining spatial
attributes were the distance between the centroids of the blob and rectangle, and
the component of this distance that was parallel to the major axis of a fitted ellipse.
We randomly set aside 10% of the original data as a testing set. The remaining
90% was partitioned randomly and evenly into 10 data sets (i.e., Data_t, for t = 1, ..., 10).
We then conducted an experimental comparison between IB2, AQ-PM,
and the lesioned version of the system, AQ-BL. For each learning run, we presented
the learners with Data t and tested the resulting concept descriptions on the testing
set, making note of predictive accuracy and memory requirements. We conducted
100 learning runs, in which we randomly generated a new test set and new data
sets Data_t, for t = 1, ..., 10, averaging the performance metrics over these 100 runs.
Figure 10 shows the predictive accuracy results for the blasting cap detection
problem. AQ-PM's predictive accuracy was consistently lower than AQ-BL's, which
learned from all of the available training data at each time step. When learning
stopped at time step 10, AQ-PM's predictive accuracy was 7% less than that of the
lesioned learner, AQ-BL (81±3.4% vs. 88±2.8%). IB2 did not perform well on this
task and ultimately achieved a predictive accuracy of 73±3.8%.
A notable decrease in memory requirements has to be measured against AQ-PM's
loss in predictive accuracy, as shown by figure 11. When learning ceased at time
step 10, the baseline learner maintained the entire training set of 61±0.0 examples,
while the partial memory learner kept 18±0.5 training examples, on average, which
is roughly 30% of the total number of examples. IB2 retained slightly more examples
in partial memory than AQ-PM: 25±0.6.
Figure 10. Predictive accuracy for AQ-PM, AQ-BL, and IB2 for the blasting cap detection problem (predictive accuracy vs. time step t).
Figure 11. Memory requirements for AQ-PM, AQ-BL, and IB2 for the blasting cap detection problem (number of examples maintained vs. time step t).
4.3. Computer Intrusion Detection Problem
For the computer intrusion detection problem, we must learn profiles of users'
computing behavior and use these profiles to authenticate future behavior. Learning
descriptions of intrusion behavior is problematic, since adequate training data is
difficult, if not impossible, to collect. Consequently, we chose to learn profiles for
each user, assuming that misclassification means that a user's profile is inadequate
or that an unauthorized person is masquerading as the user in question. Most
existing intrusion detection systems make this assumption.
The data for this experiment were derived from over 11,200 audit records collected
for 9 users over a 3-week period.
Figure 12. Predictive accuracy for AQ-PM, AQ-BL, and IB2 for the intrusion detection problem (predictive accuracy vs. time step t).
We first parsed each user's computing activity from the output of the UNIX acctcom command (Frisch, 1995) into sessions by
segmenting at logouts and at periods of idle time of twenty minutes or longer.
This resulted in 239 training examples. We then selected seven numeric audit
metrics: CPU time, real time, user time, characters transferred, blocks read,
CPU factor, and hog factor. Next, we represented each of the seven numeric
measures for a session, which is a time series, using the maximum, minimum, and
average values, following Davis (Davis, 1981). These 21 real and integer attributes
were scaled and discretized using the SCALE implementation (Bloedorn et al.,
1993) of the ChiMerge algorithm (Kerber, 1992). Finally, using the PROMISE
measure (Baim, 1988), we selected the 13 most relevant attributes: average and
maximum real time, average and maximum system time, average and maximum
user time, minimum and average characters transferred, average blocks transferred,
average and maximum CPU factor, and average and maximum hog factor.
The experimental design for this problem was identical to the one we used for the
blasting cap problem. Referring to figure 12, we can see the predictive accuracy
results for AQ-PM, AQ-BL, and IB2 for the intrusion detection problem. AQ-PM's
predictive accuracy was again slightly lower than AQ-BL's. When learning stopped
at time step 10, AQ-PM's accuracy was 88±1.6%, while AQ-BL's was 93±1.2%, a
difference of 5%. IB2 fared much better on this problem than on previous ones.
When learning ceased, IB2's predictive accuracy was slightly better than
AQ-PM's: 89±1.3%, although this result was not statistically significant (p < .05).
Figure 13 shows the memory requirements for each learner for this problem.
AQ-PM maintained notably fewer training examples than its lesioned counterpart.
When learning ceased at time step 10, the baseline learner, AQ-BL, maintained
the entire training set, while AQ-PM maintained 64±1.0 training examples, which is
roughly 29% of the total number of examples. IB2 maintained even fewer examples
than AQ-PM. When learning stopped, IB2 held roughly 52±0.8 examples in partial
memory, which was slightly fewer than the number held by AQ-PM.
Figure 13. Memory requirements for AQ-PM, AQ-BL, and IB2 for the intrusion detection problem (number of examples maintained vs. time step t).
4.4. Summary
The lesion study comparing AQ-PM and AQ-BL suggested that the mechanisms for
selecting extreme examples notably reduced the number of instances maintained in
partial memory at the expense of predictive accuracy. When concepts changed, AQ-
PM relied on forgetting mechanisms to remove outdated and irrelevant examples
held in memory. Recall that AQ-PM can use two types of forgetting: implicit and
explicit. Explicit mechanisms proved crucial for the STAGGER Concepts, but the
implicit forgetting mechanisms, in general, had little effect, an issue we explore
further in the next section.
The direct comparison to IB2 using the STAGGER Concepts further illustrated
the importance of forgetting policies, as it was apparent that the example selection
mechanisms alone did not guarantee acceptable predictive accuracy when concepts
changed. On the other hand, when concepts were stable, as was the case with
the computer intrusion detection and blasting cap detection problems, forgetting
mechanisms played a less important role than the selection mechanisms. Moreover,
we predict that the differences in performance between AQ-PM and IB2 on these
problems were due to inductive bias rather than a limitation of IB2's example
selection method. This would explain why IB2 performed well on the intrusion
detection problem but performed poorly on the blasting cap detection problem.
Indeed, AQ-PM and IB2 used similar selection methods, and experimental results
showed that each maintained roughly the same number of examples in memory.
Regarding the indirect comparison to the FLORA systems, AQ-PM performed as
well on two of the three STAGGER Concepts, and it appears to have maintained
fewer training examples in partial memory. The difference in memory requirements
is due to how the learners selected examples from the input stream. The FLORA
systems kept a sequence of examples of varying length from the input stream, and,
as a result, partial memory likely contained duplicate examples. This would be
especially true for a problem like the STAGGER Concepts in which we randomly
draw 120 examples from a representation space consisting of 27 domain objects.
Conversely, AQ-PM retained only those examples that lay on the boundaries of
concept descriptions and, consequently, would not retain duplicate examples or
examples from the interior of the concept.
We claim that AQ-PM was able to achieve comparable accuracy while maintaining
fewer examples in partial memory because the selected examples enforced concept
boundaries and, hence, were of high utility. The two systems do use different
concept description languages: AQ-PM uses VL 1 , which is capable of representing
DNF concepts, whereas the FLORA systems use a conjunctive description language.
However, upon analyzing the STAGGER Concepts, we concluded that it is unlikely
that representational bias accounted for the differences or similarities in predictive
accuracy.
As we noted, AQ-PM did not fare as well as the FLORA systems on the second
of the three STAGGER Concepts. Transitioning concept descriptions from the
first target concept to the second is the most difficult because this transition involves
the most change in the representation space and the most
overlap between the old negative concept and the new positive concept, as depicted
in figure 5.
AQ-PM should have had an advantage over an incremental learning system in this
situation because it operates in a temporal batch mode. Since AQ-PM replaces old
concept descriptions with new ones, it would not be burdened by the information
about the old concepts encoded in the concept descriptions. But, because AQ-PM
operates in a temporal batch mode, the only cause for its fair performance on the
second concept is the examples held in partial instance memory.
As we have discussed, AQ-PM used a simple forgetting policy that removed examples
older than fifty time steps. The FLORA systems, on the other hand, used an
adaptive forgetting window, which, in this case, more efficiently discarded examples
after the concept changed and may account for the difference in performance on the
second concept. Making the transition between the second and third STAGGER
Concepts is easier than transitioning between the first and second because there
is more overlap between the old positive concept description and the new positive
concept description (see figure 5). AQ-PM's static forgetting policy worked better
during this transition than during the previous one, and the learner achieved
predictive accuracies that were comparable to the FLORA systems.
5. Discussion
Intelligent systems need induced hypotheses for reasoning because they generalize
the system's experiences. We anticipate that manipulating some concept descriptions
to cope with changing concepts will slow a system's reactivity. By keeping
extreme examples in addition to concept descriptions, the learner maintains a rough
approximation of the current concept descriptions and, consequently, is able to both
reason and react efficiently.
When learning stable concepts, we expect slight changes in the positions of concept
boundaries. The extreme examples, in this case, document the past and provide
stability. On the other hand, when examples arrive that radically change
concept boundaries, then the examples held in memory that no longer fall on concept
boundaries are removed and replaced with examples that do. This process
is actually happening in both situations, but to different degrees. The extreme
examples provide stability when it is needed. Yet, they do not hinder the learner
because forgetting mechanisms ensure that stability does not result in low reac-
tivity. For systems to succeed in nonstationary environments, they must find a
balance between stability and reactivity.
In the sections that follow, we examine a variety of issues related to this study
and, more globally, to partial memory learning and nonstationary concepts. In
particular, we examine experimental results from other aspects of our study (Mal-
oof, 1996), such as learning time, concept complexity, other methods of example
selection, and incremental learning. Then, after discussing some of the current
limitations of this work, we consider directions for the future.
5.1. Learning Time
The experimental results from the lesion study showed that the example selection
method greatly reduced the number of training examples maintained when compared
to the baseline learner. Because the number of training examples affects
run time of the algorithms investigated, reducing the number of training examples
maintained resulted in notable decreases in learning time. For the intrusion detection
problem, as an example, at time step 10, AQ-PM's learning time was 36.7
seconds, while AQ-BL's was 55.6 seconds, 5 meaning that AQ-BL was 52% slower
than AQ-PM for this problem.
5.2. Complexity of Concept Descriptions
We also examined complexity of induced concept descriptions in terms of conditions
and rules. AQ-PM produced concept descriptions that were as complex as or simpler
than those produced by AQ-BL. The degree to which descriptions induced by AQ-PM
were simpler was not as notable as the improvements in other measures, such as learning time and
memory requirements.
Table 3 shows decision rules from the intrusion detection problem that AQ-PM
induced for two computer users, daffy and coyote. 6 The first rule, for daffy, consists
of one condition involving the average system time attribute, which must fall in
the high range of [25352.53: : :63914.66]. 7 The class label "daffy" is assigned to the
decision variable if the average system time for one of daffy's sessions falls into
this range. Therefore, daffy's computing use is characterized by a considerable
consumption of system time.
The weights appearing at the end of the rules are strength measures. The t-
weight indicates how many total training examples the rule covers. The u-weight
indicates how many unique training examples the rule covers. Rules may overlap,
so two rules can cover the same training example.
Table 3. Examples of AQ-PM rules for daffy and coyote's computing behavior: the rule for daffy has (t-weight: 10, u-weight: 10); the two rules for coyote have (t-weight: 7) and (t-weight: 4).
The rule for daffy's computing
use is strong, since it alone covers all of the available training examples. The next
two rules characterize coyote's behavior, whose use of computing resources is low,
especially compared to daffy's.
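Given a rule's cover over the training examples, the two weights can be computed as below (our sketch; covers is a placeholder for VL1 rule matching).

def rule_weights(rule, rules, examples, covers):
    # t-weight: total training examples the rule covers;
    # u-weight: covered examples not covered by any other rule
    covered = [ex for ex in examples if covers(rule, ex)]
    unique = [ex for ex in covered
              if not any(covers(r, ex) for r in rules if r is not rule)]
    return len(covered), len(unique)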
5.3. Other Example Selection Methods
The selection method used for this study retained the examples that lay on the edges
of the hyper-rectangle expressed by a decision rule. We alluded to similar methods
that keep the examples that lie on the corners and surfaces of these hyper-rectangles.
Experimental results from a previous study (Maloof, 1996) for the blasting caps
and intrusion detection problems showed that keeping the examples that lie on the
corners of the hyper-rectangle, as opposed to those on the edges, resulted in slightly
lower predictive accuracy and slightly reduced memory requirements. We anticipate
that a method retaining the examples that lie on surfaces of the hyper-rectangle
will slightly improve predictive accuracy and slightly increase memory requirements
when compared to the edges method. From these results, we can conclude that as
AQ-PM keeps more and more examples in partial memory, its predictive accuracy
will converge to that of the full memory learner.
5.4. Adding Examples to Partial Memory
In this paper, we have discussed a reevaluation strategy for maintaining examples
in partial memory. Using this scheme, AQ-PM uses new concept descriptions to
test if the misclassified examples and all of the examples in partial memory lie on
concept boundaries. It retains those examples that do and removes those that do
not. And, as we discussed, this gives rise to an implicit forgetting process.
An alternative scheme is to accumulate examples by computing partial memory
using only the misclassified examples and by adding the resulting extreme examples
to those already in partial memory. Therefore, once an example is placed in partial
memory, it remains there until removed by an explicit forgetting process. For
the problems discussed here and elsewhere (Maloof, 1996), we did not see notable
differences in performance between the reevaluation policy and the accumulation
policy. For example, one would expect that the reevaluation policy would work best
for dynamic problems, like the STAGGER Concepts, and the accumulation policy
would work best for more stable problems, like the blasting cap problem. To date,
our experimental results have not supported this intuition.
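Schematically, the two policies differ only in which examples are passed to the extreme-example selector, as in the sketch below (select_extreme abstracts AQ-PM's boundary-based selection; the function names are ours).

def reevaluation_policy(partial_memory, misclassified, concepts, select_extreme):
    # re-test old and new examples together; old extreme examples that
    # no longer lie on a boundary are forgotten implicitly
    return select_extreme(partial_memory + misclassified, concepts)

def accumulation_policy(partial_memory, misclassified, concepts, select_extreme):
    # evaluate only the new misclassified examples; examples already in
    # partial memory stay until removed by an explicit forgetting process
    return partial_memory + select_extreme(misclassified, concepts)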
5.5. Incremental Learning
In the basic algorithm, we used a temporal batch learning method (table 1, step
7). We have also examined variants using incremental learning algorithms (Maloof,
1996), meaning that the system learns new concept descriptions by modifying the
current set of descriptions using new training examples and those examples in partial
memory. We have investigated this notion using two incremental algorithms:
the GEM algorithm (Reinke & Michalski, 1988), a full instance memory technique,
and the AQ-11 algorithm (Michalski & Larson, 1983), a no instance memory tech-
nique. We chose these algorithms because their inductive biases are most similar to
that of AQ-PM: both use the VL 1 representation language (Michalski, 1980) and
the AQ induction algorithm (Michalski, 1969).
Experimental results for the computer intrusion detection and the blasting cap
detection problems show evidence that the incremental learning variants of AQ-PM
lose less predictive accuracy than AQ-PM using a temporal batch learning method.
We can infer that the incremental learning variants perform better because the
concepts themselves encode information that is lost when using a temporal batch
method. The incremental learning methods take advantage of this information,
whereas the temporal batch method does not. Moreover, we intend to evaluate
these incremental learning methods using the STAGGER problem to determine
how they perform. We may find that the incremental methods perform worse
because they encode too much information about the past and reduce the learner's
ability to react to changing environments.
5.6. Current Limitations
Many of the current limitations of the approach stem from assumptions the system
makes. For example, the system assumes the given representation space is adequate
for learning. That is, it is currently incapable of constructive induction (Michalski,
1983). Also, it assumes that the context in which training examples are presented
is stationary. Hence, it cannot learn contextual cues (Widmer, 1997). Although we
did not implement explicit mechanisms to handle noise, there has been work on such
mechanisms in contexts similar to these (Widmer & Kubat, 1996). In general, the
selection methodology works best for ordered attributes, taking advantage of their
inherent structure. Consequently, for purely nominal domains, the method selects
all training examples, since, for each training example, there exists a projection of
the representation space in which the example lies on a concept boundary.
5.7. Future Work
Much of the current research assumes that the representation space in which concepts
drift or contexts change is adequate for learning. If an environment is nonsta-
tionary, then the representation space itself could also change. Learners typically
detect concept change by a sudden drop in predictive accuracy. If the learner is
subsequently unable to achieve acceptable performance, then it may need to apply
constructive induction operators in an effort to improve the representation space for
learning. To this end, one may use a program that automatically invokes constructive
induction, like AQ-18 (Bloedorn & Michalski, 1998; Kaufman & Michalski,
1998).
Another interesting problem for future research is how to detect good and bad
types of change. Consider the problem of intrusion detection. We need systems
that are flexible enough to track changes in a user's behavior; otherwise, when
changes do occur, the system's false negative rate will increase. Yet, if intrusion
detection systems are too flexible, then they may perceive a cracker's behavior as a
change in the true user's behavior and adapt accordingly. We envision a two-layer
system that learns a historical profile of a user's computing behavior and learns
how that historical profile has changed over time. If a user's computing behavior
no longer matches the historical profile, then the system would determine if the
type of change that occurred is plausible for that user. If it is not, then the system
would issue an alert. Such systems should prove to be more robust and should
perform with lower false negative rates.
From the standpoint of the methodology, we would like to investigate policies
that let the learner function when instances arrive without feedback. Producing
decisions without feedback is not necessarily problematic, but, when feedback does
arrive after a period of time, the system may realize that many of its past decisions
were wrong. Naturally, the simplest policy is to forget past decisions, in which case
the learner would never realize that it had made mistakes. Certain applications,
like intrusion detection, require systems to be more accountable. However, even
though a system may remember past decisions, when it realizes that some were
wrong, perhaps it should only issue an alert. Alternatively, the system may seek
feedback for the events that led to the incorrect decisions and relearn from them.
From the perspective of the implementation, a fruitful exercise would be to implement
the example selection method using another concept representation, like
decision trees. There is nothing inherent to the method that limits it to decision
rules. We could apply the method to any symbolic representation that uses linear
attributes. We could also implement other example selection and maintenance
schemes as well as mechanisms for coping with noise and contextual changes, but
these latter areas, as we have commented, have been well-studied elsewhere.
Finally, there are several opportunities for additional experimental studies. Here,
we investigated concepts that change suddenly. Changes in concepts could also
occur more gradually. If we think of concepts as geometric objects in a space, then
they could change in shape, position, and size. Consequently, concepts could grow
(i.e., change in size, but not in position and shape), move (i.e., change in position,
but not in shape and size), deform (i.e., change in shape, but not in position and
size), and so on. Although synthetic data sets like these provide opportunities
to investigate specific research hypotheses, we are also interested in concept drift
in real-world applications like intrusion detection and agent applications (e.g., an
agent for prioritizing e-mail).
6. Conclusion
Partial memory learning systems select and maintain a portion of the past training
examples and use these examples for future learning episodes. In this paper, we
presented a selection method that uses extreme examples to enforce concept bound-
aries. The method extends previous work by using induced concept descriptions to
select a nonconsecutive sequence of examples from the input stream. Reevaluating
examples held in partial memory and removing them if they no longer enforce
concept boundaries results in an implicit forgetting process. This can be used in
conjunction with explicit forgetting mechanisms that remove examples satisfying
user-defined criteria. Experimental results from a lesion study suggested that the
method notably reduces memory requirements with small decreases in predictive
accuracy for two real-world problems, those of computer intrusion detection and
blasting cap detection in X-ray images. For the STAGGER problem, AQ-PM performed
comparably to STAGGER and the FLORA systems. Finally, a direct comparison
to IB2 revealed that AQ-PM provided comparable memory requirements
and often higher predictive accuracy for the problems considered.
Acknowledgments
We would like to thank Eric Bloedorn, Ren'ee Elio, Doug Fisher, Wayne Iba, Pat
Langley, and the anonymous reviewers, all of whom provided suggestions that improved
this work and earlier versions of this paper. We would also like to thank the
Department of Computer Science at Georgetown University, the Institute for the
Study of Learning and Expertise, and the Center for the Study of Language and
Information at Stanford University for their support of this work.
This research was conducted in the Machine Learning and Inference Laboratory
at George Mason University. The laboratory's research has been supported in part
by the National Science Foundation under grants IIS-9904078 and IRI-9510644, in
part by the Advanced Research Projects Agency under grant N00014-91-J-1854,
administered by the Office of Naval Research, under grant F49620-92-J-0549, administered
by the Air Force Office of Scientific Research, and in part by the Office
of Naval Research under grant N00014-91-J-1351.
Notes
1. The structure of time is not crucially important for this paper, but we do feel that this issue
warrants a more sophisticated treatment.
2. AQ-15c has other methods for computing the degree of match, but, based on empirical analysis,
we found that this method worked best for the problems in this study.
3. For the first time step, we generated two random examples, one for each class.
4. We ran IB2 using the unscaled, continuous data.
5. We conducted these experiments using a C implementation of AQ-PM running on a Sun
2.
6. Attribute values have been expressed using their original real ranges.
7. The units in this case are seconds.
--R
A method for attribute selection in inductive learning systems.
UCI repository of machine learning databases
The CN2 induction algorithm.
CONVART: a program for constructive induction on time dependent data.
Master's thesis
An incremental deductive strategy for controlling constructive induction in learning from examples.
The use of multiple measurements in taxonomic problems.
Essential system administration (Second
Trading simplicity and coverage in incremental concept learning.
ChiMerge: discretization of numeric attributes.
Machine learning as an experimental science.
FAVORIT: concept formation with ageing of knowledge.
Forgetting and aging of knowledge in concept formation.
Experiments with incremental concept formation: UNIMEM.
Redundant noisy attributes
Progressive partial memory learning.
A method for partial-memory incremental learning and its application to computer intrusion detection
Learning symbolic descriptions of shape for object recognition in X-ray images
On the quasi-minimal solution of the general covering problem
Pattern recognition as rule-guided inductive inference
A theory and methodology of inductive learning.
CA: Morgan Kaufmann.
Incremental generation of VL 1 hypotheses: the underlying methodology and the description of program AQ11 (Technical Report No.
Department of Computer Science
Incremental learning of concept descriptions: a method and experimental results.
A case study of incremental concept induction.
Beyond incremental processing: tracking concept drift.
ID5: an incremental ID3.
Decision tree induction based on efficient tree restructuring.
Guiding constructive induction for incremental learning from examples.
Tracking context changes through meta-learning
Learning in the presence of concept drift and hidden contexts.
Selective induction learning system AQ15c: the method and user's guide (Reports of the Machine Learning and Inference Laboratory No.
--TR
A Method for Attribute Selection in Inductive Learning Systems
Incremental learning of concept descriptions: A method and experimental results
Instance-Based Learning Algorithms
Redundant noisy attributes, attribute errors, and linear-threshold learning using winnow
Essential system administration
An Incremental Deductive Strategy for Controlling Constructive Induction in Learning from Examples
<italic>FAVORIT</italic>
Forgetting and aging of knowledge in concept formation
C4.5: programs for machine learning
Learning in the presence of concept drift and hidden contexts
Tracking Context Changes through Meta-Learning
Progressive partial memory learning
Data-Driven Constructive Induction
The CN2 Induction Algorithm
Experiments with Incremental Concept Formation
A Method for Partial-Memory Incremental Learning and its Application to Computer Intrusion Detection
--CTR
Chi-Chun Huang, A novel gray-based reduced NN classification method, Pattern Recognition, v.39 n.11, p.1979-1986, November, 2006
Steffen Lange , Gunter Grieser, Variants of iterative learning, Theoretical Computer Science, v.292 n.2, p.359-376, 27 January
Steffen Lange , Gunter Grieser, On the power of incremental learning, Theoretical Computer Science, v.288 n.2, p.277-307, 16 October 2002
Antonin Rozsypal , Miroslav Kubat, Association mining in time-varying domains, Intelligent Data Analysis, v.9 n.3, p.273-288, May 2005
Marcus A. Maloof , Ryszard S. Michalski, Incremental learning with partial instance memory, Artificial Intelligence, v.154 n.1-2, p.95-126, April 2004
Miquel Montaner , Beatriz López , Josep Lluís De La Rosa, A Taxonomy of Recommender Agents on the Internet, Artificial Intelligence Review, v.19 n.4, p.285-330, June | on-line concept learning;partial memory models;concept drift
360834 | Diamond Quorum Consensus for High Capacity and Efficiency in a Replicated Database System. | AbstractMany quorum consensus protocols have been proposed for the management of replicated data in a distributed environment. The advantages of a replicated database system over a non-replicated one include high availability and low response time. We note further that the multiple sites can act as multiple agents so that at any time, multiple requests can be handled in parallel. This feature leads to the desirable consequence of high workload capacity. In this paper, we define a new metric of read-capacity for this feature. We propose a new protocol called diamond quorum consensus which has two major properties that are superior to the previous protocols of majority, tree, grid, and hierarchical quorum consensus: (1) it has the highest read-capacity, and (2) it has the smallest optimal read quorum size of 2. We show that these two features are achievable without jeopardizing the availability. The small quorum size is a significant feature because it relates to the messaging cost. Little previous work on quorum consensus has discussed the handling of partition failure, which in many cases will depend on the quorum consensus protocol; we show how we can use the generalized virtual partition protocol to handle partition failure in the case of diamond quorum consensus. | Introduction
A replicated database system is built in a distributed environment with multiple
sites, in which copies of data are stored at multiple sites. The main motivations of a
replicated database are to improve the reliability and performance. By storing data
at multiple sites, the database system can continue to operate even if some sites
have failed. Also, multiple sites make it possible to support multiple concurrent
operations at different sets of sites.
A major problem with replicated data is that we need to ensure data consistency
in spite of concurrent operations. A reliable concurrency control protocol is necessary
to synchronize the user transactions in order to maintain data validity. For
example, two write operations from two different user transactions must not be allowed
to simultaneously update different copies of a data object. In order to achieve
this kind of synchronization among multiple copies, additional communication and
processing costs are incurred. The problem is how to perform this synchronization
in such a way that the cost is minimized. One well-studied approach for this
management of replicated data is to use certain sets of replicas, called read and
write quorums, for read and write operations, respectively. Any write quorum has
at least one copy in common with every read quorum and every write quorum. For
example, write quorums could be all sets containing a majority of copies [18]. For
better performance, some logical structure is imposed on the network, and the quorums
are chosen with these structures taken into consideration. Such logical structures
include the tree [2, 1, 3, 20] and grid [4, 11] structures. A geometric approach for
dealing with logical structures is proposed in [12]. A number of metrics have been
used for evaluating such a protocol; they include the following:
1. Availability
2. Quorum size (best case and worst case)
3. Is the algorithm fully distributed?
4. Cost of one node failure in the worst case
5. Message overhead
6. Communication delay
The first metric on availability has been studied extensively but its significance
has decreased as systems tend to be more and more reliable; a protocol needs only
to have high availability under a reasonably reliable environment. Metrics (2) to (6)
are listed in [10]. The quorum size is also a well-studied issue because it can
be related to the messaging cost. From [10], the √N algorithm in [13] is a fully
distributed algorithm that achieves the smallest quorum size. The protocol in [16]
has a bigger quorum size but remedies the availability and other problems of [13].
The third metric is whether all copies assume equal burden for synchronization;
for example, the tree quorum is not fully distributed because the nodes higher in
the tree are given more share of the burden in order to achieve a small quorum size.
However, we shall argue below that this metric is problematic. The fourth metric
is the impact that a single node failure has on the size of a quorum. For example,
for the √N algorithm of [13], if one of the sites in a chosen quorum fails, then in
the worst case, a new quorum must be formed with a totally different set of
nodes. The average message overhead of replica control protocols is studied in [17].
Communication delay is studied in [7, 8]. For communication overhead, both the
quorum size and the communication protocol are examined [9].
Other than availability, the above metrics try to ensure two important performance
criteria of a database system: fast response time and high throughput, even
when the system is under heavy workload. However, we argue here that while these
metrics are useful, they are not sufficient. Let us look at the second metric of quorum
size. A quorum size can be small, but if each request is targeted at the same
set of sites, these sites will become a bottleneck and when the system is busy, both
the throughput and response time of the system will become poor. The third metric
of even workload distribution is relevant; it is set to be a yes/no metric in [10]. An
even distribution of workload may not necessarily lead to good performance under
heavy workload. For example, the majority quorum consensus has even workload
distribution, but at any one time, only one operation can proceed, which means
that the performance cannot be enhanced by concurrent operations. Therefore,
we would need another metric for the performance under heavy workload. In this
paper, we introduce such a metric, which we shall call the read-capacity.
Let us consider a replicated database system. It is in a distributed environment
with multiple sites, where copies of a data object are kept at different sites. For quorum
consensus, when a read (write) operation is executed, it must obtain the consensus
from a read (write) quorum. A write quorum intersects each other write quorum
and each read quorum; therefore, at any time only one transaction can be writing
a logical data. A read quorum needs only to intersect a write quorum; therefore,
as long as two read quorums do not intersect each other, two read operations on
the same logical data can be executed concurrently. One may think that the read
quorums of two concurrent read operations can intersect each other, but when
we measure the throughput or response time, we must consider the amount of
work involved in a read operation for each copy. In particular, the messaging to
and from each site in a quorum will be a major overhead, and an intersection in
read quorums will create a bottleneck. This is why the metric of even workload
distribution is important. If two read quorums are disjoint, there will not be any
bottleneck for response time and throughput. In view of the above, we introduce
a metric here that measures the maximal number of read operations that can be
handled simultaneously, which is effectively the maximal number of disjoint read
quorums that can be formed. This metric is the read-capacity. In [14, 15, 19], there
is a metric called the load which roughly speaking measures the minimal load on
the busiest site. It has a similar flavor to the metric of read-capacity. However, [15]
considers only a collection of quorums every two of which intersect.
A protocol with a high read-capacity can handle heavy workload without much
deterioration in response time and system throughput. However, high read-capacity
does not guarantee competitive performance under light workload conditions and
does not guarantee high availability. Therefore, a good protocol should simultaneously
satisfy all the criteria of high read-capacity, small best case quorum sizes,
high availability, and low cost of one node failure.
In this paper, we present such a new protocol, which is called diamond quorum
consensus or simply the diamond protocol, for managing replicated data. In
this protocol, the sites in the network are logically organized into a two-dimensional
diamond structure. This protocol can be viewed as a specialized version of the grid
protocol [10] because the logical structure can be seen as a grid with holes. It has
been noted in [11] that grids with holes often produce a higher availability than
solid grids. We show here that there is much more to the story. The diamond
protocol also resembles the crumbling walls protocol [15]. However, [15] does not
consider read and write operations. There are two main properties of the diamond
quorum consensus that make it a good choice for replicated data management:
1. Compared with the majority quorum, tree quorum and grid quorum (without
holes) protocols, the diamond protocol results in the greatest number of disjoint
read quorums which shall lead to a better throughput and response time.
2. The protocol achieves the smallest optimum read quorum size and the second
smallest write quorum size among the above protocols. Since the quorum size
is a good indicator of messaging cost, and read operations are usually in higher
proportion than write operations, this is a very desirable property.
Other than the above, we also show that diamond quorum consensus has high
availability when the probability of site failures is reasonably low, and it has a low
cost of node failures.
Partition failure can be a problem in a replicated database system, and handling
partition failure has been a main motivation for the introduction of quorum consensus
in the earlier work. However, little recent work in quorum consensus discusses
partition failure handling, which in many cases will depend on the quorum consensus
protocol. We shall examine the use of the generalized virtual partition protocol
with the diamond protocol to handle partition failure.
This paper is organized as follows. In the next section, we present the diamond
quorum consensus. Analysis of our protocol and comparison with other protocols
are given in Section 3. The handling of partition failures is discussed in Section 4.
Section 5 is a conclusion.
2. Diamond Quorum Consensus
In our protocol, the sites in a network are logically arranged in a two-dimensional
diamond shape as in Figure 1. Often we shall refer to the physical sites of the
network as sites, and refer to the nodes in the logical structure as nodes. When this
distinction is not relevant, we shall use the terms sites and nodes interchangeably.
In
Figure
1, nodes are represented by circles. The top and bottom rows of the
diamond shape contain two nodes each. Suppose the number of rows in a diamond
structure is odd; we label the rows from the top row to the middle row by levels,
so that the top row is at level 0, the second row is at level 1, etc. Similarly we
label the rows from the bottom row to the middle row by levels, so that the bottom
row is at level 0, the second row from the bottom is at level 1, etc. The number of
nodes in the rows increases by a certain amount w_i from level i to level i + 1, for i ≥ 0. If
w_i = d for all i and each of the top and bottom rows contains 2 nodes, then we
call the resulting diamond structure a regular diamond structure and denote it
by ◊_d. In our basic model, w_i is 2 for all i, therefore the resulting structure is a ◊_2
structure.
For example, in Figure 1, there are 32 nodes, or 32 sites, in the network; the
numbers of nodes in the rows are 2, 4, 6, 8, 6, 4, 2.
The table in Figure 2 shows some relevant figures for the diamond structure, where k is
any integer greater than 0. For example, the diamond structure in Figure 1 has a
maximum level of 3, 7 rows and 32 nodes in total.
In the diamond protocol, read and write quorums are formed in the following
ways:
Figure 1. Example of a read quorum and a write quorum.
Max Level | Configuration | No. of rows | No. of nodes
3 | 2, 4, 6, 8, 6, 4, 2 | 7 | 32
k | 2, 4, ..., 2(k+1), ..., 4, 2 | 2k + 1 | 2(k + 1)²
Figure 2. Some figures for the diamond structure.
• Write Quorum
To form a write quorum, we can choose all nodes of any one row plus an arbitrary
node for each remaining row.
• Read Quorum
To form a read quorum, we can choose
1. any entire row of nodes, or
2. an arbitrary node of each row.
Figure 1 and Figure 3 show some instances of quorum formation. It is obvious that the
above method ensures that each write quorum intersects each other write quorum
and each read quorum. We shall refer to the above protocol as diamond quorum
consensus or simply the diamond protocol.
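The quorum-forming rules can be made concrete with a small sketch (ours; rows is a list of rows of site identifiers laid out as in Figure 1).

def min_read_quorum(rows):
    # cheapest read quorum: an entire shortest row (top or bottom row)
    return list(min(rows, key=len))

def column_read_quorum(rows):
    # alternative read quorum: one arbitrary node from every row
    return [row[0] for row in rows]

def min_write_quorum(rows):
    # a full shortest row plus one arbitrary node from each other row
    full_row = min(rows, key=len)
    return list(full_row) + [row[0] for row in rows if row is not full_row]

sizes = [2, 4, 6, 8, 6, 4, 2]           # the diamond of Figure 1
rows, next_id = [], 0
for s in sizes:
    rows.append(list(range(next_id, next_id + s)))
    next_id += s
print(len(min_read_quorum(rows)))       # 2
print(len(min_write_quorum(rows)))      # 8, i.e. 2k + 2 with k = 3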
For the diamond protocol, the minimum read quorum size is obtained by choosing
the whole top or bottom row of nodes. The minimal write quorum size is obtained
by choosing the whole top or bottom row of nodes plus a node for each remaining
row. Therefore, if k is the maximum level in a diamond structure, the minimal read
quorum size is 2 and the minimal write quorum size is 2k + 2.
In the more general case, the number of rows in a diamond structure can be
even or odd, the difference in the number of nodes between 2 adjacent rows in the
diamond structure can be any integer, and the number of nodes in the top and
bottom row can be other than 2. In the extreme case where w_i = 0 for all i and
level 0 contains only one node, the diamond reduces to a single column of nodes,
and we shall have the read-one-write-all protocol.
In another generalization, the diamond logical structure can be adjusted according
to the number of sites in a particular network. That is, it is not restricted only
to the numbers as shown in Figure 2. For instance, if there are 40 sites in a
network, we can use a structure with 4 levels, 9 rows and 50 nodes, with 10 of them
not being occupied by physical sites. The idea is similar to that of a hollow grid
structure [11]. If the total number of nodes in the network does not fit into the
shape above, the protocol can also accommodate the change of the shape by the
addition or deletion of a number of nodes in any row and also by the addition or
deletion of rows. Therefore, we have the following definition of a general diamond
structure.
Definition 1. A general diamond structure (also called a ◊_G structure) is a stack
S of n rows of nodes. Let us label the rows by numbers 1 to n. We say that a row
A is above (below) another row B if the label of A is less (greater) than the label
of B. For each row R in S such that no row contains more nodes than R, if we
consider the rows in the stack S above and including R, each row has a size greater
than or equal to the size of the row above it. If we consider the rows in the stack
S below and including R, each row has a size greater than or equal to the size of
the row below it.
Figure 3. More examples of read and write quorums.
2.1. Some Properties of the Diamond Protocol
In the ◊_d structure, for the top k rows, a row at level i + 1 has w_i more nodes than
a row at level i. If w_i is set to a constant d, then the sum of nodes for the k rows from
levels 0 to k − 1 is given by a finite arithmetic series:
A_s(k, d) = Σ_{i=0}^{k-1} (2 + id) = 2k + dk(k − 1)/2.
Then, let σ(k, d) be the number of nodes in a ◊_d structure with a maximum level of
k, and with w_i = d for all i; counting the two halves and the middle row gives
σ(k, d) = 2A_s(k, d) + (2 + kd) = dk² + 4k + 2.
We shall emphasize the ◊_2 structures because they give us good performance
characteristics. If k is the maximum level in ◊_2, then from the above, the number
of nodes is given by σ(k, 2) = 2(k + 1)². If N = σ(k, 2), we
have k + 1 = √(N/2), i.e., 2(k + 1) = √(2N).
The number of rows in a ◊_2 structure is given by 2k + 1 = √(2N) − 1.
The number of nodes in the longest row in a ◊_2 structure is given by 2(k + 1) = √(2N).
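These closed forms are easy to check numerically; the sketch below (ours) counts the nodes of a regular diamond level by level and compares the result against the formulas.

import math

def sigma(k, d):
    # nodes in a regular diamond with maximum level k and increment d:
    # levels 0 .. k-1 appear twice, the middle row (level k) once
    upper = [2 + i * d for i in range(k)]
    return 2 * sum(upper) + (2 + k * d)

k = 3
N = sigma(k, 2)
print(N, 2 * (k + 1) ** 2)                  # 32 32
print(2 * k + 1, math.isqrt(2 * N) - 1)     # number of rows: 7 7
print(2 * (k + 1), math.isqrt(2 * N))       # longest row: 8 8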
We shall see that the above properties of a ◊_2 structure lead to good performance
in the next section. However, a ◊_2 structure is not a general structure that can
accommodate any number of sites. Therefore, we would like to discover general
diamond structures that can preserve these desirable properties. This is given in
the following theorem.
Theorem 1 Given any integer N ≥ 5, one can build a general diamond structure
(a ◊_G structure) that contains N nodes, where the number of rows in the ◊_G structure
is ⌈√(2N)⌉ − 1, the top and bottom rows have size 2, and the maximum row size is
bounded from above by ⌈√(2N)⌉.
Figure 4. Hypotenuse of even length (even arrangement).
Proof: In Figure 4, the nodes in a diamond structure are represented by alphabets
a to s. Each node occupies one square in the figure. Since the dark grey area is
equal to the light grey area, the space occupied by all nodes is equal to the space
occupied by the square ABCD, which has sides of length S. For the right-angled
triangles ABC or ACD, the hypotenuse AC has length H.
In
Figure
5, the nodes are represented by alphabets a to x. Each node occupies
one square in the figure. Let the area of each square be 1. The square WXY Z has
sides of length S and a hypotenuse of length H = 7. Since the dark grey area is
equal to the light grey area, the space occupied by all nodes is less than the space
occupied by square WXY Z by an amount equal to the black area, which is given
by 0.5.
It is easy to see that the two figures can be extended to any values of H ≥ 3,
and the relationship between the space occupied by nodes and the space occupied
by the diamond ABCD or WXY Z will not change. The arrangement of nodes for
even H is as follows: the top and bottom rows have size 2, there is only one row of
maximum length, and the difference in size between adjacent rows is 2. Let us call this the
even arrangement. The arrangement of nodes for an odd H is similar, except
that there are two rows of maximum length, so the difference in size between the
middle two rows is 0. Let us call this the odd arrangement.
By the Pythagoras Theorem, H Given any integer N - 2
(this corresponds to the N given sites), we can find the smallest integer H such
that N - S
2N , and
l p
. If H is even (odd),
DIAMOND QUORUM CONSENSUS 9
a b
c d e f
Y
Z
Figure
5. Hypotenuse of odd length (odd arrangement)
then we have a case as shown in Figure 4 (5), and we can match the N sites into
an even (odd) arrangement. In this matching, there may be nodes that are not
matched to any site.
One can see that there exists a matching such that each row contains at least 2
sites. To see this we note that N has at least S nodes. This is because
0:5), then we should have chosen
looking for the smallest integer H as specified in the above
paragraph. Therefore, there will be at most H \Gamma 1 holes in the G structure, which
are the nodes not being matched to sites. For N - 5, one needs only to place at
most one hole at each of the rows other than the top and bottom rows, and then
place at most one more hole at the longest row.
The resulting general diamond ( G ) structure has the same number of rows as in
the full ( 2 ) structure in the even (odd) arrangement, which is
l p
and its top and bottom rows have size 2. Each row in the G structure has size less
than or equal to the corresponding row in the 2 structure. Therefore, the longest
row in the G structure contains at most
l p
nodes.
3. Performance Analysis
Different logical structures and quorum forming methods result in different performance
in different metrics. In this section, we first introduce the metric of
read-capacity. Then we examine the performance of the diamond protocol under
different metrics, including the read-capacity, quorum size and availability. We
compare the protocol with known protocols of majority quorum consensus, the
grid protocol, the tree protocol, and hierarchical quorum consensus. For the grid
protocol, we shall examine the modified grid protocol in [11], since it is an improvement
over the original gird protocol, we shall only examine the cases without holes
in the grid.
3.1. Read-Capacity
With the non-empty intersection property of read and write, write and write, quo-
rums, only one write quorum can be formed at any instance in a replicated database
system. Hence, the capacity analysis here is targeted only on the read operations.
We define the read-capacity of a replicated database as:
maximal number of concurrent read operations
maximal number of disjoint read quorums
Given a network with N nodes in which data is fully replicated, assume that any
node of the network system can only handle one read operation at any time. In the
following, we examine the read-capacity for each protocol.
ffl majority quorum consensus:
It can only handle one read operation as there is an intersection between each
pair of read quorums.
ffl hierarchical quorum consensus:
We consider the case when the branching factor is set to 3, which is recommended
in [10] as the structure that gives a minimal quorum size. In this case,
each quorum is not disjoint from any other quorum, therefore it can only handle
one read operation at a time.
ffl tree quorum consensus:
For the majority tree protocol, the nodes are logically organized into a ternary
tree(i.e. degree of height h, i.e., each node has d children, and the
maximum height is h. Each read or write quorum should have a length l and
a width w, and we denote the quorum by dimensions hl; wi. The protocol tries
to construct a quorum by selecting the root and w children of the root, and
for each selected child, w of its children, and so on, for depth l. If some node
is inaccessible at depth h 0 from the root while constructing this tree quorum,
then the node is replaced recursively by w tree quorums of height l \Gamma h 0 starting
from the children of the inaccessible node. The details can be found in [2].
Suppose the read quorums q r have dimensions hl r ; w r i and the write quorums
have dimensions q [2]). When the quorums are constructed,
the following constraints should be fulfilled so that the nonempty intersection
of read and write, write and write, quorums holds:
DIAMOND QUORUM CONSENSUS 11
our protocol
. majority
- tree quorum
Number of nodes
Number
of
read
Figure
6. Comparison of read-capacities
For the comparison, we used a ternary tree model. The maximal number of
disjoint read quorums occurs when the length of read quorums is 1, and each
level of the tree will give rise to one disjoint read quorum. Hence it can handle
at most log 3 N read operations simultaneously. However, to achieve acceptable
availability and overall performance, the quorum length should be set to a
greater value, Hence, the maximum number of read operations that the tree
quorum can handle in parallel will actually be smaller.
ffl grid protocol:
for the grid protocol, read quorums of size approximated by
N can be constructed
without intersection. So, it can handle, on average,
N read operations
simultaneously.
ffl diamond quorum consensus:
The maximal number of disjoint read quorums is obtained by taking a read
quorum from each row of the diamond structure. From the proof of Theorem 1,
there are
l p
rows in a general quorum structure that has the properties
stated in the theorem, hence the diamond protocol can handle
l p
simultaneous read operations.
The diamond protocol can handle the maximum number of simultaneous read
operation among the above protocols. That is, it can achieve the best read-capacity
among the majority, tree quorum and grid protocols. The comparison is also shown
in
Figure
6.
Recall that the number of nodes in a diamond structure with a maximum level
of k, and with w all i, is given by oe(k; d) d. Given a
value of k, the value of oe(k; d) increases with the value of d. Therefore, to obtain a
higher read-capacity for a fixed number of nodes (oe(k; d)), one should try to obtain
a greater value for k, which implies a smaller value for d 1 . However, we must also
consider other factors for quorum consensus. Therefore we have chosen a value of
2 for d because with this setting, the other properties of the diamond protocol are
also satisfactory.
3.2. Quorum Size
We have already discussed the importance of the quorum size. In this subsection
we examine the optimal quorum sizes as well as the worst case quorum sizes. Again
a number of quorum consensus are compared with the diamond quorum consensus
method.
ffl majority quorum consensus:
the read and write quorum size is approximated by
\Upsilon
ffl hierarchical quorum consensus:
As shown in [10], the minimal quorum size is given by N 0:63 in both the best
case and the worst case.
ffl tree quorum consensus:
If read quorums have dimensions hl; wi (a tree quorum of length l and width
w), then write quorums must have dimensions
the height and d is the degree of the tree.
Then, the read quorum size of tree quorum protocol varies
from w l \Gamma 1
where the smaller size occurs when all copies of the quorum are from the upper
levels (near the root) of the tree. The size of write quorums varies
from
to
\Theta
ffl grid quorum consensus:
For the grid protocol, we assume that the grid structure is approximately a
square, the read quorum size is approximated by
N , and the write quorum
size is approximated by 2
N .
ffl diamond quorum consensus:
The optimal read quorum size is 2 and is independent of the total number of
sites. The worst case is when the longest row is chosen, or when a node from
each row is chosen. From the proof of Theorem 1, the general diamond structure
that satisfies the conditions in the theorem has a biggest quorum size bounded
by
l p
, and has
l p
Therefore the worst case read quorum
size is
l p
DIAMOND QUORUM CONSENSUS 13
our protocol
. majority
- tree quorum
Number of nodes
Optimal
read
quorum
size
our protocol
. majority
- tree quorum
Number of nodes
Read
quorum
size-worst
case
Figure
7. Read quorum size
The smallest write quorum is a node from each row union either the top or
bottom row, hence the optimal write quorum size in terms of N is
l p
\Gamma1+1.
l p
. In the worst case, a node from each row union the longest row forms
the biggest write quorum, the size is bounded by
l p
l p
is equal to 2
l p
For the diamond protocol, we can choose the top or bottom row as the read
quorum, provided that they are functional, to keep the read quorum size at 2. In the
other protocols under comparison, only the tree protocol can attain a comparable
optimal read quorum size of 1, which is by choosing only the root node. However,
in such a case, the root node becomes a bottleneck and the corresponding write
quorum size will be quite big. Also, from [2], the availability will be severely affected
by this choice of read quorum. 2
Figure
7 shows the best case and worst case read quorum sizes of the tree quorum
consensus, majority quorum consensus, grid quorum consensus, and our protocol.
For the tree protocol, we set because this has been chosen for performance
studies in [2]. We have set the dimensions of the read quorums to be hl r
and those for write quorums to be hl w This is because from [2],
h2; 2i is the dimensions for hl r ; w r i that lead to the smallest read quorum size while
maintaining acceptable availabilities, for the cases studied in [2].
Figure
8 shows the best case and worst case write quorum sizes of the protocols.
In both cases, the diamond protocol can achieve a quorum size that is the second
best and very close to best. The tree quorum has the optimal write quorum size,
but its worst case write quorum size is very large. The grid quorum protocol has the
smallest worst case quorum size but its optimal read quorum size and optimal write
quorum size are the second highest. The comparison leads us to the conclusion that
14 FU, WONG AND WONG
our protocol
. majority
- tree quorum
Number of nodes
Optimal
quorum
size
our protocol
. majority
- tree quorum
Number of nodes
Write
quorum
size-worst
case
Figure
8. Write quorum size
in overall considerations of the quorum sizes, the diamond protocol is superior to
the other protocols.
3.3. Availabilities
As in most previous work, the availability of an algorithm is defined as the probability
of forming a quorum successfully in that algorithm. In the following we
denote the probability of X by P rob(X). Let p be the probability that a site is up
and be the probability that a site is down. Let the number of sites be N .
ffl majority quorum consensus:
the availability of read and write operations are defined as:
rob(majority copies are available)
are available)
rob(all copies are available)
ffl tree quorum consensus:
For the tree quorum protocol, the availability can be calculated by a recurrence
relations for both read and write availabilities. Let A h [l; w; d] be the availability
of operations with a tree quorum of dimensions hl; wi in a tree of height h and
degree d (see [2] for the definition of dimensions). From [2], the availability in
a tree of height h formulated as
A h+1 [l; w;
[Availability of w subtrees with A h [l \Gamma
DIAMOND QUORUM CONSENSUS 15
+P rob(Root is down) \Theta
[Availability of w subtrees with A h [l; w; d]]:
The expansion of this formula can be found in [2].
ffl diamond quorum consensus:
The calculation of the read and write availabilities for the diamond protocol is
very similar to that for the grid protocol. The diamond structure can be treated
as a hollow grid, some of the positions in the grid structure are not occupied
by nodes. For instance, the structure shown in Figure 1 can be regarded as an
8 \Theta 7 grid structure model, with all the corner nodes being chopped off. In this
particular case, 24 nodes, 6 in each corner, are being eliminated in the structure.
The read and write availabilities of our model can be calculated in a similar
way as for the modified grid protocol [11]. That is, a row is alive if at least one
site in it is up, and is dead otherwise. A row is good if all sites in it are up, and
is bad otherwise. Suppose there are n i columns that contain m i nodes each, for
Read availability are bad)
rob(all rows are bad and alive)
Write availability are alive)
\Gammaprob(all rows are bad and alive)
where there are n k rows that contain m k sites in the diamond structure.
ffl grid protocol
The availabilities are also given by RA and WA, although we shall examine
solid grids so that each row will have the same number of nodes.
For the above protocols, we examine 3 cases, each one with a different number of
sites, for comparison of the availabilities. They are 13, 40 and 121 sites. Figures 9
to 11 show the read availabilities and the write availabilities. For Figure 9, there
are 13 sites in the network. and a 3 \Theta 4 grid structure is chosen for the modified
grid protocol. The general diamond structure is chosen so that the row sizes are
2g. For
Figure
10, there are 40 sites for Figure 10, and a 5 \Theta 8 grid
structure is chosen. The general diamond structure is chosen so that the row sizes
are f2, 4, 6, 8, 8, 6, 4, 2g.
. tree
-x- majority
-o- our protocol
probability that a site is up
read
availability
nodes)
-x- majority
our protocol
probability that a site is up
availability
nodes)
Figure
9. Availability (13 sites)
. tree
-x- majority
-o- our protocol
probability that a site is up
read
availability
nodes
-x- majority
our protocol
probability that a site is up
availability
nodes
Figure
10. Availability (40 sites)
For
Figure
11, there are 121 sites and a 10 \Theta 12 grid structure is chosen. The
general diamond structure is chosen so that the row sizes are f2, 4, 6, 8, 9, 10, 12,
14, 14, 12, 10, 8, 6, 4, 2g. Note that in all the above diamond structures, we have
the properties that if N is the number of sites, then the number of rows is given
by
l p
1, the top and bottom rows have size 2, and the maximum row size is
bounded from the above by
l p
, as specified in Theorem 1.
From the figures, we see that when the site reliability is reasonably high, the
diamond protocol performs well both in the read and write availabilities. If one
would like to achieve higher write availability for the diamond quorum consensus,
it is possible to do so. For example, for the general diamond structure
DIAMOND QUORUM CONSENSUS 17
-x- majority
our protocol
probability that a site is up
read
availability
nodes
-x- majority
our protocol
probability that a site is up
availability
nodes
Figure
11. Availability (121 sites)
. tree
-x- majority
-o- our protocol
probability that a site is up
read
availability
nodes
-x- majority
our protocol
probability that a site is up
availability
nodes
Figure
12. Improved availability (40 sites)
can be chosen so that the row sizes are f 3, 3, 6, 8, 8, 6, 3, 3g. Figure 12 shows
the resulting availabilities. For 121, the general diamond structure can be
chosen so that the row sizes are f 3, 3, 6, 8, 9, 10, 12, 14, 14, 12, 10, 8, 6, 3, 3g.
Figure
13 shows the results. Compared with Figures 10 and 11, we can see a big
improvement in the write availability when the site availability is high. In both
cases, the read-capacity remains the same after the diamond structure is modified,
the optimal read quorum size is increased only by one, which remains to be smaller
than that of the other protocols.
-x- majority
our protocol
probability that a site is up
read
availability
nodes
-x- majority
our protocol
probability that a site is up
availability
nodes
Figure
13. Improved Availability (121 sites)
4. Generalized Virtual Partition Protocol
To handle partition failures, we use the generalized virtual partition protocol, GVP,
which is presented in [6]. This is a generalization of the virtual partition protocol
(VP) in [5]. We shall give a brief description of VP and then point out the differences
of GVP from VP.
We present VP in a slightly generalized form. In VP each transaction executes
in a view. A view can be considered to correspond to the reachable part of the
network as seen from a site after partition failure. Transactions executing in a view
are controlled by a concurrency control protocol within the view as follows.
For each data object X, there are two positive integers, A r [X] and Aw [X], called
read and write accessibility thresholds, respectively, satisfying
A r [X]
where n[X] denotes the total number of copies of X. Thus, a set of copies of X
of size Aw [X] has at least one copy in common with any set of copies of X of size
A r [X]. Each site maintains a set of sites called its view. Views are totally ordered
according to their unique view-id's, which are non-negative integers.
Each copy of a data object has a version number = hV id; ki, indicating that
it was last written in view V with view-id V id and that its value is the result of
the kth update in that view, where indicates the initial value written by the
"view-update transaction" (see below). A "less than" (or "larger than") relation
is defined among version numbers by their lexicographical ordering (I.e. a version
number
In view V , each logical data object X is assigned, if possible, read and write
quorum sizes q r [X; V ] and q w [X; V ], which specify, respectively, how many copies
of X must be accessed to, respectively, read and write X in view V . (An access
DIAMOND QUORUM CONSENSUS 19
operation on a copy may return only its version number, not its value.) In our
terminology, a view read(write) quorum for data object X in view V , is a set
of copies of X that can be accessed to perform logical read(write) on X in view V .
denotes the set of all view read (write) quorums for X in
. Let n[X; V ] be the number of copies of X that reside at sites in view V . The
quorum sizes must satisfy the following conditions. For all X and V ,
These ensure that each view write quorum for X in V , if any, has at least one copy
in common with each view read quorum for X in V (by Equation (2)) and with
any view which has at least A r [X] copies of X (by Equations (1) and (4)). If there
are at least A r [X] copies of X in view V , then we say that X is inheritable in V .
then there is no choice for q w Equation
(4) above; in this case both rq(X; V ) and wq(X; V ) will be empty.
Consider a transaction T executing at a site s having view V with view-id V id.
(In this case we say that T executes in V .) It can read or write copies at another
site s 0 only if s 0 also has view V with the same view-id. (Ways to relax this
restriction are discussed in [5].) If rq(X; V ) 6= OE then the logical read operation
R T [X] by transaction T executing in V with view-id V id is implemented as follows:
1. Access all copies in a view read quorum in rq(X; V ) at sites having view V with
2. Determine ki, the maximum version number among the
accessed copies,
3. If V id 6= V idmax, then abort T , else read a copy in rq(X; V ) with version
number vnmax.
Note that in [5], X cannot be read in V unless X is also inheritable in V . We
relax this requirement by allowing a read operation on X in V once X has been
"initialized" in V . This will make it possible for two concurrent partitions (under
different views) to perform both read and write operations on the same logical data,
provided that some transaction has performed a write without read on the data.
then the logical write operation
executing in V is implemented as follows:
1. Access all copies in a view write quorum in wq(X; V ) at sites having view V
with view-id V id,
2. Determine ki, the maximum version number among the
accessed copies, and
3. Update the copies in a view write quorum in wq(X; V ) and change their version
numbers to hV id; k+1i, if V and to hV id; 1i if
A site may change its view from time to time. For example, a site may want to
change its view when it notices a difference between its current view and the sites
it can actually communicate with. Whenever a site s changes its view to a new
view, s must execute a view-update transaction that updates data object copies
stored at site s. Site s may decide on the members of a new view V based on its
own information, in which case s is called the initiator of V . It may also decide
to use a view V initiated by another site, in which case, s adopts view V .
Sites change their views automatically as follows. (For details, see [5]). An
initiator s of a new view V first assigns to V a unique view-id, new view id, that
is larger than any other view-id that s has encountered. (Uniqueness of the view-id
can be achieved by using the initiator's unique site ID (identification number) to
be the least significant digits of the view-id.) Site s then executes a view-update
transaction to update the local copy of each data object inheritable in V . For
each such data object X, the view-update transaction reads the copy of X at
a site in V that has a copy of X with the largest
version number among a set of A r [X] copies. The version number of X is set to
be hnew view id; 0i. If the view-id of S 0 (X) is not larger than new view id, then
the value of X is copied from S 0 (X); otherwise, the view-update transaction is
aborted. When this is repeated for all inheritable data objects X, the new view is
installed at s. If a site s 0 accessed by the view-update transaction has a view-id
less than new view id, then s 0 immediately adopts new view id. If a site accessed
by the view-update transaction has a view-id greater than new view id, then the
view-update transaction is aborted, in which case site s immediately adopts the
greater view-id and initiates a view-update transaction with that view-id.
Some comments are in order for the case where X is not inheritable in V , since
unlike [5], we may still allow reading of X in V . After V has been installed at some
sites, but before any user transaction is executed in V , the copies of X at these
sites, if any, have version numbers hV 0 id; ki such that V 0 id ! V id. At this time,
no user transaction should be allowed to read X. However, X can be written if
and the first write on X in V will initialize X in V , by changing the
version number of the updated copies to hV id; li. Thereafter, X can be read in V .
We now describe the Generalized Partition Protocol, GVP. The major difference
of GVP from VP is that the definition of a quorum is in terms of a set of copies
rather than by the size of the quorum. This gives us greater flexibility in defining the
quorums. We shall not give the full description of the protocol here, instead, we shall
list the main features that are dependent on the quorum consensus method, and
explain how to incorporate the protocol in the case of diamond quorum consensus.
As in VP, in GVP each transaction executes in a view which is a subset of the set
of all replication sites. Each view has a unique view-id, V id. Each copy of a data
object has a version number = hV id; ki, indicating that it was last written in
view V with view-id V id, and that its value is the result of the th update in
that view. Transactions executing in a view are controlled by a concurrency control
protocol within the view. The following are some major features.
DIAMOND QUORUM CONSENSUS 21
Figure
14. An example of global read, view read, and view write quorums
1. Global read quorums : For each data object X, a global read quorum set
RQ(X) is defined. X is inheritable in view V if V contains a global read
quorum belonging to RQ(X).
2. View quorums : For each data object X, and each view V , a view read
quorum set rq(X; V ) and a view write quorum set wq(X; V ) are defined.
Each view write quorum in wq(X; V ), if any, intersects each quorum in RQ(X)
and each view read quorum in rq(X; V ), if any. If
OE), then X is said to be writable (readable) in V .
3. Reading and writing : A logical write of X by a user transaction T executing
in view V with view-id V id is required to update or initialize all copies in a view
write quorum in wq(X; V ) (where each copy has a version number that contains
a view-id V id), giving them the same new version number hV id; ki larger than
their previous version number. A logical read of X by user transaction T is
allowed in view V only if at least one copy of X accessed by the read has been
initialized in V . A logical read of X by T reads a view read quorum from sites
that have the same view as T and takes the value of the copy with the highest
version number.
For our protocol, GVP is adopted as follows: if a view V is fs
can be chosen as follows:
Choose a set fR 1 which is a subset of the set of rows in the diamond
structure. We call R i a row in RQ(X). RQ(X) is the set of all sets of the following
22 FU, WONG AND WONG
or
or
and if V contains a quorum Q in RQ(X), then
1. is the set of any whole row of sites in V , and
2. is the set of
(i) all nodes of any one row, in V ,plus
(ii) an arbitrary node for each complete row, in V , and each row in Q.
ffl As rq(X; V ) is in V , must intersect any view write quorum in view V ,
and
ffl as wq(X; V ) contains a whole row in V , it must intersect any view write quorum
in V . In addition, as wq(X; V ) contains one node in each row in Q, wq(X; V )
must intersect any quorum in RQ(X).
ffl All quorums in RQ(X) intersect each other.
An example of a possible set of global read quorum, view read quorum and view
write quorum is shown in Figure 14. The black nodes are the unreachable sites for
the non-black nodes, so they may be considered as a partition. The set of non-black
nodes will form a view for the functional sites. The example shows the intersection
properties among the quorums.
For a diamond structure with a maximum level of k, we may choose the second,
k-th and (2k \Gamma 2)-th row to be the rows in RQ(X) because the second and (2k \Gamma 2)th
can minimize the cost of rq(X; V ), the k-th row is the row that contains the largest
number of node within one row. Choosing this row can provide a high tolerance
ability of partition failure.
5. Conclusion
We summarize our major findings in the table in Figure 15. In the table, N is the
number of sites. The quorum size (best case) refers to the smallest read or write
quorum size. The quorum size (worst case) refers to the greatest read or write
quorum size. Note that although the minimal quorum size of the tree protocol is 1,
the resulting protocol will have problems in read-capacity, write quorum size and
also availability.
The cost of one node failure refers to the effect of a single node failure on the
consensus of the smallest quorum. Suppose we have decided on a smallest quorum,
but one of the nodes in the quorum has failed, then we need to form another
quorum, and this cost refers to the number of additional number of nodes one
needs to access. Therefore, for the diamond protocol, the cost of one node failure
is the number of nodes in the second smallest quorum, which is 2.
DIAMOND QUORUM CONSENSUS 23
Protocol read- quorum size quorum size cost of one node
capacity (best case) (worst case) failure (worst case)
Alg. 1
Tree log 3 N 1 N 1
Grid -
Diamond
l p
l p
Figure
15. Properties of some quorum consensus protocols
From the table and our previous discussions, we have shown that the diamond
quorum consensus is a good choice for high throughput and efficiency in replicated
data management.
There are two main contributions in this paper. First we define a new metric
of read-capacity which we show to capture the characteristics of workload-capacity,
which measures how well the system can maintain high throughput and low response
time under heavy workload. Secondly we propose a new protocol, the diamond
protocol that can achieve high read-capacity, low quorum size, and other desirable
features for replicated data management. We also show that the new protocol can
easily merge with the generalized virtual partition failure protocol.
One open problem that should be investigated is the quorum selection strategy.
We suggest to use a random selection since it requires no overhead. We believe
that it should be able to distribute the workload quite well during a busy period.
By such a selection method, and with the proposed protocol, the chance of a node
having no workload during a busy period would be small. Hence the throughput
should be high and the average response time can be low. Another possible strategy
for busy periods is for each site to pick the quorums in a round robin fashion. For
example, if there are k quorums then if the previous quorum chosen at
site s is Q i , the next quorum to be used will be Q (i mod k)+1 . In this way, we ensure
that all quorums have similar chances of being chosen and hence the workload will
also be distributed quite evenly. We can try to confirm these expectations by more
detailed analysis and experiments.
Acknowledgements
The authors would like to thank the anonymous referees for their very thorough
review and very helpful comments which enhance the paper significantly. This
research was supported by the RGC (the Hong Kong Research Grants Council)
grant UGC REF.CUHK 495/95E.
Notes
1. Note that when the top row size is 1 and d is 0, we have the read-one-write-all protocol, which
has the maximal read-capacity of N . However, this protocol has problems such as poor write
availability.
2. This is a main reason why we have not set the upper and lower rows of the diamond to have
size one.
--R
An efficient and fault-tolerant solution for distributed mutual exclusion
The generalized tree quorum protocol: An efficient approach for managing replicated data.
Performance characterization of the tree quorum algorithm.
The grid protocol: a high performance scheme for maintaining replicated data.
Maintaining availability in partitioned replicated databases.
Enhancing Concurrency and Availability for Database Systems.
Delay optimal quorum consensus for distributed systems.
Hypercube quorum consensus for mutual exclusion and replicated data management.
Hierarchical quorum consensus: A new algorithm for managing replicated data.
A performance study of general grid structures for replicated data.
A geometric approach for consructing coteries and k-coteries
A p N algorithm for mutual exclusion in decentralized systems.
The load
Crumbling walls: A class of practical and efficient quorum systems.
A fault-tolerant algorithm for replicated data management
An analysis of the average message overhead in replica control protocols.
A majority consensus approach to concurrency control for multiple copy databases.
Quorum systems for distributed control protocols.
Message complexity of the tree quorum algorithm.
--TR
Maintaining availability in partitioned replicated databases
An efficient and fault-tolerant solution for distributed mutual exclusion
Hierarchical Quorum Consensus
Enhancing concurrency and availability for database systems
The generalized tree quorum protocol
Performance Characterization of the Tree Quorum Algorithm
A <inline-equation> <f> <rad><rcd>N</rcd></rad></f> </inline-equation> algorithm for mutual exclusion in decentralized systems
A Fault-Tolerant Algorithm for Replicated Data Management
Crumbling walls
An Analysis of the Average Message Overhead in Replica Control Protocols
A Geometric Approach for Constructing Coteries and k-Coteries
Delay-Optimal Quorum Consensus for Distributed Systems
A Majority consensus approach to concurrency control for multiple copy databases
Message Complexity of the Tree Quorum Algorithm
The Grid Protocol
Quorum-oriented Multicast Protocols for Data Replication | fault-tolerant computing;quorum consensus;mutual exclusion;distributed systems;replicated database systems |
361028 | Optimizing static calendar queues. | The calendar queue is an important implementation of a priority queue that is particularly useful in discrete event simulators. We investigate the performance of the static calendar queue that maintains N active events. The main contribution of this article is to prove that, under reasonable assumptions and with the proper parameter settings, the calendar queue data structure will have constant (independent of N) expected time per event processed. A simple formula is derived to approximate the expected time per event. The formula can be used to set the parameters of the calendar queue to achieve optimal or near optimal performance. In addition, a technique is given to calibrate a specific calendar queue implementation so that the formula can be applied in a practical setting. | INTRODUCTION
The calendar queue data structure, as described by Brown [Brown 1988], is an
important implementation of a priority queue that is useful as the event queue in
a discrete event simulator. At any time in a discrete event simulator there are
active events, where each event e has an associated event time t(e) when it is
intended to occur in simulated time. The set of events is stored in the priority
queue ordered by their associated event times. A basic simulation step consists
of nding an event e 0 which has the smallest t(e 0 ), removing the event from the
priority queue, and processing it. As a result of the processing new events may
be generated. The parameter N can vary if zero or more than one new events are
generated. Each new event e has an event time t(e) > t(e 0 ) and must be inserted
in the priority queue accordingly.
In the calendar queue the events are stored in buckets with each bucket containing
events whose times are close to each other. All the events with the smallest times
are in the same bucket so they can be accessed quickly and simulated. Any newly
generated event can be quickly put into its bucket. When the events in one bucket
are consumed, the next bucket is considered. The details of the algorithm are given
later. The calendar queue has several user controllable parameters, the bucket
width and number of buckets, that aect its performance. Brown [Brown 1988]
provided empirical evidence that the calendar queue, with its parameters properly
set, achieves expected constant time per event processed. The goal of this paper is
to prove the constant time per event of the calendar queue behavior in a reasonable
model where, for each new event e, the quantity t(e) t(e 0 ) is a nonnegative random
variable sampled from some distribution.
Generally, the number of active events may vary over time. An important case
is the static case which arises when N is a constant, such as the case of simulating
a parallel computer. In this case, each event corresponds to either the execution
of a segment of code or an idle period by one of the processors. Thus, if there are
processors, then there are exactly N active events in the priority queue. In this
paper we focus on the static calendar queue.
Even before Brown's paper [Brown 1988] the calendar queue was used in discrete
event simulators when the number of events is large. In many of these situations the
calendar queue signicantly outperforms traditional priority queue data structures
[Brown 1978; Francon et al. 1978; Knuth 1973; Sleator and Tarjan 1985; Sleator and
Tarjan 1986; Vuillemin 1978]. An interesting new development is the employment
of a calendar queue like data structure as part of the queuing mechanism of high-speed
network switches and routers [Rexford et al. 1997]. In this case the calendar
queue like data structure is implemented in hardware.
1.1 Organization
In Section 2 we dene the calendar queue data structure and the parameters that
govern its performance. In Section 3 we present the Markov chain model of calendar
queue performance. In Section 4 we present an expression that describes
the performance of the calendar queue in the innite bucket case. The bucket
width can be chosen to approximately minimize the expected time per event at a
constant. In Section 5 we describe how to choose the number of buckets without
Optimizing Static Calendar Queues 3
signicantly compromising the performance over the innite bucket calendar queue.
In Section 6 we develop a technique for calibrating a calendar queue implementation
and demonstrate the eectiveness of the technique. In Section 7 we present
our conclusions. The Appendix contains the longer technical proofs.
2. THE CALENDAR QUEUE
A calendar queue has M buckets numbered 0 to M 1, a current bucket i 0 , a bucket
width -, and a current time t 0 . We have the relationship that i
For each event e in the calendar queue, t(e) t 0 , and event e is located in bucket
if and only if i t(e)=- mod M < (i 1). The analogy with a calendar can be
stated by: there are M days in a year each of duration -, and today is i 0 which
started at absolute time t 0 . Each event is found on the calendar on the day it is to
occur regardless of the year.
As an example choose 30. The 8 events
have times 31, 54, 85, 98, 111, 128, 138, 251.
{ 111 128 31 { 54 { { 85 98
In this example the next event to be processed has time 31 which is in the current
bucket numbered 3. Suppose it is deleted and the new event generated has time
87. Then, the new event is placed in bucket 8 next to the event with time 85. Since
will not be processed until the current bucket has cycled around all the
buckets once. Thus, t 0 is increased by -, and the next bucket to be examined is
bucket 4 which happens to be empty. Thus, the processing of the buckets is done in
cyclic order and only the events e which are in the current cycle, t 0 t(e) <
are processed.
A calendar queue is implemented as an array of lists. The current bucket is
an index into the array, and the bucket width and current time are either integers,
xed-point or
oating-point numbers. Each bucket can be implemented in a number
of ways most typically as an unordered linked list or as an ordered linked list. In
the former case insertion into a bucket takes constant time and deletion of the
minimum from a bucket takes time proportional to the number of events in the
bucket. In the latter case insertion may take time proportional to the number of
events in the bucket, but deletion of the minimum takes constant time. The choice
of algorithm for managing the individual buckets is called the bucket discipline. In
this paper we focus on the unordered list bucket discipline.
2.1 Calendar Queue Performance
For the calendar queue, the performance measure we are most interested in is the
expected time per event, that is, the time to delete the event with minimum time and
insert the generated new event. There are two key (user controllable) parameters
in the implementation of a calendar queue that eect its performance, namely, the
bucket width - and the number of buckets M . The choice of the best - and M
depends on the number of events N and the process by which t(e) is chosen for a
newly generated event e. Assuming M is very large (innite), if - is chosen too
4 K.B. Erickson, R.E. Ladner, A. LaMarca
N number of events known parameter
mean of the jump known or estimated parameter
time per empty bucket hidden parameter determined by calibration
time per list entry hidden parameter determined by calibration
d xed time per event hidden parameter determined by calibration
- bucket width user controlled parameter
M number of buckets user controlled parameter
Table
1. Parameters of the calendar queue.
large, then the current bucket will tend to have many events which is ine-cient. On
the other hand if - is chosen too small, then there will be many empty buckets to
traverse before reaching a non-empty bucket, which again is ine-cient. Regardless
of the choice of -, if M is chosen too small, then the current bucket will again tend
to have too many events in it which are not to be processed until later visits to the
same bucket.
In order to analyze the calendar queue we make some simplifying assumptions
on the process by which t(e) is chosen for a new event e. The main assumption we
make is that the quantity t(e) t(e 0 ), called the jump, is a random variable sampled
from some distribution that has a mean , where e 0 is the event with minimum
time t(e 0 ). We will fully delineate the simplifying assumptions later. The choice of
a good - certainly depends on both and N . As grows so should -. As N grows
should decrease. Determining exactly how - should change as a function of and
N to achieve optimal performance is a goal of this paper.
Assume that we have innitely many buckets. In addition to the two parameters
and N the choice of a good - also depends on three hidden implementation
parameters b, c, and d where b is the incremental time to process an empty bucket,
c is the incremental time to traverse a member of a list in search of the minimum in
the list, and d is the xed time to process an event. If m empty buckets are visited
before reaching a bucket with n events (n 1), then the time to process an event
is dened to be:
KN (-) to be the stationary expected value of bm d. Then KN (-) is
the expected time per event in the innite bucket calendar queue.
In a real implementation of a calendar queue the number of buckets M is nite.
In this case, it may happen that some events in the bucket have times that are not
within - of the current time and are not processed until much later. Dene K M
to be the expected time per event in the M bucket calendar queue. Generally,
N (-) KN (-) because extra time may be spent traversing events in buckets
that are not processed until later. Another goal of this paper is to determine how
to choose M so that K M
N (-) is the same or only slightly larger than KN (-).
Table
1 summarizes the various parameters that aect the performance of the
calendar queue.
Optimizing Static Calendar Queues 551525
time
per
event
bucket width
Fig. 1. Graph of bucket width, -, vs. the expected time per event, KN (-), in the simulated
innite bucket calendar queue with 100 events, exponential jump with mean 1, and
time
per
event
number of buckets
Fig. 2. Graph of number of buckets, M , vs. the expected time per event, K M
N (-), for - chosen
optimally in the M bucket simulated calendar queue with 1,000 events, exponential jump with
mean 1, and
Figure
1 illustrates the existence of an optimal - for minimizing the expected time
per event. Figure 2 illustrates the eect of selection of M on the expected time
per event. The graphs in both Figures were generated by simulating the calendar
queue with an exponential jump with mean 1 and
were taken after a suitably long warm-up period and over a long enough period so
that average time per event was very stable. The simulation of Figure 1 uses an
innite number of buckets with 100 events. The simulation of Figure 2 uses the
optimal bucket width for for the innite bucket calendar queue, then
varying the number of buckets. In choosing or 3; 000 the performance
curve is almost
at approaching the performance with innitely many buckets.
6 K.B. Erickson, R.E. Ladner, A. LaMarca
3. MODELING THE CALENDAR QUEUE PERFORMANCE
To model the calendar queue performance we begin by specifying the properties of
the random variable, is the event with current minimal
time, and e is the newly generated event. We assume that is a random variable
with density f dened on [0; 1), the nonnegative reals. We call f the jump density
and its random variable simply the jump. Successive jumps are assumed to be
mutually independent and identically distributed. Let be the mean of the jump,
that is:
Z 1f1 F (z)g dz: (1)
where F is the distribution function of the : F
R xf(z) dz. We call F the
jump distribution.
We dene the support of the jump distribution to be
The value 1 is not excluded.
Technical Assumptions about f and F .
In order to facilitate the proofs we make several technical assumptions about f and
F that will be in force throughout, except as noted.
J1. The density f(x) > 0 for all x in the interval (0; ).
J2. The mean is nite.
J3. There is an 0 > 0 and c 0 such that F (x) c 0 x for all x 0 .
Assumption J2 is crucial; it guarantees the existence of a non-trivial \steady state".
Note that J3 holds if the density f is bounded in a neighborhood of 0.
3.1 The Markov Chain
We model the innite bucket calendar queue as a Markov chain b
X with state space
in [0; 1) N . For denote the state of
the chain at time t. The state (X 1 represents the positions,
relative to the beginning of the current bucket, of the N events (indexed 1 to N)
in the calendar queue at step t. A step of the calendar queue consists of examining
the current bucket, and either moving to the next bucket, if the current bucket
is empty, or removing the event with smallest time from the current bucket and
inserting a new event (with the same index) according to the jump distribution.
Accordingly, the transitions of b
are as follows: Let m be the index such that
for all i. If X m (t) < -, then for i
are independent non-negative random variables. It is assumed that
these random variables t ; t 0, all have the same probability density f . The
parameter - is a xed non-negative real number.
We can think of X i (t) as the position of the i-th particle in an N particle system.
If no particle is in the interval [0; -), then all particles move - closer to the origin.
Otherwise, the particle closest to the origin, in the interval [0; -), jumps a random
Optimizing Static Calendar Queues 7
distance from its current position and the other particles remain stationary. Thus,
a particle in the Markov chain b
X represents an event in the innite bucket calendar
queue where the position of the particle corresponding to an event e is the quantity
. The interval [0; -) corresponds to the currently active bucket in the innite
bucket calendar queue.
It is important to note that a step of the Markov chain b
X does not correspond to
the processing of an event in the calendar queue. The processing of an event in the
calendar queue corresponds to a number of steps of the Markov chain where the
interval [0; -) is empty followed by one step where the interval [0; -) is nonempty.
to be the limiting probability, as t goes to innity, that the interval [0; -)
has exactly i particles in it. Technically, q is a function of N and -, but
we drop the N and - to simplify the notation. The quantity q 0 is the probability
that the interval [0; -) is empty. It is not obvious that q i exists for 0 i N , so
we prove the following Lemma in Appendix A.
Lemma 3.1. If the jump density has properties J1, J2, and J3, then the limiting
probabilities exist and are independent of the initial state of b
X.
Let us also dene EN (-) to be the limiting expected number of particles in the
interval [0; -), that is,
4. EXPECTED TIME PER EVENT IN INFINITE BUCKET CASE
The expected time to process an event in the innite bucket calendar queue is
closely related to the function EN (-) as we see from the following Lemma.
Lemma 4.1. The expected time per event in the innite bucket calendar queue
is
Proof. The Markov chain b
X models the calendar queue. Thus q 0 is the portion
of buckets visited which are empty and for j > 0, q j is the portion of buckets visited
which have j events. Each empty bucket visited, which happens with probability
cost b, but does not result in nding an event to process. Each bucket
visited with j > 0 events, which happens with probability q j , has cost cj + d, and
results in nding an event to process. Thus, the expected cost per event in the
calendar queue is
d)
which yields equation (3) using equation (2).
8 K.B. Erickson, R.E. Ladner, A. LaMarca
Let us dene the following important quantity
Z -[1 F (x)] dx: (4)
The second part of equation (1) implies that 0 p 1. Note also that -[1
F (-)]= p -=.
In order to derive a good approximating formula for KN (-) we rst need to nd
good bounds for the quantities q i for 0 i N . The following technical Lemma,
proved in the Appendix, Section B, provides those bounds.
Lemma 4.2. For N 2 and all - > 0 we have
and for
where B(j) is the tail of the binomial distribution for N trials with \success" parameter
p:
The simple exact formula for q 0 is interesting. It is possible to write down some
very complicated integrals which give exact expressions for the other q j , but these
are highly unwieldy and their proofs are not informative (cf. [Erickson
1999]).
It is also interesting to note that our assumption J1 requiring the probability
density f to be positive on its support can be removed, but the proofs of the
theorems become even longer. Without J1, if the density f has the property that
there is a constant c > 0 such that we have
exact expressions for q j for all j 1.
Lemma 4.2 yields the following upper and lower bounds on KN (-).
Lemma 4.3. For N 2 and all - > 0
KN (-) d b
Proof. From (5) and (6) we have
+N-
to be the standard binomial distribution with N trials
and success parameter p. Since and b(i) has mean Np and
Optimizing Static Calendar Queues 9
second moment (Np) 2 +Np(1 p) we sum by parts to derive
which, upon substituting into equation (3) and doing a little rearranging, yields the
left side of (8). Similarly, one derives the right side of (8).
The range of - that gives good calendar queue performance is when
In this case the bounds of (8) give us a wonderfully simple, and accurate, approximating
formula for KN (-).
Theorem 4.1. If then the expected time per event in the innite
bucket calendar queue with bucket width - is
In fact, there are numbers 1 ; 2 such that for any xed
> 0 the O(N 1 ) term is
bounded by ( 2
)=N uniformly for 0 < -
=N .
The proof of this Theorem is almost an immediate consequence of (3), (5), and
but it is also postponed to the Appendix, Section C. Interestingly, the expected
time depends on the mean of the jump and not on the shape of its probability
density.
Note that one immediate consequence of Theorem 4.1 is that if the bucket width
is chosen to be =N for in a xed interval, then the innite bucket calendar
queue has constant expected time per event performance. Indeed, a formula for the
optimal performance of the calendar queue can be derived as seen in the following
Theorem.
Theorem 4.2. The expected time per event, KN (-), achieves a global minimum
in the interval (0; 1) at - opt where
r
c
KN (- opt
2bc +O N 1
The proof of this Theorem is in the Appendix, Section D. Theorem 4.2 shows that
the optimal choice of - only depends on the ratio of b to c, the mean of the jump,
and N .
5. CHOOSING THE NUMBER OF BUCKETS
Now that we have found how to select - so as to approximately minimize the
expected time per event in the innite bucket calendar queue our next goal is to
select M , the number of buckets, so that the M bucket calendar queue has the
same or similar performance as the innite bucket calendar queue.
For the case in which the jump distribution has nite support ( < 1), there is a
natural choice for M which guarantees that the calendar queue with M buckets has
A. LaMarca
exactly the same performance as the innite bucket calendar queue. If M =-+1,
then it is guaranteed that in the long run all the events e in the current bucket will
have In this case, eventually each event in the current bucket will
be processed during the current visit to the bucket and not postponed until future
visits to the bucket. For the case in which the support of the jump distribution
is either innite or is nite but =- is too large to be practical, then it will be
necessary to choose a number M which gives performance less than that of the
innite bucket calendar queue.
5.1 Expected Time per Event in the nite Bucket Case
The same Markov chain b
X can be used to analyze this case. Let L M
N (-) be the
(steady state) expected number of particles in the set
In terms of the M bucket calendar queue, if an event e has t(e) t 0 2 , then
the event is in the current bucket but is not processed. The occurrence of such an
event will cause the M bucket calendar queue to run less e-ciently than the innite
bucket calendar queue. The following Lemma quanties the dierence between the
performance of the nite and innite bucket calendar queues.
Lemma 5.1. The expected time per event in the M bucket calendar queue with
bucket width - is
Proof. In the Markov chain b
X, let q ij be the limiting probability that there are
particles in the interval [0; -) and j particles in . The probabilities q ij can be
shown to exist in the same way as we did for the probabilities q i in Lemma 3.1 by
using Corollary A.1 in the Appendix, Section A. In the M bucket calendar queue
the cost of visiting a bucket with i events whose times are in the interval [t
and j events whose times are in the set
Thus, the expected cost per event K M
But
(equation (5) of Lemma 4.2) and
(-). By using equation (3) in the proof of Lemma 4.1 we derive the equation
for K M
(-).
In the
Appendix
, Section E, we indicate how to derive the following rather horrible
looking bounds for L M
(-).
Lemma 5.2. The function L M
N (-) is bounded above by
Optimizing Static Calendar Queues 11
and bounded below by
+N-
where p and F have the same meaning as before (see equation (4)) and
Z jM-
[1 F (x)] dx;
[F (jM- y) F (jM- y)]:
It should be noted that under the hypothesis < 1 (J2), the above series converge
and can be given bounds in terms of ; -, and M . However, using the bounds as
stated in the Lemma, we can derive a more useful asymptotic expression for L M
(-).
The Lemma is proven in the Appendix, Section F.
Lemma 5.3. If are constants, then
[1 F (jxr)]: 1
5.2 Degradation in Performance due to Finitely Many Buckets
M to be the degradation in performance in choosing M buckets instead of
innitely many buckets, that is,
If we choose - optimally, then Lemma 5.3 and Theorem 4.2 yield the following
asymptotic expression for M .
Theorem 5.1. If M=N is constant and
c
[1 F (jM-)] (12)
The following asymptotic bound is implied by Theorem 5.1.
Theorem 5.2. If M=N is constant and
c
r c
12 K.B. Erickson, R.E. Ladner, A. LaMarca
Proof. By Theorem 5.1 it su-ces to show that
To see this let k 2, then
Z 1xf(x)dx
Z jD+D
x f(x)dx
The niteness of implies that x[1 F (x)] ! 0 as x ! 1. Therefore, if we let
k !1, we get =D
Equation (13) shows that for a xed > 0 (like .01) M can be chosen to be O(N)
so that M . In other words, one can always choose the number of buckets M
to be a multiple of N and still obtain a performance almost as good as that of the
innite bucket case.
For the interesting case of the exponential jump density
we can calculate the series in the equation (12) exactly:
e
c
Let us suppose optimally equal to
2=N allows us
to solve equation (14) for M=N when given an acceptable M . For example, if we
choose should be approximately 1:92 and if
M=N should be approximately 3:02. Figure 3 illustrates that asymptotic equation
provides an excellent choice of M over a wide range of N . Using our simulation
of the calendar queue we plot for a wide range of N the value of M for each of
3:02. Again, measurements were taken after a suitably
long warm-up period and over a long enough period so that average time per event
was very stable. Both plots are relatively
at near the asymptotic values .05 and
respectively. Thus, equation (14) seems quite accurate. The bound of Theorem
5.2 is not necessarily tight because we are crudely approximating an integral. For
example, if we choose then the formula (13) requires M=N to be at least
Optimizing Static Calendar Queues 130.030.07100 1000
degredation
Fig. 3. Graph of N vs degradation, M , from simulations for
6. CALIBRATION OF A CALENDAR QUEUE IMPLEMENTATION
In an actual calendar queue implementation we would like to nd the best bucket
width - and number of buckets M . The preceding theory tells us how to do so
if we know the hidden implementation parameters b, c, and d. In this Section we
give a relatively simple method of estimating these parameters simply by timing
executions of the simulation for various values of - proportional to =N . The key
to the method is equation (9) for the expected time per event. We can write KN (-)
as a linear function of the unknowns b, c, and d. The general calibration method is
as follows: rst estimate M to be large enough so that the degradation in using M
buckets over innitely many is small. Second, nd K M
N (-) for a number of dierent
-'s by timing executions of the implementation, and third, use a linear least squares
approximation to nd the b, c and d that best ts the function
We illustrate this method with an example. We developed a calendar queue implementation in C++ and ran it on a DEC Alphastation 250. We chose an exponential jump density with mean μ = 10,000. Just by examining the code we felt that b, the time to process an empty bucket, was considerably larger than c, the cost of traversing a list entry. We made an educated guess that the optimal δ was certainly greater than 5, and we chose M large enough that for δ of this size or larger there is only a small chance that an event in the current bucket is not processed because its time is too large. We timed the calendar queue for 20 values of δ ranging over several orders of magnitude. Using this data we used a linear least squares approximation to compute b, c, and d using equation (15). Figure 4 shows the curve of equation (15) using these parameters. The Figure also shows the measured time per event for additional values of δ. Thus, this method accurately predicts data points that were not used in the linear least squares approximation. It is interesting to note that using these values of b, c, and d in equations (10) and (11), we obtain δ_opt ≈ 61.5 and K(δ_opt) ≈ 1756.38. By contrast, the best δ among the 40 executions has execution time 1754.92.
Fig. 4. Measured and predicted expected time per event for a calibrated implementation of a calendar queue.
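The step from the fitted b, c, d to δ_opt uses equations (10) and (11), which are not reproduced here. As a hedged sketch, the following assumes the closed form δ_opt = (μ/N)·sqrt(2b/c); this matches the sqrt(2a) scaling (with a = b/c) that appears in the proof of Theorem 4.2, but it is our reading rather than a quotation of equation (10).

    import math

    def optimal_bucket_width(b, c, N, mu):
        # Assumed form delta_opt = (mu/N) * sqrt(2*b/c); see the caveat above.
        return (mu / N) * math.sqrt(2.0 * b / c)

    # With the calibrated parameters of this section one would call, e.g.:
    # optimal_bucket_width(b, c, N, 10_000.0)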
Care must be taken in applying this calibration method. In the method, the hidden parameters b, c, and d are measured indirectly by measuring the expected time per event. For fixed M, N, and δ, the expected time per event can vary over different runs because of interruptions by other processes, page faults, or other effects. However, in our experimental setting we carefully controlled the environment so that our running times varied little for a fixed parameter setting. In addition, measurements were taken after a suitably long warm-up period and over a long enough period so that the average time per event was very stable. In a real computing environment that cannot be controlled, this calibration method might not yield such good results.
Ideally, using a fixed N, M (large enough), and μ, we could estimate the hidden parameters b, c, and d once and then use them for any other N, M (large enough), μ, and δ. However, because of the cache behavior of modern processors, the values of b, c, and d are not actually constant independent of M, N, and properties of the jump distribution other than its mean. For example, a smaller M might achieve fewer cache misses, reducing the running time and thereby effectively lowering the values of these constants. It may be that in applying the calibration method, δ is chosen so large that the original M chosen is far larger than necessary. In this case, it might be wise to choose a smaller M and then recalibrate the calendar queue starting with a larger δ.
In a real application of the calendar queue it is unlikely that the jumps are mutually independent, identically distributed random variables as described in our model. Nonetheless, the mean of the jump can be empirically estimated, the calibration done, and equation (10) for the optimal δ applied to find a potentially good δ.
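For readers who want to reproduce timing experiments of the kind described in this Section, the following is a minimal Python sketch of a static calendar queue and of the hold-model loop used to measure time per event. It is an illustration of the data structure analyzed in this paper, not the authors' C++ implementation; buckets are kept as sorted Python lists, and the bucket width δ and bucket count M are fixed at construction.

    import random
    import time

    class StaticCalendarQueue:
        """Minimal static calendar queue: M buckets of width delta."""

        def __init__(self, M, delta):
            self.M, self.delta = M, delta
            self.buckets = [[] for _ in range(M)]
            self.size = 0
            self.cur = 0                  # index of the current bucket
            self.bucket_top = delta       # priority bound of the current slot

        def insert(self, prio):
            b = self.buckets[int(prio / self.delta) % self.M]
            i = 0
            while i < len(b) and b[i] < prio:   # keep bucket sorted (cost c/entry)
                i += 1
            b.insert(i, prio)
            self.size += 1

        def extract_min(self):
            if self.size == 0:
                raise IndexError("empty calendar queue")
            scanned = 0
            while True:
                b = self.buckets[self.cur]
                if b and b[0] < self.bucket_top:    # event due in current bucket
                    self.size -= 1
                    return b.pop(0)
                self.cur = (self.cur + 1) % self.M  # empty step (cost b/bucket)
                self.bucket_top += self.delta
                scanned += 1
                if scanned > self.M:                # a whole year without progress
                    m = min(x for bk in self.buckets for x in bk)
                    self.cur = int(m / self.delta) % self.M
                    self.bucket_top = (int(m / self.delta) + 1) * self.delta
                    scanned = 0

    def time_per_event(N, M, delta, mu, events=100_000, seed=1):
        """Hold model: N events, exponential jumps of mean mu; ns per event."""
        rng = random.Random(seed)
        q = StaticCalendarQueue(M, delta)
        for _ in range(N):
            q.insert(rng.expovariate(1.0 / mu))
        start = time.perf_counter()
        for _ in range(events):
            t = q.extract_min()
            q.insert(t + rng.expovariate(1.0 / mu))
        return (time.perf_counter() - start) / events * 1e9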
7. CONCLUSION
We have shown that there is an expression for the expected time to process an event in the infinite bucket calendar queue and that the bucket width can be chosen optimally. With the bucket width near the optimal bucket width, the calendar queue has expected constant time per event. The optimal bucket width depends only on a few parameters: the incremental time to process an empty bucket (b), the incremental time to traverse a list item (c), the mean of the jump (μ), and the number of events (N). We have shown that the number of buckets M can be chosen to be O(N) so as to achieve minimal or almost minimal expected time per event. Finally, we have shown that the implementation parameters can be determined by using approximation based on the method of linear least squares.
Although the calendar queue runs very fast for certain applications, it has the disadvantage that its performance depends on the choice of parameters δ and M. An interesting problem would be to design a priority queue based on the calendar queue that automatically determines good choices for δ and M. We believe that the calibration method described in this paper might give insight into the design of a dynamic calendar queue where N and/or μ can vary over time.
Section A of the Appendix sets up the notation and concepts that are used throughout the Appendix.
A. INVARIANT DISTRIBUTION, POSITIVITY, AND LIMITS
Consider the Markov chain X̂ described in Section 3. The symbol P_ν shall denote the probability measure induced on the trajectory space of the chain X̂ when the initial distribution is ν, and P_x̂ shall denote trajectory space probabilities when the chain starts at the point x̂. (Note: P_ν = ∫ P_x̂ dν(x̂), the integration being carried out over the entire state space.) Integration (better known as expectation) with respect to P_ν and P_x̂ is denoted E_ν and E_x̂, respectively.
Let B_i stand for the set of points x̂ whose smallest coordinate is x_i and lies in [0, δ), and let A_0 be the set of points with no coordinate in [0, δ). For an x̂ in the state space and a (measurable) subset A, the one-step transition probability (T. P.) that the chain will move from x̂ to a point in A is given by

    P(x̂, A) = ∫_0^∞ 1_A(x̂ + z e_i) f(z) dz  for x̂ in B_i,   P(x̂, A) = 1_A(x̂ − δ 1̂)  for x̂ in A_0,   (16)

where e_i is the standard i-th unit coordinate vector, 1̂ is the vector of ones, and 1_A(x̂) is the function which is 1 for x̂ in A and 0 otherwise.³
³ Actually, (16) does not define a proper transition probability at all points! Indeed, if x̂ is any point with two or more coordinates that are equal and strictly less than δ, then x̂ does not lie in A_0 nor in B_i for any i, so P(x̂, ·) is not defined for all subsets A. However, the jump distribution F has a density, so it has no atoms (discrete points of positive probability). This and the dynamical description of the chain in Section 3 imply that two points which start at the same position eventually become and remain separated w.p. 1. Indeed, this occurs as soon as one of them jumps to the right. Thus, it does no harm if we banish such points from the state space initially. With this understanding, P is indeed a transition probability on its state space.
Let Γ be the set [0, δ… It follows from (16) that P(x̂, A) = 0 for x̂ in Γ and A ⊂ Γ^c. In other words, Γ is an absorbing set for the chain. Moreover, the dynamical description of the chain implies that a particle which starts outside of Γ will reach Γ in a finite (but possibly random) number of steps. (One can show, using the method in the proof in B.3, that the number of steps required to eventually enter Γ has a finite expectation.) Thus Γ^c is transient for the chain. A measure m is an invariant measure for the chain if m is σ-finite and for every measurable subset A of the state space

    m(A) = ∫ P(x̂, A) m{dx̂}.   (17)

A Markov chain is called a Harris recurrent chain, or simply a Harris chain, if there exists a unique, up to positive multiples, invariant measure m such that if A is any Borel subset with m(A) > 0, then P_x̂(X̂(k) ∈ A i.o.) = 1 for every x̂ in the state space. (The initials i.o. stand for "infinitely often".) A Harris chain with an invariant probability measure, necessarily unique, is called positive.
The state space of a Harris chain can be written as a disjoint union of a null set and finitely many recurrent cyclic classes C_1, …, C_d. The integer d is finite and if d = 1, the chain is called aperiodic.
Theorem A.1. If the jump density satisfies J1, J2, and J3, then the Markov chain X̂ with T.P. (16) is a positive, aperiodic, recurrent Harris chain. Its invariant probability is concentrated on Γ.
Corollary A.1. If φ is any bounded measurable function, then for any initial distribution ν, we have

    lim_{n→∞} (1/n) Σ_{k=1}^{n} φ(X̂(k)) = lim_{k→∞} E_ν[φ(X̂(k))] = ∫ φ(x̂) dm(x̂),   P_ν-a.s.   (18)

Note: The left-most term is a limit of averages of random quantities and the assertion is that the limit exists with P_ν-probability 1 and equals the (non-random) quantities on the right. If φ is unbounded but integrable with respect to m, and if the chain is Harris and positive, then the above limit relations remain valid at least in the case that ν is point mass at some x̂ or that ν = m.
Proof of Corollary A.1. The deterministic limit statements of Corollary A.1 are immediate consequences of Proposition 2.5 in Ch. 6, §2, of [Revuz 1984]. The a.s. limit-of-averages assertion is a consequence of the ergodic Theorem for Harris chains. See Theorem 4.3, and its companion remark, in [Revuz 1984], Ch. 4, §4.
Remark. The main significance of aperiodicity is that it justifies the existence of the limit of E_x̂[φ(X̂(k))] occurring in (18). The existence of limits of averages does not require aperiodicity.
Proof of Lemma 3.1. Let A_j be the set of points x̂ such that exactly j components of x̂ lie in [0, δ). Then Z(t), the number of particles in the interval [0, δ) at time t, equals j exactly when X̂(t) ∈ A_j, and from (18) it follows immediately that q_j = m{A_j}.
The proof of Theorem A.1 will be postponed to the very last Appendix below. It is lengthy and somewhat tedious, but there are some interesting features.
B. PROOF OF LEMMA 4.2
B.1 The computation of q_0
In this Section we prove (5) of Lemma 4.2.
The sets B_i, defined in the last Section, are disjoint and their union is the complement (in Γ) of A_0. Since m assigns 0 mass to [0, ∞)^N \ Γ, we have

    m{A_0} + Σ_i m{B_i} = 1.   (19)

Let ψ be any bounded or positive function on the state space. Equation (17) has an analogue for functions which reads: ∫ ψ(x̂) m{dx̂} = ∫ m{dx̂} ∫ ψ(ŷ) P(x̂, dŷ).
Noting that ∫_{A_0} m{dx̂} ∫ ψ(ŷ) P(x̂, dŷ) = ∫_{A_0} ψ(x̂ − δ1̂) m{dx̂}, by (16), and doing a little rearranging, we get

    ∫_{A_0} [ψ(x̂ − δ1̂) − ψ(x̂)] m{dx̂} = Σ_j ∫_{B_j} m{dx̂} [ψ(x̂) − ∫ ψ(ŷ) P(x̂, dŷ)].   (20)

Fix i and let ψ(x̂) = e^{−λ x_i}, where λ is a complex number with nonnegative real part. Then for x̂ in B_i,

    ∫ ψ(ŷ) P(x̂, dŷ) = e^{−λ x_i} Φ(λ),

where Φ(λ) = ∫_0^∞ e^{−λz} f(z) dz is the Laplace transform of F. Also, for x̂ in B_j with j ≠ i,

    ∫ ψ(ŷ) P(x̂, dŷ) = ψ(x̂).

All but the i-th term on the right side of (20) vanishes and it becomes

    [1 − Φ(λ)] ∫_{B_i} e^{−λ x_i} m{dx̂}.

Simplification of the left-hand side of (20) leads to:

    (e^{λδ} − 1) ∫_{A_0} e^{−λ x_i} m{dx̂} = [1 − Φ(λ)] ∫_{B_i} e^{−λ x_i} m{dx̂}.   (21)

Divide (21) by λ and let λ → 0. The result is δ m{A_0} = −Φ′(0) m{B_i} = μ m{B_i}. Equation (5) follows immediately from this and (19).
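Since the display of equation (5) itself does not survive above, the following short LaTeX fragment spells out the last step under the reading of (19) and (21) given in this Section; it is a reconstruction of the argument, not a quotation.

    % Dividing (21) by \lambda and letting \lambda \to 0 gives, for each i,
    \[
      \delta\, m\{A_0\} \;=\; -\Phi'(0)\, m\{B_i\} \;=\; \mu\, m\{B_i\}.
    \]
    % Combining this with (19), m\{A_0\} + \sum_{i=1}^{N} m\{B_i\} = 1, yields
    \[
      q_0 \;=\; m\{A_0\} \;=\; \frac{\mu}{\mu + N\delta},
    \]
    % which is the value of q_0 used later in the formula for K_N(\delta).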
B.2 The Case N = 1
If we observe the successive positions of a single one of our N particles at only those times at which it actually moves, we get a 1-dimensional version of the N-dimensional chain. For each i, let u_i(r) denote the time of the r-th move of particle i. From the description of the chain in terms of the independent random variables ξ, one concludes:
(i) Each sequence {X_i(u_i(r)); r ≥ 0} is itself a Markov chain on the line;
(ii) These N Markov chains are mutually independent.
If one can find an increasing sequence of times {S_k} such that each S_k is a common value of every one of the u_i (that is, for each k there are numbers r_i(k), not necessarily the same, such that S_k = u_i(r_i(k))), then X̂(S_k) has mutually independent components. Here is such a sequence: let S_0 = 0 and, for k > 0, let S_k = T_k + 1. The times T_k are the successive (random) times at which the interval [0, δ) is empty of particles (Z(T_k) = 0). Since X̂(T_k + 1) is obtained from X̂(T_k) by shifting each of the components of X̂(T_k) by the deterministic constant δ, it follows that the components of X̂(S_k) are also mutually independent. It is easy to show that the chain induced on A_0 (or trace chain), the sequence {X̂(T_k)}, is also a Markov chain. See [Revuz 1984], Exercise 3.13, page 27.
An important point to note is that the special structure of X̂ implies that for each i the chain {X_i(T_k); k ≥ 0} coincides in law with the trace chain on [δ, ∞) of an N = 1 chain X. It is at least intuitively clear that the trace chain is positive recurrent and has an invariant probability distribution m_0, say, obtained by renormalizing the distribution m restricted to A_0. (See [Revuz 1984], Ex. 3.13, p. 27, and Prop. 2.9, p. 93 for a formal proof.) Thus for subsets B,

    m_0{B} = m{B ∩ A_0} / m{A_0}.   (22)

But because this trace chain also has independent components, it follows that m_0 is a "product measure" built up from the invariant distributions of each of its component chains. These component chains have identical T.P.'s, so the factors in the product are the same. Let us call this common factor distribution m_10. Once computed, m_10 (concentrated on [δ, ∞)) may be used to compute the limit, as k → ∞, of the
probability of finding exactly j particles in the interval [0, δ) at the times S_k = T_k + 1. By now it should be clear that the limiting distribution of Z(S_k) is a binomial distribution corresponding to N Bernoulli trials with parameter p (defined at (4)). (However, the limit distribution of Z(t) for t tending to infinity without restriction is not a binomial.)
The invariant distribution, let us call it m_1 rather than m, in the case N = 1 of our basic chain can be calculated explicitly and then m_10 obtained from the special case of (22). The measure m_1 turns out to be uniform on [0, δ) and coincides (after renormalization on [δ, ∞)) with the δ-translate of the stationary distribution for the renewal process with interarrival distribution F. This stationary distribution has a density equal to the normalized tail-sum [1 − F(x)]/μ. See [Feller 1971], XI.4. One can give queuing theory arguments for the above description of m_1, but, since equation (21) leads to this result almost immediately, we use that equation to give a quick proof. In the case N = 1, equation (21) simplifies to

    (e^{λδ} − 1) ∫_{[δ,∞)} e^{−λx} m_1{dx} = [1 − Φ(λ)] ∫_{[0,δ)} e^{−λx} m_1{dx},

valid for any complex number λ with Re(λ) ≥ 0. If we set λ = 2πik/δ, where k is an arbitrary integer, we find that the left-hand side vanishes. The density assumption implies that Φ(λ) ≠ 1 for any λ ≠ 0. Hence the corresponding Fourier coefficients of m_1 on [0, δ) vanish, and results in the theory of Fourier series imply that we must have dm_1 = C dx on [0, δ) for some constant C. From (5) in the case N = 1, C = 1/(μ + δ) and m_1{[δ, ∞)} = μ/(μ + δ). For the Laplace transform of m_1 on [δ, ∞), we get

    ∫_{[δ,∞)} e^{−λx} m_1{dx} = e^{−λδ} [1 − Φ(λ)] / (λ(μ + δ)).

Inverting the Laplace transforms in this equation reveals that the density g_1 of m_1 on [δ, ∞), for x ≥ δ, is given by g_1(x) = [1 − F(x − δ)]/(μ + δ). From (22) it is then clear that m_10 has the density [1 − F(x − δ)]/μ for x ≥ δ.
The upshot of the preceding is that we can now conclude that the limit distribution of Z(S_k) is binomial:

    lim_{k→∞} P(Z(S_k) = j) = C(N, j) p^j (1 − p)^{N−j}   (w.p.1),

where p is the parameter defined at (4).
Remark. It follows from the work of the last two Sections that the measure m, when restricted to A_0, is a product measure because its restriction to A_0 coincides with m_0. However, the m-measures of subsets of A_0 have no particular interest; it is only the m-measures of the other A_j (defined in the proof of Lemma 3.1) that are required, and these sets are contained in the complement of A_0. But m restricted to the complement of A_0 is not a product measure.
B.3 Estimates for the q j
Apart from the explicit representation of m on A 0 discussed in the last Section,
a simple expression for m on all of [0; 1) N for N > 1 is not available. This
means that, with one exception (the case F (-) = 0), we do not have simple explicit
formulae for the values of q and must resort to approximations. It
turns out, however, that the approximate formulae are quite amenable to analysis
particularly in the region of interest:
In this Section we nish the proof of the two inequalities of (6) which we henceforth
designate (LH-6), for the left side, and (RH-6) for the right.
To simplify the notation a little, the starting distribution will be omitted, if
not forgotten, when it is not essential. For this proof we introduce the objects
I A j ( b
I A j
where A
. The variable #(t; 0) diers from n - (t) by at most 1 because
the T k 's are the zeros of Z. Therefore, by the ergodic limit theory for b
lim
Hence
Sn
Sn
and then
Sn
Sn
a sequence of random variables fV k g (k 1) by V
shall be the total number of times t in the interval
that the counting variable Z(t) has the value j:
I A j ( b
The occurrence of the event V k > r implies that Z(S k 1 and that there are
at least r jumps, counting from the rst time in [S k 1
magnitudes smaller than -. Hence
and therefore, assuming F (-) < 1,
(Indeed all moments, Ef(V k ) g, are nite.) This inequality, (23),
and the ergodic limit theory now yield
lim
lim
which is (RH-6).
As to (LH-6) note rst that
Hence, see (25), and (23),
lim n
which is (LH-6).
C. PROOF OF THEOREM 4.1
For the purposes of this proof let us write
The conclusion of Theorem 4.1 is equivalent to the assertion that
KN (-) d
x
c
x
uniformly on bounded x-intervals. We base the proof on (8) which states
c
x DN (-) KN (-) d b
x
in the new notation. Fix a number
> 0 and conne x to the interval (0;
=] so
that 0 < -
=N . Let 0 and c 0 be the numbers introduced in assumption J3 in
Section 3. Keeping N > maxf2c 0
gets
Also DN (-) 1(N 2
=N.
Hence
x
c
x DN (-)
From this, and a little algebra, it is easily seen that to
nish the proof of (28), it su-ces to nd a number C 2 , depending on
, such
But, because Np N-
x
x
x
The numbers C 1 (multiplied by c) and C 2 yield estimates for 1 and 2 mentioned
in Theorem 4.1:
D. PROOF OF THEOREM 4.2
Throughout this Section we will write a = b/c, … Moreover, there is no harm in also supposing that …
Step 1. As a function of δ, K_N(δ) is continuous on (0, ∞). The reader is asked to turn to the formula (3). To begin with, the variable q_0 is μ/(μ + Nδ), which is obviously continuous. The only possibly discontinuous term in the formula for K_N is E_N(δ). However, the continuity of this function is an immediate consequence of the following exact formula, which will be discussed after the proof of the Theorem is complete.
Lemma D.1.
Step 2. The function δ ↦ E_N(δ) is nondecreasing. One can prove this by differentiating the expression for E_N(δ) of Lemma D.1 and checking that the result is non-negative.
Here is an outline of an alternative, but more intuitive, proof: Consider two chains X̂_1 and X̂_2 with the same N and jump density but with different bucket widths δ_1 < δ_2. If we follow the trajectory of an individual particle in each chain, then we find that on average, in the chain with the larger bucket size δ_2, the particle gets back to the interval [0, δ_2) quicker than it would get back to the interval [0, δ_1) in the chain with the smaller bucket size δ_1. Since a typical particle of the chain X̂_2 is more often in the interval [0, δ_2) than in [0, δ_1) of the chain X̂_1, the average number of particles in [0, δ_2) of X̂_2 is at least as large as the average number of particles in [0, δ_1) of X̂_1.
Step 3. We next establish
lim
uniformly on [-
for each xed - 1 > 0. The explicit formula for q 0 yields that
nondecreasing function of -. By equation (3) and step 2, we nd
that the function (1 q 0 (-))KN (-) q 0 (-)b is also nondecreasing in -. Hence for
b:
For 1. From the
above inequality we have for - 1 - 2 and N =- 1 ,
From this inequality, it follows that KN (-) goes to innity uniformly in the interval
for each xed - 1 > 0. This follows from
equation (8).
Step 4. For
2a; we have min 0<-<
Moreover, if - 1 is the minimizing - on this interval, then
where - o;N is dened above. To prove all this, let
By straightforward calculus for each N , HN (-) has a global minimum on (0; 1) at
the point - o;N . Let - opt be a value of - which gives the minimum value of KN (-) on
the given interval (0;
=N ]. By Theorem 4.1, on this interval we can nd a constant
C, depending on
, such that for all N su-ciently large,
uniformly for 0 < - <
=N . Since
is in the interval (0;
=N ]. On this
interval, KN (-) is thus sandwiched between the two convex functions, HN (-)C=N ,
both of which have a global minimum at the same point - o;N interior to the interval.
For a xed N , - opt must be between the two solutions to the equation (in -) HN (-)
. By a simple calculation one nds that the dierence
between the two solutions is O(N 3=2 ) and this yields -
any - between the two solutions one nds that KN
Step 5. The next step is to show that for any
0 , there is
such that for all N N 1
KN
Thus, the minimum exhibited in step 4, extends to the xed interval (0; - 1 ]. This
fact and step 3 imply that for all N su-ciently large,
min
completing the proof of the Theorem.
For the moment we x - 1 such that 0 < F (- 1 ) < 1. We will choose - 1 later. By
the inequality (8) and the fact that p(-[1 F (-)]= we have for all - 1
KN (-) cQ a=z
-. Dene LN
.
For each N , the horizontal line at height K cuts the graph of the convex
function LN at two points, the larger of which we call z
N . Thus, z
N is the larger
root of the equation
As a sequence in N , the values z
N converge to a bounded positive limit. Dene
N =QN . Hence, the sequence N-
also converges to a limit,
. Note that
tends to
2a as Q approaches 1. We choose - 1 (and hence Q) so that
<
Now, choose N 1 such that -
Since the minimum
of KN (-) is bounded above by K bounded below by the function
LN (QN=(-)), then the minimum of KN (-) in the interval (0; must already lie
in the interval (0; -
N ], and hence in the interval (0;
D.1 A discussion of the exact formula for E_N(δ).
The proof of this result is quite long and is based on some exact, though very complicated, integral formulae for the q_j's. See [Erickson 1999] for the details. The exact formula for E_N(δ) leads to an exact formula for K_N(δ), but our work has led us to the conclusion that the excellent asymptotic formulae of Theorems 4.1 and 4.2 (and the simple inequalities of Lemma 4.2 which lead to them) are of much greater practical use and are certainly easier to prove. For this reason we have not included the long proof of the exact formula.
Our main use of Lemma D.1 was to shorten the proof, slightly, of the global minimization of K_N. Note that D.1 yields, immediately, the continuity of K_N as a function of δ. One requires continuity in order to speak sensibly of the existence of a minimizing δ. Even without the continuity, however, the basic result of Theorem 4.2 is essentially correct; only the language used to express it needs to be changed. (One must use the term "greatest lower bound" in place of "minimum" and one can only assert that there are points δ at which the greatest lower bound is approximately attained.)
E. PROOF OF LEMMA 5.2
We will write L^M_N(δ) …, and Z_A(t) for the number of particles in the set A at time t. If A is a subset of [δ, ∞) we have … (The T_k are the successive times at which the interval [0, δ) is empty of particles.) Hence,
st
Z
Z
w.p.1, where Suppose that at time T k 1 there are
particles at positions x 1 2-). Then at there will be r
particles in [0; -) at positions x
where according as the i-th of these particles lands in or not when
it is nally removed from the interval [0; -). (This removal must occur during
Writing by the strong Markov property,
where F
is the eld of the random variables T k
and where H t (b) is the probability that a particle starting at the origin lands in the
interval it rst jumps over t. This H satises
where U is the renewal measure. See [Feller 1971], page 369. 4
In general U(z) Uf[0; z]g [1 F (z)] 1 for distributions on [0; 1) so that
sup
[F (jM- x z) F (jM- x z)] Ufdzg
(Recall the denition of 2 in the statement of Theorem 5.2.) Calling the right-hand
side p and noting that conditional on the -eld F
, the variables u i are
4 Feller defines H in terms of the open interval (δ, ∞) whereas we are using the closed interval. Because F has no atoms, this difference in definition has no consequence.
independent of k , we have
[Z +-
[Z +-
r
[1 F (-)] 1 Z +-
, and Z fg
means Z fg (T j k. Now at the times fT j g
the particles are independent so the limiting joint distribution of Z +-
is a trinomial. (Separately, they have binomial limit distributions.) Letting k !1
we get EfZ [-;2-)
Z +-
where p is dened at (4) and Going back to (32) with these
calculations we obtain
Z +-
Z [-;2-)
(Recall the basic property of conditional expectations EfE[ j F
Replacing p with 2 =[1 F (-)] and q 0 with =( +N-) combining fractions and
dropping the factor 1 F (-)] which will occur in the numerator, we nally obtain
the upper bound on L M
N .
For the lower bound we have
which evaluates to the lower bound on L M
N .
F. PROOF OF LEMMA 5.3
In the following we let lim, lim sup, and lim inf stand for the limits of various
quantities as N !1 with the other variables constrained to vary as stated in the
hypothesis.
First let us note that lim F
jrx. Note that t j does not vary with N . For all su-ciently
large N , - will be so small that the intervals f(t j -; t j +-]g 1
are non-overlapping.
-], then for all N we have JN+1 JN : Hence
by continuity of F (no atoms). (The letter F stands for both the distribution
function and the induced probability measure as is customary.) From this (and the
limits
Next
[1 F (jrx)]:
As the rst sum on the left goes to
it then follows that
[1 F (jrx)]
Using these limits in the upper bound for L M
N (-) we get
lim sup L M
[1 F (jrx)]:
Similarly, from the lower bound for L M
mu+N-
[1 F (jrx)]:
Thus the lim sup L M
so the limit of L M
exists and its value is as stated.
G. PROOF OF THEOREM A.1.
We have seen that for each k ≥ 1 and given initial positions, the N components of X̂(S_k) are mutually independent random variables. Also, it is not hard to see that the conditional distribution of X_i(S_{k+1}) given X_i(S_k) = x is the distribution of the residual waiting time at epoch δ of a delayed renewal process starting at epoch x with interarrival distribution F. See [Feller 1971], page 369, and [Erickson 1999]. (We have already seen this distribution in the proof of Lemma 5.2, Section E, though it was described in slightly different language.)
Letting H_s{I} denote the probability that the residual waiting time at epoch s lies in I for a pure renewal process starting at 0 (cf. Section E), it follows that for any fixed x > 0, every integer k ≫ x/δ, and any Borel set I ⊂ [0, ∞), the distribution of X_i(S_k) is given by such a residual-waiting-time probability. Using the Markov property it then follows that for fixed x̂ and Borel sets I_i ⊂ [0, ∞), the joint law of X̂(S_k) factors as the product of the corresponding one-dimensional laws.
If U denotes the renewal measure, then the assumption (a) implies that U has an
absolutely continuous part which possesses a strictly positive density on (0; 1).
But, see [Feller 1971], page 369,
Consequently, the measure I 7! H k- x fIg also has an absolutely continuous part
which is strictly positive on [0; ). The conclusion one may draw from the preceding
is that f b
X(S k )g is irreducible with respect to the measure ' N , the Lebesgue measure
in R N (restricted to (0; ) N ). See [Revuz 1984], ch. 3 x2. This implies that the
trace chain f b
X(T k )g is also ' N irreducible on its state space A 0 . Together, these
two assertions imply that the full chain b
But we can also draw additional useful conclusions from (34) and (35).
From Stone's decomposition Theorem, [Revuz 1984], ch 5, x5, we can write
is a nite measure and U 1 is absolutely continuous with a bounded
continuous density u such that lim x!1 1=. For any Borel I [0; 1) and
Hence, by dominated convergence,
lim
Z
I
[1 F (x)] dx:
This and the product formula (34) yield that for any Borel set A [0; 1) N
lim
0 is the product measure F 0 F 0 F 0 and F 0 is the probability
distribution on [0; ) with density f1 F (x)g=. Not only does (36) give us one of
the limit Theorems we have used earlier, but it implies that subchains the b
and b
are both Harris recurrent (with invariant probabilities m
translate by - b
implies that, with
probability 1, b
for innitely many times k whatever be the initial
position, but b
is obtained from b
adding - to each component so the
previous assertion is also correct for b
with respect to its invariant probability.
See [Revuz 1984], ch 2, x3.
Consider now the full chain b
g. If ' N (A) > 0, A , then
the preceding makes it clear that b
will hit A with positive probability. This implies
that b
X is ' N -irreducible. According to [Revuz 1984], ch 2, Theorem 2.3, 2.5, and
Denition 2.6, either b
X is a Harris chain with a (unique up to constant multiples)
invariant measure m, or else the potential kernel is proper. The potential kernel, K,
is dened by K(b x;
Ag. If K is proper, then can be written
as an increasing sequence of subsets Dn each of which has bounded potential. But,
eventually any such sets must have positive Lebesgue measure, and in that case
(34) implies that P b x f b
for innitely many positions
x. But K(b x; Dn ) < 1 implies that the expected total number of hits in Dn is nite
which implies that the number of hits must be nite with probability 1. We thus
cannot have a proper potential kernel and therefore b
X must be a Harris chain.
Consider next the aperiodicity issue. Seeking a contradiction, let us suppose that
X is periodic. Let fC i g d
be the recurrent cyclic classes in the decomposition of
the state space. These subsets have positive Lebesgue measure. Without loss of
generality we may suppose that mfC 1 \AN g > 0, to be the
smallest S k such that b
This stopping time is nite on account of
.
s be a doubly indexed sequence of independent random variables each with
distribution F . The earliest possible epoch after at which b
can arrive in A
no more than one particle can move at any
particular step. For any integer r 1 and any Borel rectangle
A 0 we have
Y
On account of hypotheses J1, the distribution F and each of its convolutions F r
puts positive mass on every subinterval in (0; ). Hence the right-hand side of
(37) is strictly positive whenever the cylinder set A has positive Lebesgue measure.
Standard measure theory implies that this is also correct for any Borel set A
[-) of positive measure.
probability 1, at time , b
X() belongs to C 1 \ AN C 1 . So, if the chain
is periodic, then, w.p.1, at all future epochs of the form nd the chain will
always be found in C 1 . Also, for each d 1, at times t
mod d; the chain must belong to the set C 1+k(d) which is disjoint from the other
classes including C 1 . But in (37), for any N + r N the right hand side is
strictly positive. By choosing A in (37) to be any set in C k \ [-) N of positive
measure and letting r take on dierent ( mod d) values, we get a contradiction to
the previous assertion about belonging to disjoint sets. There is no contradiction if
X is aperiodic.
It remains to show that the invariant measures m (unique up to constant multi-
ples) of the full chain b
are nite: m() < 1.
The trace chain f b
X(TK )g is positive recurrent with invariant probability m 0 . But
m 0 is also a multiple of m restricted to A 0 . Hence, It follows from the
Renewal Theorem that the mean return time to A 0 is also nite. Let 0 be a
bounded measurable function on A 0 . Then is m 0 -(and hence m-) summable and
lim
Z
(b x)dm 0 (b x) > 0:
denote the number of visits to A 0 by the full chain during 0 s t.
ih
But if the chain b
were null, that is if m had innite mass, then for any bounded
m-summable function , ([Revuz 1984], Theorem 2.6, page 198),
0:
By Fatou's Lemma, one can readily see that this would contradict (38).
ACKNOWLEDGMENTS
We would like to thank the two referees and the editor for their many valuable
suggestions for improving the paper.
--R
Implementation and analysis of binomial queue algorithms.
Calendar queues: A fast O(1) priority queue implementation for the simulation event set problem.
Calendar queue expectations.
An Introduction to Probability Theory and Its Applications
Description and analysis of an efficient
The Art of Computer Programming
Markov Chains.
Scalable architectures for integrated traffic shaping and link scheduling in high-speed ATM switches
A data structure for manipulating priority queues.
--TR
Self-adjusting binary search trees
adjusting heaps
Calendar queues: a fast 0(1) priority queue implementation for the simulation event set problem
A data structure for manipulating priority queues
The Art of Computer Programming, 2nd Ed. (Addison-Wesley Series in Computer Science and Information
--CTR
Wai Teng Tang , Rick Siow Mong Goh , Ian Li-Jin Thng, Ladder queue: An O(1) priority queue structure for large-scale discrete event simulation, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.15 n.3, p.175-204, July 2005
Rick Siow Mong Goh , Ian Li-Jin Thng, Twol-amalgamated priority queues, Journal of Experimental Algorithmics (JEA), v.9 n.es, 2004
Farokh Jamalyaria , Rori Rohlfs , Russell Schwartz, Queue-based method for efficient simulation of biological self-assembly systems, Journal of Computational Physics, v.204 n.1, p.100-120, 20 March 2005 | priority queue;optimization;calendar queue;discrete event simulation;data structures;algorithm analysis;markov chain |
361168 | Critical Motions for Auto-Calibration When Some Intrinsic Parameters Can Vary. | Auto-calibration is the recovery of the full camera geometry and Euclidean scene structure from several images of an unknown 3D scene, using rigidity constraints and partial knowledge of the camera intrinsic parameters. It fails for certain special classes of camera motion. This paper derives necessary and sufficient conditions for unique auto-calibration, for several practically important cases where some of the intrinsic parameters are known (e.g. skew, aspect ratio) and others can vary (e.g. focal length). We introduce a novel subgroup condition on the camera calibration matrix, which helps to systematize this sort of auto-calibration problem. We show that for subgroup constraints, criticality is independent of the exact values of the intrinsic parameters and depends only on the camera motion. We study such critical motions for arbitrary numbers of images under the following constraints: vanishing skew, known aspect ratio and full internal calibration modulo unknown focal lengths. We give explicit, geometric descriptions for most of the singular cases. For example, in the case of unknown focal lengths, the only critical motions are: (i) arbitrary rotations about the optical axis and translations, (ii) arbitrary rotations about at most two centres, (iii) forward-looking motions along an ellipse and/or a corresponding hyperbola in an orthogonal plane. Some practically important special cases are also analyzed in more detail. | Introduction
One of the core problems in computer vision is the recovery of 3D scene
structure and camera motion from a set of images. However, for certain
con-gurations there are inherent ambiguities. This kind of problem
was already studied in optics in the early 19th century, for example,
by Vieth in 1818 and Muller in 1826. Pioneering work on the subject
was also done by Helmholtz. See [13] for references. One well-studied
ambiguity is when the visible features lie on a special surface, called
a critical surface, and the cameras have a certain position relative
to the surface. Critical surfaces or igef#hrlicher Ortj were studied by
Krames [21] based on a monograph from 1880 on quadrics [32]. See also
the book by Maybank [24] for a more recent treatment. Another well-known
ambiguity is that when using projective image measurements, it
is only possible to recover the scene up to an unknown projective transformation
[8, 10, 35]. Additional scene, motion or calibration constraints
are required for a (scaled) Euclidean reconstruction. Auto-calibration
uses qualitative constraints on the camera calibration, e.g. vanishing
skew or unit aspect ratio, to reduce the projective ambiguity to a sim-
ilarity. Unfortunately, there are situations when the auto-calibration
constraints may lead to several possible Euclidean reconstructions. In
this paper, such degeneracies are studied under various auto-calibration
constraints.
In general it is possible to recover Euclidean scene information from
images by assuming constant but unknown intrinsic parameters
of a moving projective camera [26, 7]. Several practical algorithms have
been developed [39, 2, 30]. Some of the intrinsic parameters may even
vary, e.g. the focal length [31], or the focal length and the principal
point [14]. In [29, 15] it was shown that vanishing skew suOEces for a
Euclidean reconstruction. Finally in, [16] it was shown that given at
least 8 images it is suOEcient if just one of the intrinsic parameters is
known to be constant (but otherwise unknown).
However, for certain camera motions, these auto-calibration constraints
are not suOEcient [42, 1, 40]. A complete categorization of these
critical motions in the case of constant intrinsic parameters was given
by Sturm [36, 37]. The uniformity of the constant-intrinsic constraints
makes this case relatively simple to analyze. But it is also somewhat
unrealistic: It is often reasonable to assume that the skew actually vanishes
whereas focal length often varies between images. While the case of
constant parameters is practically solved, much less is known for other
auto-calibration constraints. In [43], additional scene and calibration
constraints are used to resolve ambiguous reconstructions, caused by a
-xed axis rotation. The case of two cameras with unknown focal lengths
is studied in [12, 28, 4, 20]. For the general unknown focal length case,
Sturm [38] has independently derived results similar to those presented
here and in [20, 19].
In this paper, we generalise the work of Sturm [37] by relaxing the
constraint constancy on the intrinsic parameters. We show that for a
large class of auto-calibration constraints, the degeneracies are independent
of the speci-c values of the intrinsic parameters. Therefore, it
makes sense to speak of critical motions rather than critical con-gura-
tions. We then derive the critical motions for various auto-calibration
constraints. The problem is formulated in terms of projective geometry
and the absolute conic. We start with fully calibrated cameras, and then
continue with cameras with unknown and possibly varying focal length,
principal point, and -nally aspect ratio. Once the general description of
the degenerate motions has been completed, some particular motions
frequently occurring in practice are examined in more detail.
This paper is organized as follows. In Section 2 some background
on projective geometry for vision is presented. Section 3 gives a formal
problem statement and reformulates the problem in terms of the absolute
conic. In Section 4, our general approach to solving the problem is
presented, and Section 5 derives the actual critical motions under various
auto-calibration constraints. Some particular motions are analyzed
in Section 6. In order to give some practical insight of critical and near-
critical motions, some experiments are presented in Section 7. Finally,
Section 8 concludes.
2. Background
In this section, we give a brief summary of the modern projective formulation
of visual geometry. Also, some basic concepts in projective and
algebraic geometry are introduced. For further reading, see [6, 24, 33].
A perspective (pinhole) camera is modeled in homogeneous co-ordinates
by the projection equation
world point,
image, P is the 3 \Theta 4 camera projection matrix and ' denotes
equality up to scale. Homogeneous coordinates are used for both image
and object coordinates. In a Euclidean frame, P can be factored, using
a QR-decomposition, cf. [9], as
Here the extrinsic parameters (R; t) denote a 3 \Theta 3 rotation matrix
and a 3 \Theta 1 translation vector, which encode the pose of the camera. The
columns of de-ne an orthogonal base. The standard base is
de-ned by e . The intrinsic
parameters in the calibration matrix K encode the camera's internal
geometry: f denotes the focal length, fl the aspect ratio, s the skew
and the principal point.
A camera for which K is unknown is said to be uncalibrated.
It is well-known that for uncalibrated cameras, it is only possible to
recover the 3D scene and the camera poses up to unknown projective
transformation [8, 10, 35]. This follows directly from the projection
equation (1): Given one set of camera matrices and 3D points that
satis-es (1), another reconstruction can be obtained from
where T is a non-singular 4 \Theta 4 matrix corresponding to a projective
transformation of P 3 .
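As a small numerical illustration of this ambiguity (a sketch with arbitrary numbers, using one common parameterization of K rather than anything specific to this paper), the following verifies that replacing (P, X) by (P T⁻¹, T X) leaves every image point unchanged up to scale, so projective image measurements alone cannot distinguish the two reconstructions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Build P = K [R | t] for an arbitrary calibrated camera (equation (2)).
    f, gamma, s, u, v = 800.0, 1.0, 0.0, 320.0, 240.0
    K = np.array([[f, s, u], [0.0, gamma * f, v], [0.0, 0.0, 1.0]])
    th = 0.3
    R = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0, 0.0, 1.0]])
    t = np.array([[0.1], [0.2], [2.0]])
    P = K @ np.hstack([R, t])                      # 3x4 projection matrix

    X = np.append(rng.normal(size=3), 1.0)         # homogeneous scene point
    x = P @ X                                      # its image, equation (1)

    # Projective ambiguity: any non-singular 4x4 T gives the same image.
    T = rng.normal(size=(4, 4))
    x2 = (P @ np.linalg.inv(T)) @ (T @ X)
    print(np.allclose(x / x[-1], x2 / x2[-1]))     # True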
A quadric in P n is de-ned by the quadratic form
where Q denotes a (n
homogeneous point coordinates. The dual is a quadric envelope, given
by
where \Pi denotes homogeneous coordinates for hyper-planes of dimension
are tangent to the quadric. For non-singular matrices, it
can be shown that Q ' (Q ) \Gamma1 (see [33] for a proof). A quadric with a
non-singular matrix is said to be proper. Quadrics with no real points
are called virtual. In the plane, are called conics. We
will use C for the 3 \Theta 3 matrix that de-nes the conic points x T
and C for its dual that de-nes the envelope of tangent lines l T C
(where C ' C \Gamma1 ). The image of a quadric in 3D-space is a conic, i.e.
the silhouette of a 3D quadric is projected to a conic curve. This can
be expressed in envelope forms as
Projective geometry encodes only cross ratios and incidences. Properties
like parallelism and angles are not invariant under dioeerent projective
coordinate systems. An aOEne space, where properties like parallelism
and ratios of lengths are preserved, can be embedded in a
projective space by singling out a plane at in-nity \Pi 1 . The points on
\Pi 1 are called points at in-nity and be interpreted as direction vectors.
In P 3 , Euclidean properties, like angles and lengths, are encoded by singling
out a proper, virtual conic on \Pi 1 . This absolute
scalar products between direction vectors. Its dual, the dual absolute
quadric\Omega
products between plane
normals.\Omega
1 is a 4 \Theta 4
symmetric rank 3 positive semide-nite matrix, where the coordinate
system is normally chosen such
is\Omega 1 's
unique null
vector:\Omega 1 The similarities or scaled Euclidean
transformations in projective space are exactly those transformations
that
invariant. The transformations that leave \Pi 1 invariant
are the aOEne transformations. The dioeerent forms of the absolute
conic will be abbreviated to (D)AC for (Dual) Absolute C onic.
Given image conics in several images, there may or may not be a
3D quadric having them as image projections. The constraints which
guarantee this in two images are called the Kruppa constraints [22].
In the two-image case, these constraints have been successfully applied
in order to derive the critical sets, e.g. [28]. For the more general case
of multiple images, the projection equation given by (4) can be used for
each image separately.
3. Problem Formulation
The problem of auto-calibration is to -nd the intrinsic camera parameters
denotes the number of camera positions.
In general, auto-calibration algorithms proceed from a projective re-construction
of the camera motion. In order to auto-calibrate, some
constraints have to be enforced on the intrinsic parameters, e.g. vanishing
skew and/or unit aspect ratio. Thus, we require that the calibration
matrices should belong to some proper subset G of the group K of 3 \Theta 3
upper triangular matrices. Once the projective reconstruction and the
intrinsic parameters are known, Euclidean structure and motion are
easily computed.
For a general set of scene points seen in two or more images, there
is a unique projective reconstruction. However, certain special con-g-
urations, known as critical surfaces, give rise to additional ambiguous
solutions. For two cameras, the critical con-gurations occur only if both
camera centres and all scene points lie on a ruled quadric surface [24].
Furthermore, when an alternative reconstruction exists, then there will
always exist a third distinct reconstruction. For more than two cameras,
the situation is less clear. In [25], it is proven that when six scene points
and any number of camera centres lie on a ruled quadric, then there
are three distinct reconstructions. If there are other critical surfaces is
an open problem.
We will avoid critical surfaces by assuming unambiguous recovery
of projective scene structure and camera motion. In other words, the
camera matrices and the 3D scene are considered to be known up to an
unknown projective transformation. We formulate the auto-calibration
problem as follows: If all that is known about the camera motions
and calibrations is that each calibration matrix K i lies in some given
constraint set G ae K, when is a unique auto-calibration possible? More
Problem 3.1. Let G ae K. Then, given the true camera projections
G, is there any projective
transformation T (not a similarity) such that ~
~
calibration matrices ~
lying in
G?
Without constraints on the intrinsic parameters T can be chosen
arbitrarily, so auto-calibration is impossible. Also, T is only de-ned
modulo a similarity,
as such transformations leave K in the decomposition
invariant. Based on the above problem formulation, we can de-ne precisely
what is meant by a motion being critical.
De-nition 3.1. Let G ae K and let (P
denote two projectively
related motions, with calibration matrices (K
respectively. If the two motions are not related by a Euclidean transformation
and
are said to be critical with respect to G.
A motion is critical if there exists an alternative projective motion
satisfying the auto-calibration constraints. Without any additional as-
sumptions, it is not possible to tell which motion is the true one. One
natural additional constraint is that the reconstructed 3D structure
should lie in front of all cameras. In many (but by no means all) cases
this reduces the ambiguity, but it depends on which 3D points are
observed.
According to (4), the image of the dual absolute quadric is

    ω*_i ≃ P_i Ω*_∞ P_i^T = K_i K_i^T.

Thus, knowing the calibration of a camera is equivalent to knowing its image ω*_i.
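The fact that this image depends only on the calibration, and not on R_i or t_i, is easy to check numerically. The following sketch (with the usual convention Ω*_∞ = diag(1, 1, 1, 0) and an arbitrary choice of K, R, t) confirms that P Ω*_∞ P^T equals K K^T.

    import numpy as np

    def dual_image_of_absolute_quadric(K, R, t):
        """Return omega* = P Omega*_inf P^T for P = K [R | t]."""
        P = K @ np.hstack([R, t.reshape(3, 1)])
        Omega_inf = np.diag([1.0, 1.0, 1.0, 0.0])   # dual absolute quadric
        return P @ Omega_inf @ P.T

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    c, s = np.cos(0.7), np.sin(0.7)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    t = np.array([1.0, -2.0, 3.0])

    w = dual_image_of_absolute_quadric(K, R, t)
    print(np.allclose(w, K @ K.T))                  # True: omega* = K K^T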
Also, if there is a projectively related motion ( ~
the false image of the true absolute conic is the true image of a ifalsej
absolute conic:
i\Omega ~
T\Omega
i\Omega
T\Omega
virtual quadric of rank 3. This
observation allows us to eliminate the ifalsej motion ( ~
i=1 from the
problem and work only with the true Euclidean motion, but with a false
absolute dual
quadric\Omega
f .
Problem 3.2. Let G ae K. Then, given the true motion
G, is there any other proper, virtual conic
\Omega
f , dioeerent
1 , such that P
i\Omega
Given only a 3D projective reconstruction derived from uncalibrated
images, the
true\Omega 1 is not distinguished in any way from any other
proper, virtual planar conic in projective space. In fact, given any such
potential
conic\Omega
f , it is easy to -nd a 'rectifying' projective transformation
that converts it to the Euclidean DAC
and hence de-nes a false Euclidean structure. To recover the true struc-
ture, we need constraints that single out the
true\Omega 1 and \Pi 1 from all
possible false ones. Thus, ambiguity arises whenever the images of some
non-absolute conic satisfy the auto-calibration constraints. We call such
conics potential absolute conics or false absolute conics. They are
in one-to-one correspondence with possible false Euclidean structures
for the scene.
A natural question is whether the problem is dependent on the actual
values of the intrinsic parameters. We will show that this is not the
case whenever the set G is a proper subgroup of K. Fortunately, according
to the following easy lemma, most of the relevant auto-calibration
constraints are subgroup conditions.
Lemma 3.1. The following constrained camera matrices form proper
subgroups of the 3 \Theta 3 upper triangular matrices K:
(i) Zero skew, i.e.
(ii) Unit aspect ratio, i.e.
(iii) Vanishing principal point, i.e.
(iv) Unit focal length, i.e.
(v) Combinations of the above conditions.
Independence of the values of the intrinsic camera parameters is
shown as follows:
Lemma 3.2. Let G i 2 G for m, where G is a proper subgroup
of K. Then, the motion (P
is critical w.r.t. G if and only if the
motion
is critical w.r.t. G.
Proof. If
is critical with the alternative motion ( ~
calibrations
are also
critical, because G i K
by the closure of G under multipli-
cation. The converse also holds with G \Gamma1
, by the closure of G under
inversion.
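Lemma 3.1 can also be verified mechanically; the following sketch checks case (i) numerically, namely closure of the zero-skew calibration matrices under products and inverses. The parameterization of K is one common convention and the numerical values are arbitrary.

    import numpy as np

    def zero_skew_K(f, gamma, u, v):
        # Calibration matrix with s = 0, in the parameterization of Section 2.
        return np.array([[f, 0.0, u],
                         [0.0, gamma * f, v],
                         [0.0, 0.0, 1.0]])

    K1 = zero_skew_K(800.0, 1.1, 320.0, 240.0)
    K2 = zero_skew_K(500.0, 0.9, 300.0, 200.0)

    for K in (K1 @ K2, np.linalg.inv(K1)):
        # Closure: the product and the inverse remain upper triangular with a
        # zero skew entry and unit (3,3) entry, i.e. they stay in the subgroup.
        upper = np.allclose(K[np.tril_indices(3, -1)], 0.0)
        print(upper, np.isclose(K[0, 1], 0.0), np.isclose(K[2, 2], 1.0))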
Camera matrices with prescribed parameters do not in general form
a subgroup of K, but it suOEces for them to be of the more general form
known matrix and K belongs to a proper subgroup of
K. For example, the set of all camera matrices with known focal length
f has the form 2f 0 0
The invariance with respect to calibration parameters simpli-es things,
especially if one chooses G
m. With this in mind,
we restrict our attention to proper subgroups of K and formulate the
problem as follows.
Problem 3.3. Let G ae K be a proper subgroup. Then, given the true
motion
for calibrated cameras, where
any other false absolute
conic\Omega
f , dioeerent
i]\Omega
~
where ~
4. Approach
We want to explicitly characterize the critical motions (relative camera
placements) for which particular auto-calibration constraints are insuf-
-cient to uniquely determine Euclidean 3D structure. We assume that
projective structure is available. Alternative Euclidean structures correspond
one-to-one with possible locations for a potential absolute conic
in P 3 . Initially, any proper virtual projective plane conic is potentially
absolute, so we look for such
conics\Omega whose images also satisfy the
given auto-calibration constraints. Ambiguity arises if and only if more
than one such conic exists. We work with the true camera motion in
a Euclidean frame where the true absolute
conic\Omega 1 has its standard
coordinates.
Several general invariance properties help to simplify the problem:
Calibration invariance: As shown in the previous section, if the auto-calibration
constraints are subgroup conditions, the speci-c parameter
values are irrelevant. Hence, for the purpose of deriving critical motions,
we are free to assume that the cameras are in fact secretly calibrated,
even though we do not assume that we know this. (All that
we actually know is K which does not allow some image conics
I to be excluded outright).
Rotation invariance: For known-calibrated cameras, K i can be set to
identity, and thus the image !
I of any false AC must be identical
to the image of the true one. Since
i\Omega
i\Omega
hold for any rotation R, the image !
i is invariant to camera rota-
tions. Hence, criticality depends only on the camera centres, not on
their orientations. More generally, any camera rotation that leaves the
auto-calibration constraints intact is irrelevant. For example, arbitrary
rotations about the optical axis and 180 ffi AEips about any axis in the
optical plane are irrelevant if (a; s) is either (1; 0) or unconstrained,
and
Translation invariance: For true or false absolute conics on the plane
at in-nity, translations are irrelevant so criticality depends only on
camera orientation.
In essence, Euclidean structure recovery in projective space is a matter
of parameterizing all of the possible proper virtual plane conics, then
using the auto-calibration constraints on their images to algebraically
eliminate parameters until only the unique true absolute conic remains.
More abstractly, if C parameterizes the possible conics and X the camera
geometries, the constraints cut out some algebraic variety in (C; X)
space. A constraint set is useful for Euclidean structure from motion
recovery only if this variety generically intersects the subspaces
in one (or at most a few) points (C; X 0 ), as each such intersection represents
an alternative Euclidean structure for the reconstruction from
that camera geometry. A set of camera poses X is critical for the
constraints if it has exceptionally (e.g. in-nitely) many intersections.
Potential absolute conics can be represented in several ways. The
following parameterizations have all proven relatively tractable:
(i) Choose a Euclidean frame in
which\Omega
f is diagonal, and express
all camera poses relative this frame [36, 37]. This is symmetrical with
respect to all the images and usually gives the simplest equations. How-
ever, in order to -nd explicit inter-image critical motions, one must
revert to camera-based coordinates which is sometimes delicate. The
cases of a -nite false absolute conic and a false conic on the plane at
in-nity must also be treated separately,
e.g.\Omega
with either d 3 or d 4 zero.
(ii) Work in the -rst camera frame,
encoding\Omega
f by its -rst image ! and supporting plane (n T ; 1). Subsequent images !
are
given by the inter-image homographies H
is the i th camera pose. The output is in the -rst camera frame and
remains well-de-ned even if the conic tends to in-nity, but the algebra
required is signi-cantly heavier.
Parameterize\Omega
f implicitly by two images !
2 subject to the
Kruppa constraints. In the two-image case this approach is both relatively
simple and rigorous - two proper virtual dual image conics
satisfy the Kruppa constraints if and only if they de-ne a (pair of)
corresponding 3D potential absolute conics - but it does not extend
so easily to multiple images.
The derivations below are mainly based on method (i) .
5. Critical Motions
In this section, the varieties of critical motions are derived. In most
situations, the problem is solved in two separate cases. One is when
there are potential absolute conics on the plane at in-nity, \Pi 1 , and the
other one is conics outside \Pi 1 . If the potential conics are all on \Pi 1 , it is
still possible to recover \Pi 1 and thereby obtain an aOEne reconstruction.
Otherwise, the recovery of aOEne structure is ambiguous, and we say that
the motion is critical with respect to aOEne reconstruction.
The following constraints on the camera calibration are considered:
(i) known intrinsic parameters,
(ii) unknown focal lengths, but the other intrinsic parameters known,
(iii) known skew and aspect ratio.
These constraints form a natural hierarchy and they are perhaps the
most interesting ones from a practical point of view. In Section 3, it
was shown in that it is suOEcient to study the normalized versions of
the auto-calibration constraints, since critical motions are independent
of the speci-c values of the intrinsic parameters. That is, when some of
the intrinsic parameters are known, e.g. the principal point is (10; 20),
we may equivalently analyze the case of principal point set to (0; 0).
The corresponding camera matrices give rise to subgroup conditions
according to Lemma 3.1.
5.1. Known intrinsic parameters
We start with fully calibrated perspective cameras. The results may not
come as a surprise, but it is important to know that there are no other
possible degenerate con-gurations.
Proposition 5.1. Given projective structure and calibrated perspective
cameras at m - 3 distinct -nite camera centres, Euclidean structure
can always be recovered uniquely. With distinct camera centres,
there is always exactly a twofold ambiguity.
Proof. Assuming that the cameras have does not change the
critical motions. The camera orientations are irrelevant because any
false absolute conic must have the same (rotation invariant) images as
the true one. Calibrated cameras never admit false absolute conics on
as the (known) visual cone of each image conic can intersect \Pi 1
in only one conic, which is the true absolute conic. Therefore, consider
a -nite absolute
conic\Omega
f , with supporting plane outside \Pi 1 . As all
potential absolute conics are proper, virtual and positive semi-de-nite
[34, 37], a Euclidean coordinate system can be chosen such
f has
supporting plane z = 0, and matrix coordinates
\Omega
Since the cameras are calibrated, their orientations are
the conic projection (4) in each camera becomes
\Gammat]\Omega
y
x
z
z
x
y
x
y
x
z
z y1
optical centre
Figure
1. A twisted pair of reconstructions.
As the conic should be proper, both d 4 6= 0 and t 3 6= 0, which gives
Thus the only solutions are t
and\Omega
implies that there
are at most two camera centres, and the false conic is a circle of imaginary
radius i z, centred in the plane bisecting the two camera centres
In the two-image case, the improper self-inverse projective transfor-
mation
interchanges the
true\Omega
1 and the
f , according to
T\Omega
'\Omega and takes the two projection matrices P to
While the -rst camera remains -xed, the other has rotated 180 ffi about
the axis joining the two centres. This twofold ambiguity corresponds exactly
to the well-known twisted pair duality [23, 18, 27]. The geometry
of the duality is illustrated in Figure 1.
The 'twist' T represents a very strong projective deformation that
cuts the scene in half, moving the plane between the cameras to in-nity,
see
Figure
2. By considering twisted vs. non-twisted optical ray intersec-
tions, one can also show that it reverses the relative signs of the depths,
so for one of the solutions the structure will appear to be behind one
visual
optical centre
supporting
planes
potential
conics
cone
Figure
2. Intersecting the visual cones of two image conics satisfying the Kruppa
constraints generates a pair of 3D conics, corresponding to the two solutions of the
twisted pair duality.
camera, cf. [17]. To conclude, Proposition 5.1 states that any two-view
geometry has a 'twisted pair' projective involution symmetry and any
camera con-guration with three or more camera centres has a unique
projective-to-Euclidean upgrade.
5.2. Unknown focal lengths
In the case of two images and internally calibrated cameras modulo
unknown focal lengths, it is in general possible to recover Euclidean
structure. Since we know that the solutions always occur in twisted pairs
(which can be disambiguated using the positive depth constraint), it is
more relevant to characterize the motions for which there are solutions
other than the twisted pair duality. Therefore, the two-camera case will
be dealt with separately, after having derived the critical motions for
arbitrary many images.
5.2.1. Many images
If all intrinsic parameters are known except for the focal lengths, the camera matrix can be assumed to be K_i = diag(f_i, f_i, 1), which in turn implies that the image of a potential absolute conic satisfies ω_i ≃ diag(1/f_i^2, 1/f_i^2, 1). We start with potential absolute conics on Π∞.

Potential absolute conics on Π∞

Let C_f denote a 3 × 3 matrix corresponding to a false absolute conic (in locus form) on the plane at infinity. Since C_f is not the true one, C_f ≄ I. The image of C_f is, according to (4), ω_i ≃ (K_i R_i)^{-T} C_f (K_i R_i)^{-1}. Notice that criticality is independent of translation of the camera. Two cameras are said to have the same viewing direction if their optical axes are parallel or anti-parallel.
Proposition 5.2. Given Π∞ and known skew, aspect ratio and principal point, a motion is critical if and only if there is only one viewing direction.

Proof. Choose coordinates in which camera 1 has orientation R_1 = I. Suppose a motion is critical. According to (6) and (7), this implies that C_f ≃ I + λ e_3 e_3^T for some λ > −1. For camera 2, apply (7) again: R_2 C_f R_2^T must also have the form I + μ e_3 e_3^T. This implies that R_2 e_3 = ±e_3 and, in turn, that R_2 is a rotation about the optical axis, possibly composed with a 180° flip, which is equivalent to a fixed viewing direction of the camera. Conversely, suppose the viewing direction is fixed, which means that R_i e_3 = ±e_3. Then, it is not possible to disambiguate between any of the potential absolute conics in the pencil C_f(λ) ≃ I + λ e_3 e_3^T, since R_i C_f(λ) R_i^T ≃ C_f(λ).
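The converse direction is easy to verify mechanically. The following small symbolic check (an addition for illustration, not part of the original text) uses sympy to confirm that the whole pencil C_f(λ) ≃ I + λ e_3 e_3^T is invariant under rotations that preserve the viewing direction; the particular rotations chosen (an arbitrary rotation about the z-axis and a 180° flip that reverses it) are only examples.

```python
# Symbolic sanity check of the converse in Proposition 5.2 (illustrative,
# not from the original paper): if every camera keeps the same viewing
# direction (R e3 = +/- e3), the pencil C_f(lambda) = I + lambda*e3*e3^T is
# invariant, so the potential absolute conics cannot be disambiguated.
import sympy as sp

lam, th = sp.symbols('lambda theta', real=True)
e3 = sp.Matrix([0, 0, 1])
C_f = sp.eye(3) + lam * e3 * e3.T            # the pencil C_f(lambda)

# rotation about the z-axis (optical axis) by an arbitrary angle theta
Rz = sp.Matrix([[sp.cos(th), -sp.sin(th), 0],
                [sp.sin(th),  sp.cos(th), 0],
                [0,           0,          1]])
# 180-degree rotation about the x-axis reverses the viewing direction (R e3 = -e3)
Rflip = sp.diag(1, -1, -1)

for R in (Rz, Rflip, Rz * Rflip):
    assert sp.simplify(R * C_f * R.T - C_f) == sp.zeros(3, 3)
print("C_f(lambda) is invariant under all rotations that fix the viewing direction")
```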
Potential absolute conics outside Π∞

Assume we have a critical motion (R_i, t_i), i = 1, ..., m, with the false dual absolute conic Ω_f. If the supporting plane of Ω_f is Π∞, the critical motion is described by Proposition 5.2, so assume the supporting plane of Ω_f is outside Π∞. As in the proof of Proposition 5.1, one can assume without loss of generality that a Euclidean coordinate system has been chosen such that Ω_f has supporting plane z = 0, and matrix coordinates Ω_f = diag(d_1, d_2, 0, d_4). The image of Ω_f is, according to (4),

ω_i ≃ K_i R_i C_f R_i^T K_i^T,   with   C_f = diag(d_1, d_2, 0) + d_4 t_i t_i^T.    (8)

Figure 3. Two orthogonal planes, where one plane contains an ellipse and the other contains a hyperbola (the optical centres move on the critical ellipse and the critical hyperbola).
A necessary condition for degeneracy is that R_i should diagonalize C_f to the form (6), i.e. the matrix C_f must have two equal eigenvalues. As it is always possible to find an orthogonal matrix that diagonalizes a real, symmetric matrix [5], all we need to do is to find out precisely when C_f has two equal eigenvalues. Lemma A.1 in the Appendix characterizes matrices of this form.

Applying the lemma to C_f in (8), with σ_1 = d_1, σ_2 = d_2, σ_3 = 0 and ρ = d_4, results in the following cases:

(i) If d_1 ≠ d_2, then either
a. t^T e_1 = 0 and t satisfies a conic constraint in the plane x = 0, or
b. t^T e_2 = 0 and t satisfies a conic constraint in the plane y = 0.
These equations describe a motion on two planar conics for which the supporting planes are orthogonal. On the first plane, the conic is an ellipse, while on the other the conic is a hyperbola (depending on whether d_1 > d_2 or vice versa), see Figure 3.

(ii) If d_1 = d_2, then t_1 = t_2 = 0 and t_3 is arbitrary.

Notice that the second alternative in case (ii) of Lemma A.1 does not occur, since it implies t^T e_3 = 0, making C_f rank-deficient. Also, case (iii) is impossible, since σ_1 = σ_2 = σ_3 = 0 would likewise make C_f rank-deficient.
It remains to -nd the rotations that diagonalize C f . Since rotations
around the optical axis are irrelevant, only the direction of the optical
axis is signi-cant. Suppose the optical axis is parameterized by the
camera centre t and a direction d, i.e. Rg. Any point on
the axis projects to the principal point,
The direction d should equal the third row of R, which corresponds
to the eigenvector of the single eigenvalue of C f . Regarding the proof
of Lemma A.1, it is not hard to see that the eigenvectors are v '
in the two sub-cases
in (i) above. Geometrically, this means that the optical axis must
be tangent to the conic at each position, as illustrated in Figure 4(b).
Similarly in (ii), it is easy to derive that which means
that the optical axis should be tangent to the translation direction, cf.
Figure
4(c). An exceptional case is when C f has a triple eigenvalue,
because then any rotation is possible. However, according to Proposition
5.1, it occurs only for twisted pairs. To summarize, we have proven
the following.
Proposition 5.3. Given known intrinsic parameters except for focal lengths, a motion is critical w.r.t. affine reconstruction if and only if the motion consists of (i) rotations with at most two distinct centres (twisted pair ambiguity), or (ii) motion on two conics 1 (one ellipse and one hyperbola) whose supporting planes are orthogonal and where the optical axis is tangent to the conic at each position, or (iii) translation along the optical axis, with arbitrary rotations around the optical axis.

The motions are illustrated in Figure 4. In cases (i) and (ii), the ambiguity of the reconstruction is twofold, as there is only one false absolute conic, whereas in case (iii) there is a one-parameter family of potential planes at infinity (all planes z = constant). Case (iii) can be seen as a special case of the critical motion in Proposition 5.2, which also has a single viewing direction, but arbitrary translations.
5.2.2. Two images
For two cameras, projective geometry is encapsulated in the 7 degrees
of freedom in the fundamental matrix, and Euclidean geometry in the
5 degrees of freedom in the essential matrix. Hence, from two projective
images we might hope to estimate Euclidean structure plus two
additional calibration parameters. Hartley [10, 11] gave a method for
the case where the only unknown calibration parameters are the focal
lengths of the two cameras. This was later elaborated by Newsam et. al.
1 The actual critical motion is the conics minus the two points where the ellipse intersects the plane z = 0, since the image ω is non-proper at these points.

Figure 4. Critical motions for unknown focal lengths: (a) A motion with two fixed centres. (b) A planar motion on an ellipse and a hyperbola. (c) Translation along the optical axis. See Proposition 5.3. (Each panel shows the potential absolute conic and the optical centres.)
Figure 5. Critical configurations for two cameras: (a) Intersecting optical axes. (b) Orthogonal optical axis planes. See Proposition 5.4.
[28], Zeller and Faugeras [41] and Bougnoux [4]. All of these methods
are Kruppa-based. We will derive the critical motions for this case based
on the results of the previous sections.
Proposition 5.4. Given zero skew, unit aspect ratio, principal point
at the origin, but unknown focal lengths for two cameras, then a motion
(in addition to twisted pair) is critical if and only if (i) the optical axes
of the two cameras intersect or (ii) the plane containing the optical axis
of camera 1 and camera centre 2, is orthogonal to the plane containing
optical axis of camera 2 and camera centre 1.
Proof. Cf. [28]. Suppose a motion is critical. Regarding Proposition 5.2,
we see that if there is only one viewing direction, the optical axes are
parallel and intersect at in-nity, leading to (i) above. Examining the
three possibilities in Proposition 5.3, we see that the -rst one is the
twisted pair solution. The second one, either both cameras lie on the
same conic (and hence their axes are coplanar and intersect) or one lies
on the hyperbola, the other on the ellipse (in which case their optical
axes lie in orthogonal planes) leading to (ii). Conversely, given any two
cameras with intersecting or orthogonal-plane optical axes, it is possible
to -t (a one-parameter family of) conics through the camera centres,
tangential to the optical axes.
The two critical camera configurations are shown in Figure 5.

5.3. Known skew and aspect ratio

Consider the image ω of Ω_∞. Inserting the parameterization of K in (2) into ω = (K K^T)^{-1}, it turns out that ω_12 = −s/(γ f^3). Since f and γ never vanish, requiring that the skew vanishes is equivalent to ω_12 = 0. The constraint can also be expressed in envelope form using ω* = K K^T; dually,

ω*_12 ω*_33 = ω*_13 ω*_23.    (9)

If, in addition to zero skew, unit aspect ratio is required in K, it is equivalent to ω_11 = ω_22. This follows from the fact that ω_11 = 1/f^2 and ω_22 = 1/(γ^2 f^2) when the skew is zero. The constraint can also be transferred to ω*; dually,

ω*_11 ω*_33 − (ω*_13)^2 = ω*_22 ω*_33 − (ω*_23)^2.    (10)
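As an added illustration (not part of the original text), the following sympy sketch checks these constraints under the assumed upper triangular parameterization K = [[f, s, u], [0, γf, v], [0, 0, 1]]: the locus entry ω_12 and the envelope combination in (9) vanish exactly when the skew s is zero, and the combination in (10) additionally forces γ = 1.

```python
# Quick symbolic check of the zero-skew and unit-aspect-ratio constraints
# under the assumed parameterization of K (illustrative addition).
import sympy as sp

f, g = sp.symbols('f gamma', positive=True)
s, u, v = sp.symbols('s u v', real=True)
K = sp.Matrix([[f, s, u], [0, g * f, v], [0, 0, 1]])
w_dual = K * K.T                       # omega* = K K^T  (envelope form)
w = w_dual.inv()                       # omega  = (K K^T)^{-1}  (locus form)

# zero skew in locus form: omega_12 vanishes exactly when s = 0
print(sp.simplify(w[0, 1]))            # -> -s/(gamma*f**3)

# envelope form (9) of the zero-skew constraint
zs = w_dual[0, 1] * w_dual[2, 2] - w_dual[0, 2] * w_dual[1, 2]
print(sp.simplify(zs))                 # -> f*gamma*s, vanishes iff s = 0

# envelope form (10): unit aspect ratio, given zero skew
ar = (w_dual[0, 0] - w_dual[0, 2]**2) - (w_dual[1, 1] - w_dual[1, 2]**2)
print(sp.simplify(ar.subs(s, 0)))      # -> f**2*(1 - gamma**2), zero iff gamma = 1
```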
Analyzing the above constraints on ω in locus form results in the following proposition when the plane at infinity Π∞ is known.

Proposition 5.5. Given Π∞, a motion is critical with respect to zero skew and unit aspect ratio if and only if there are at most two viewing directions.

Proof. For each image we have the two auto-calibration constraints (9) and (10), with ω_i given by (7). Choose 3D coordinates in which the first camera has orientation R_1 = I. The image 1 constraints become two simple equations in the entries of C_f, so we can parameterize C_f with C_11, C_12 and C_13. Given a subsequent image 2, represent its orientation R_2 by a quaternion q, form its two auto-calibration constraints, and eliminate C_11 between them. The result factors into three terms, one of which must vanish. If the first vanishes, the motion is an optical axis rotation; if the second vanishes, it is a 180° flip about an axis orthogonal to the optical one. In both cases the viewing direction remains unchanged and no additional constraint is enforced on C_f. Finally, if the third factor vanishes, solving for C_f in terms of q gives a linear family of solutions of the form (11), spanned by the two viewing directions d_1 = e_3 and d_2 (the third row of R_2(q)), with (α, β) arbitrary parameters. Conversely, given
any potential AC C f 6' I, there is always exactly one pair of real
viewing directions that make C f critical under (11). The linear
family ff contains three rank 2 members, one for each eigenvalue
- of C f (with fi 0 =ff calculation shows that each
member can be decomposed uniquely (up to sign) into a pair
of viewing direction vectors supporting (11), but only the pair
corresponding to the middle eigenvalue is real. (Coincident eigenvalues
correspond to coincident viewing directions and can be ignored). Hence,
no potential AC C f can be critical for three or more real directions
simultaneously.
Table I. Summary of critical motions in auto-calibration.

Auto-calibration constraint              Critical motions                      Reconstruction ambiguity
Known calibration                        twisted pair duality                  projective
Unknown focal length but otherwise       (i) optical axis rotation             affine
known calibration                        (ii) motion on two planar conics      projective
                                         (iii) optical axis translation        projective
Unknown focal length (two images only)   (i) intersecting optical axes         projective
                                         (ii) orthogonal optical axis planes   projective
Zero skew and unit aspect ratio          (i) two viewing directions            affine
                                         (ii) complicated algebraic variety    projective
For potential absolute conics outside \Pi 1 things are more compli-
cated. For each image, there are two auto-calibration constraints. So
in order to single out the true absolute conic (which has 8 degrees of
freedom), at least 4 images are necessary. For a
f the polynomial
constraints in (9) and (10) determine a variety in the space of rigid
motions. We currently know of no easy geometrical interpretation of
this manifold.
It is easy to see that given a critical camera motion, the ambiguity
is not resolved by rotation around the camera's optical axis.
5.4.
Summary
A summary of the critical motions for auto-calibration under the auto-calibration
constraints studied is given in Table I. The reconstruction
ambiguity is classi-ed as projective if the plane at in-nity cannot be
uniquely recovered, and aOEne if it is possible. As mentioned earlier, the
twisted pair duality is not a true critical motion, since the positive-depth
constraint can always resolve the ambiguity.
6. Particular Motions
Some critical motions occur frequently in practice. In this section, a
selection of them is analyzed in more detail.
6.1. Pure rotation
In the case of a stationary camera performing arbitrary rotations, no 3D
reconstruction is possible. There always exist many potential absolute
conics outside \Pi 1 .
However, it is still possible to recover the internal camera calibration,
provided there are no potential absolute conics on \Pi 1 , cf. [37]. Proposition
5.2 and Proposition 5.5, regarding critical motions and potential
ACs on \Pi 1 tells us when such auto-calibration is possible for a purely
rotating camera.
6.2. Pure translation
If a sequence of movements only consists of arbitrary translations and
no rotations, all proper, virtual conics on \Pi 1 are potential absolute
conics. Still, one could hope to recover the plane at in-nity correctly,
and thus get an aOEne reconstruction.
Proposition 6.1. Let (t
be a general sequence of translations,
where m is suOEciently large. Then, the motion is
(i) always critical w.r.t. aOEne reconstruction under the constraints zero
skew and unit aspect ratio.
(ii) not critical w.r.t. aOEne reconstruction under the constraints zero
skew, unit aspect ratio and vanishing principal point.
Proof. (i) We need to show that there exists a potential
DAC\Omega
f outside
which is valid for all
. Choose a coordinate system such that
\Theta
I
. Then for
instance\Omega
is a potential DAC
(multiply P
i\Omega
i to get ! and check that it ful-lls (9) and (10)). (ii)
follows directly from Proposition 5.3.
Note that translating only along the optical axis in case (ii) above results
in a critical motion.
6.3. Parallel axis rotations
Sequences of rotations around parallel axes with arbitrary translations
are interesting in several aspects. They occur frequently in practice and
are one of the major degeneracies for auto-calibration with constant
intrinsic parameters [36, 43]. See Figure 6.
It follows directly from Proposition 5.5 that given zero skew, unit
aspect ratio and general rotation angles, the -xed-axis motion is not
critical unless it is around the optical axis. If we further add the vanishing principal point constraint, the optical axis remains critical according to Proposition 5.2. If we know only that the skew vanishes, we have the following proposition.

Figure 6. Rotations around the vertical axes with arbitrary translations.
Proposition 6.2. Let (R
be a general motion whose rotations
are all about parallel axes, where R suOEciently large.
Given \Pi 1 , the motion is critical w.r.t. zero skew if and only if the
rotation is around one of the following axes:
or (1; \Gamma1; 0),
where each denotes an arbitrary real number.
Proof. Let C f denote a false AC on \Pi 1 . The zero skew constraint in
using the parameterization in (7) gives C
arbitrary rotation around a -xed axis (q can be parameterized
by R. Inserting this into the zero skew constraint in
yields a polynomial in R[-]. Since - can be arbitrary all coeOEcients of
the polynomial must vanish. The solutions to the system of vanishing
coeOEcients are the ones given above.
Some of these critical axes may be resolved by requiring that the
camera calibration should be constant. In [37], it is shown that parallel
axis rotations under constant intrinsic parameters are always critical
and give rise to the following pencil of potential absolute conics:
Combining constant intrinsic parameters, and some a priori known
values of the intrinsic parameters, some of the critical axes are still
critical.
Corollary 6.1. Let (R
be a general motion with parallel axis
rotations, where R suOEciently large. Given \Pi 1 , and
constant intrinsic parameters, the following axes are the only ones still
critical:
(ii) (0; 0; 1) w.r.t. zero skew and unit aspect ratio,
w.r.t. an internally calibrated camera except for focal length.
Proof. (i) Using the potential ACs in (12) in the proof of Proposition
6.2, one -nds that the only critical axes remaining under the
zero skew constraint are (0; ; ) and ( ; 0; ). (ii) and (iii) are proved
analogously.
7. Experiments
In practice, a motion is never exactly degenerate due to measurement
noise and modeling discrepancies. However, if the motion is close to
a critical manifold it is likely that the reconstructed parameters will
be inaccurately estimated. To illustrate the typical eoeects of critical
motions, we have included some simple synthetic experiments for case
of two cameras with unknown focal lengths but other intrinsic parameters
known. We focus on the question of how far from critical the two
cameras must be to give reasonable estimates of focal length and 3D
Euclidean structure [20]. The experimental setup is as follows: two unit
focal length perspective cameras view 25 points distributed uniformly
within the unit sphere. The camera centres are placed at (\Gamma2; \Gamma2;
and (2; \Gamma2; 0) and their optical axes intersect at the origin, similar to
the setup in Figure 5(a). Independent Gaussian noise of 1 pixel standard
deviation is added to each image point in the 512 \Theta 512 images.
In the experiment, the elevation angles are varied, upwards for the
left camera and downwards for the right one, so that their optical axes
are skewed and no longer meet.

Figure 7. Relative errors vs. camera elevation for two cameras (relative focal length error, and relative 3D point error for the unknown-focal and calibrated bundle adjustments, plotted against elevation angle in degrees).

For each pose, the projective structure and the fundamental matrix are estimated by a projective bundle adjustment
that minimises the image distance between the measured and
reprojected points [3]. Then, the focal lengths are computed analytically
with Bougnoux' method [4]. For comparison, a calibrated bundle
adjustment with known focal lengths is also applied to the same data.
The resulting 3D error is calculated by Euclidean alignment of the true
and reconstructed point sets.
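The paper does not spell out how this Euclidean alignment is performed; one standard choice, sketched below as an added illustration (the function name and the use of a least-squares similarity transform are assumptions), aligns the reconstructed points to the ground truth and reports the RMS residual.

```python
# Least-squares alignment of a reconstructed 3D point set to the ground truth,
# followed by an RMS error (illustrative; the paper's exact procedure may differ).
import numpy as np

def align_and_rms(X_true, X_rec):
    """X_true, X_rec: (N, 3) arrays of corresponding 3D points."""
    mu_t, mu_r = X_true.mean(0), X_rec.mean(0)
    A, B = X_true - mu_t, X_rec - mu_r
    U, S, Vt = np.linalg.svd(B.T @ A)                      # cross-covariance (up to 1/N)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])    # guard against reflections
    R = (U @ D @ Vt).T                                     # rotation taking B onto A
    s = np.trace(np.diag(S) @ D) / np.sum(B ** 2)          # least-squares scale
    X_aligned = s * (X_rec - mu_r) @ R.T + mu_t
    return np.sqrt(np.mean(np.sum((X_aligned - X_true) ** 2, axis=1)))
```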
Figure
7 shows the resulting root mean square errors over 100 trials
as a function of elevation angle. At zero elevation, the two optical
axes intersect at the origin. This is a critical con-guration according
to Proposition 5.4. A second critical con-guration occurs when the
epipolar planes of the optical axes become orthogonal at around 35 ffi
elevation. Both of these criticalities are clearly visible in both graphs.
For geometries more than about 5-10 ffi from criticality, the focal lengths
can be recovered quite accurately and the resulting Euclidean 3D structure
is very similar to the optimal 3D structure obtained with known
calibration.
8. Conclusion

In this paper, the critical motions in auto-calibration under several auto-calibration constraints have been derived. The various constraints on the intrinsic parameters have been expressed as subgroup conditions on the 3 × 3 upper triangular camera matrices. With this type of condition, we showed that the critical motions are independent of the specific values of the intrinsic parameters.

It is important to be aware of the critical motions when trying to auto-calibrate a camera. Additional scene or motion constraints may help to resolve the ambiguity, but clearly the best way to avoid degeneracies is to use motions that are far from critical. Some synthetic experiments have been performed that give some practical insight into the numerical conditioning of near-critical and critical stereo configurations.
Acknowledgments
This work was supported by the European Union under Esprit project
LTR-21914 Cumuli. We would like to thank Sven Spanne for constructing
the proof of Lemma A.1.
Appendix
Lemma A.1. Let A be a real, symmetric 3 \Theta 3 matrix of the form
real
3 vector and ae a non-zero real scalar. Let oe 1 ,oe 2 and oe 3 be given real
scalars. Then, necessary and suOEcient conditions on (t,ae) for A to have
two equal eigenvalues can be divided into three cases:
for at least one i (where
or 3). Furthermore, ae can take the values:
for any i for which t T e
(ii) If oe
a. t T e
b. t T e
(iii) If oe arbitrary.
Proof. It follows from the Spectral Theorem [5] that if A is real and
symmetric with two equal eigenvalues -, then there is a third eigenvector
v and a scalar - such that
corresponding to v is -). This gives
Multiplying this matrix equation with e 1 , e 2 and e 3 , results in three
vector equations,
To prove (i), assume oe 1 6= oe 2 6= oe 3 . The orthogonal bases e 1 , e 2 and
e 3 are linearly independent and cannot all be linear combinations of t
and v, so one of oe must vanish, and thereby exactly one. Suppose
If one of the coeOEcients is non-zero, then t and v would be linearly
dependent. However, this is impossible because e 2 and e 3 are linearly
independent and oe Analogously, ae 6= 0 because
otherwise e 2 and e 3 would be linearly dependent according to (13).
Therefore t T e
If t is orthogonal to e 1 and calculations yield that
ae must be chosen as
which is also suOEcient.
When two or three of oe i are equal, similar arguments
can be used to deduce (ii) and (iii).
--R
Close Range Photogrammetry and Machine Vision.
Matrices and Linear Transformations.
Matrix Computation.
Handbuch der Physiologischen Optik.
Theory of Reconstruction from Image Motion.
Algebraic Projective Geometry.
Analytical Quadrics.
'Vision 3D Non Calibr#e: Contributions # la Reconstruction Projective et #tude des Mouvements Critiques pour l'Auto-Calibrage'
--TR
Multiple Interpretations of the Shape and Motion of Objects from Two Perspective Images
A theory of self-calibration of a moving camera
Three-dimensional computer vision
Reconstruction from Calibrated CamerasMYAMPERSANDmdash;A New Proof of the Kruppa-Demazure Theorem
Estimation of Relative Camera Positions for Uncalibrated Cameras
Camera Self-Calibration
What can be seen in three dimensions with an uncalibrated stereo rig
Self-Calibration from Image Triplets
Euclidean 3D Reconstruction from Image Sequences with Variable Focal Lenghts
Minimal Conditions on Intrinsic Parameters for Euclidean Reconstruction
Autocalibration and the absolute quadric
Euclidean Reconstruction from Image Sequences with Varying and Unknown Focal Length and Principal Point
Critical Motion Sequences for Monocular Self-Calibration and Uncalibrated Euclidean Reconstruction
Metric calibration of a stereo rig
The Modulus Constraint
Euclidean Reconstruction from Constant Intrinsic Parameters
Ambiguity in Reconstruction From Images of Six Points
From Projective to Euclidean Space Under any Practical Situation, a Criticism of Self-Calibration
Self-Calibration and Metric Reconstruction in spite of Varying and Unknown Internal Camera Parameters
--CTR
Antonio Valds , Jos Ignacio Ronda, Camera Autocalibration and the Calibration Pencil, Journal of Mathematical Imaging and Vision, v.23 n.2, p.167-174, September 2005
Gang Qian , Rama Chellappa, Bayesian self-calibration of a moving camera, Computer Vision and Image Understanding, v.95 n.3, p.287-316, September 2004
P. Sturm , Z. L. Cheng , P. C. Y. Chen , A. N. Poo, Focal length calibration from two views: method and analysis of singular cases, Computer Vision and Image Understanding, v.99 n.1, p.58-95, July 2005
Pr Hammarstedt , Fredrik Kahl , Anders Heyden, Affine Reconstruction from Translational Motion under Various Autocalibration Constraints, Journal of Mathematical Imaging and Vision, v.24 n.2, p.245-257, March 2006
Toms Svoboda , Daniel Martinec , Toms Pajdla, A convenient multicamera self-calibration for virtual environments, Presence: Teleoperators and Virtual Environments, v.14 n.4, p.407-422, August 2005
Kalle strm , Fredrik Kahl, Ambiguous Configurations for the 1D Structure and Motion Problem, Journal of Mathematical Imaging and Vision, v.18 n.2, p.191-203, March
Lourdes Agapito , E. Hayman , I. Reid, Self-Calibration of Rotating and Zooming Cameras, International Journal of Computer Vision, v.45 n.2, p.107-127, November 2001
Loong-Fah Cheong , Chin-Hwee Peh, Depth distortion under calibration uncertainty, Computer Vision and Image Understanding, v.93 n.3, p.221-244, March 2004
Xiaochun Cao , Jiangjian Xiao , Hassan Foroosh , Mubarak Shah, Self-calibration from turn-table sequences in presence of zoom and focus, Computer Vision and Image Understanding, v.102 n.3, p.227-237, June 2006
Han , Takeo Kanade, Multiple Motion Scene Reconstruction with Uncalibrated Cameras, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.7, p.884-894, July
Marta Wilczkowiak , Peter Sturm , Edmond Boyer, Using Geometric Constraints through Parallelepipeds for Calibration and 3D Modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.2, p.194-207, February 2005 | critical motions;absolute conic;calibration;structure and motion;projective geometry;auto-calibration;3D reconstruction |
361263 | Tolerance to Multiple Transient Faults for Aperiodic Tasks in Hard Real-Time Systems. | AbstractReal-time systems are being increasingly used in several applications which are time-critical in nature. Fault tolerance is an essential requirement of such systems, due to the catastrophic consequences of not tolerating faults. In this paper, we study a scheme that guarantees the timely recovery from multiple faults within hard real-time constraints in uniprocessor systems. Assuming earliest-deadline-first scheduling (EDF) for aperiodic preemptive tasks, we develop a necessary and sufficient feasibility-check algorithm for fault-tolerant scheduling with complexity $O(n^2 \cdot k)$, where $n$ is the number of tasks to be scheduled and $k$ is the maximum number of faults to be tolerated. | Introduction
The interest in embedded systems has been growing steadily in the recent past, specially those
ples include autopilot systems, satellite and launch vehicle control, as well as robots, whether in
collaborating teams or not. For some of these systems, termed hard real-time systems (HRTSs), the
consequences of missing a deadline may be catastrophic. The ability to tolerate faults in HRTSs is
crucial, since a task can potentially miss a deadline when faults occur. In case of a fault, a deadline
can be missed if the time taken for recovery from faults is not taken into account during the phase
that tasks are submitted/accepted to the system. Clearly, accounting for recovery from faults is an
essential requirement of HRTSs.
When dealing with such HRTSs, permanent faults can be tolerated by using hot-standby spares
[KS86], or they can be masked by modular redundancy techniques [Pra86]. In addition to permanent
faults, tolerance to transient faults is very important, since it has been shown to occur much more
frequently than permanent faults [IR86, IRH86, CMS82]. In a study, an orbiting satellite containing
a microelectronics test system was used to measure error rates in various semiconductor devices
including microprocessor systems [CMR92]. The number of errors, caused by protons and cosmic
ray ions, mostly ranged between 1 and 15 in 15-minute intervals, and was measured to be as high
as 35 in such intervals. More examples of such safety critical applications can be found in [LH94].
Transient faults can be dealt with through temporal redundancy, that is, allowing extra time (slack)
in the schedule to re-execute the task or to execute a recovery block [HLMSR74].
The problem solved in this paper is as follows. Given a set of n aperiodic tasks,
we seek to determine if each task in the set T is able to complete execution before its deadline
under EDF scheduling even if the system has to recover from (at most) k faults. We consider a
uniprocessor system and assume that each task may be subjected to multiple transient faults.
A simple solution would be to check the feasibility of each of the schedules generated by the
possible combination of faults using the approach described in [LLMM99] for each schedule.
The high complexity of this scheme provides the impetus for searching for a more efficient solution.
The solution presented in this paper develops an optimal (necessary and sufficient) feasibility check
that runs in O(n 2 \Delta time in the worst case.
Although we consider aperiodic tasks, we note that the technique presented in this paper can be
used to verify the fault-tolerance capabilities of a set of periodic tasks by considering each instance
of a periodic task as an aperiodic task within the Least Common Multiple of the periods of all the
periodic tasks. Moreover, scheduling aperiodic tasks is the basis for scheduling periodic tasks in
frame-based systems, where a set of tasks (usually having precedence constraints) is invoked at regular
time intervals. This type of systems is commonly used in practice because of its simplicity. For
example, in tracking/collision avoidance applications, motion detection, recognition/verification,
trajectory estimation and computation of time to contact are usually component sub-tasks within
a given frame (period) [CBM93]. Similarly, a real-time image magnification task might go through
the steps of non-linear image interpolation, contrast enhancement, noise suppression and image
extrapolation during each period [NC96]. Even though these are periodic tasks, the system period
is unique and therefore the scheduling of the each instance (corresponding in our nomenclature to
an "aperiodic" task) can be done within a specific time interval.
The rest of this paper is organized as follows. In Section 2 we present the model and notation
for the aperiodic, fault-tolerant scheduling problem. In Section 3 we introduce an auxiliary function
that will aid in the presentation of our solution. In Section 4 we describe the feasibility tests for
a set of tasks under a specific fault pattern and generalize it in Section 5 for any fault pattern,
examining the worst case behavior with respect to k faults. In Section 6, we survey some related
work and, in Section 7, we finalize the paper with concluding remarks and directions for future
work.
Model and Notation
We consider a uniprocessor system, to which we submit a set T of n tasks, T = {τ_1, τ_2, ..., τ_n}. A task τ_i is modeled by a tuple τ_i = (R_i, D_i, C_i), where R_i is the ready time (earliest start time of the task), D_i is the deadline, and C_i is the maximum computation time (also called worst case execution time). The set of tasks that become ready at a given time t is denoted by RS(T, t). That is, RS(T, t) = {τ_i : R_i = t}.

We assume EDF schedules with ties in deadlines broken arbitrarily. The schedule of T is described by the function EDF(T, t), which gives the task that EDF schedules in the time slot between t and t + 1, and equals ε if EDF does not schedule any task between t and t + 1. We will use EDF(T) to refer to the EDF schedule of T.

We define e_i to be the time at which task τ_i completes execution in EDF(T), and we define the function slack(t_1, t_2) to be the number of free slots between t_1 and t_2, that is, the number of slots for which EDF(T, t) = ε, t_1 ≤ t < t_2 (excluding the slot that starts at t_2). EDF(T) is said to be feasible if e_i ≤ D_i for all i = 1, ..., n.
It is assumed that faults can be detected at the end of the execution of each task. The time
required by the fault detection mechanism can be added to the worst case computation time C i of
the task and does not hinder the timeliness of the system. Many mechanisms have been proposed
for fault detection at the user level, the operating system level, and the hardware level. At the user
level, a common technique is to use consistency or sanity checks, which are procedures supplied
by the user, to verify the correctness of the results [HA84, YF92]. For example, using checksums,
checking the range of the results or substituting a result back into the original equations can be
used to detect a transient error.
Many mechanisms that exist in operating systems and computer hardware may be used for
error detection and for triggering recovery. Examples are the detection of illegal opcode (caused by
bus error or memory corruption), memory range violation, arithmetic exceptions and various time-out
mechanisms. Hardware duplication of resources can also be used for detecting faults through
comparison of results. It should be noted, however, that while each of the mechanisms described
above is designed for detecting specific types of faults, it has been long recognized that it is not
possible for a fault detection mechanism to accomplish a perfect coverage over arbitrary types of
faults.
When a fault is detected, the system enters a recovery mode where some recovery action must be
performed before the task's deadline. We assume that a task τ_i recovers from a fault by executing a recovery block [HLMSR74, LC88], τ_{i,1}, at the same priority as τ_i. A fault that occurs during the execution of τ_{i,1} is detected at the end of τ_{i,1} and is recovered from by invoking a second recovery block, τ_{i,2}, and so on. It is assumed that the maximum time for a recovery block of τ_i to execute is V_i. The recovery blocks for each task may have a different execution time from the task itself;
in other words, the recovery is not restricted to re-execution of the task. Recovery blocks can be
used for avoiding common design bugs in code, for providing a less accurate result in view of the
limited time available for recovery, or for loading a "safe" state onto memory from some stable
source (across the network or from shielded memory).
We shall denote a pattern of faults over T as a set F = {f_1, ..., f_n}, such that f_i is the number of times the task τ_i ∈ T or its recovery blocks will fail before successful completion. We use EDF_F(T) to denote the EDF schedule of T under the fault pattern F, that is, when τ_i is forced to execute f_i recovery blocks. EDF_F(T) is said to be feasible if, for all i = 1, ..., n, τ_i and its recovery blocks complete by D_i. Note that EDF_F(T) cannot be feasible if R_i + C_i + f_i · V_i > D_i for any i.
Given a task set T and a specific fault pattern F, we define two functions. The first function, W(T, t), defines the amount of work (execution time) that remains to be completed at time t in EDF(T). This work is generated by the tasks that became ready at or before time t, that is, by the tasks in {τ_i : R_i ≤ t}. Specifically, at a time t, any positive amount of work in W decreases by one during the period between t and t + 1, while when a task becomes ready at t, the work increases by the computation time of this task; here the −̇ operator (used throughout) is defined as a −̇ b = max(a − b, 0).

The second function, W_F(T, t), is defined in Equation (1) in a similar way, except that we include time for the recovery of failed tasks at the point they would have completed in the fault-free schedule; that is, at time e_i the extra work f_i · V_i is added.
The two functions defined above will be used to reason about the extra work needed to recover
from faults. Note that although task - i may complete at a time different than e i in EDF F (T ), the
function W F has the important property that it is equal to zero only at the beginning of an idle
time slot in EDF F (T ). This, and other properties of the two functions defined above are given
next.
Property 1: W(T, t) = 0 if and only if there is no work to be done at time t in EDF(T), which means that any task with R_i ≤ t finishes at or before time t in the fault-free case.

Property 2: W_F(T, t) = 0 if and only if there is no work to be done at time t in EDF_F(T), which means that any task with R_i ≤ t finishes at or before time t when the tasks are subject to the fault pattern F.

Property 3: W_F(T, t) ≥ W(T, t) for all t. That is, the amount of work incurred when faults are present is never smaller than the amount of work in the fault-free case.

Property 4: EDF(T, e_i − 1) ≠ ε for all i. That is, the slot before the end of a task is never idle.

The above four properties follow directly from the definitions of W and W_F.
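As an added illustration (not part of the original text), the two work functions and their difference can be computed slot by slot. The indexing conventions below — arrivals counted at time t, one unit of work consumed per busy slot, and the recovery work f_i · V_i added at e_i — are assumptions consistent with the description above.

```python
# Slot-based transcription of W, W_F and delta = W_F -. W (illustrative; the
# exact indexing conventions of Equations (1)-(2) are assumptions).  A task is
# a tuple (R, D, C, V); faults[i] is f_i; e[i] is the completion time of
# tau_i in the fault-free EDF schedule.
def monus(a, b):                                  # the  -.  operator
    return max(a - b, 0)

def work_functions(tasks, faults, e, horizon):
    """Return a list of (t, W, W_F, delta) for t = 0 .. horizon."""
    W = WF = 0
    rows = []
    for t in range(horizon + 1):
        if t > 0:                                 # one unit executed in slot [t-1, t)
            W, WF = monus(W, 1), monus(WF, 1)
        arrivals = sum(C for (R, _, C, _) in tasks if R == t)
        recovery = sum(faults[i] * tasks[i][3]    # f_i * V_i added at e_i
                       for i in range(len(tasks)) if e[i] == t)
        W += arrivals
        WF += arrivals + recovery
        rows.append((t, W, WF, monus(WF, W)))
    return rows
```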
3 The δ-Function

In order to avoid explicitly deriving the EDF schedule in the presence of faults, we define a function, δ, which loosely corresponds to the "extra" work induced by a certain fault pattern, F. Intuitively, δ(T, t, F) is the amount of unfinished "extra" work that has been induced by the fault pattern F at time t. In other words, it is the work needed above and beyond what is required in the fault-free schedule for T. The idle time in the fault-free EDF schedule is used to do this extra work.

The δ-function will play an important role in the process of checking if each task meets its deadline in EDF_F(T). Following is a method for computing δ directly from the fault-free EDF schedule of T and the fault pattern F:

δ(T, 0, F) = 0, and for t > 0:
δ(T, t, F) = (δ(T, t − 1, F) −̇ 1) + Σ_{i : e_i = t} f_i · V_i   if EDF(T, t − 1) = ε,
δ(T, t, F) = δ(T, t − 1, F) + Σ_{i : e_i = t} f_i · V_i          otherwise.    (2)

In order to show that the above form for δ(T, t, F) is equivalent to W_F(T, t) −̇ W(T, t), one considers the four different cases that arise in (2): the initial time t = 0; a time t at which some task ends (t = e_j for some j); a time t following an idle slot of EDF(T) (EDF(T, t − 1) = ε); and a time t following a busy slot in which no task ends. Using Properties 1–4, in each case the value given by (2) coincides with W_F(T, t) −̇ W(T, t).

For illustration, Figure 1 shows an example of a task set and the corresponding values of the δ-function for a specific F. In this example, we consider the case in which only τ_1 and τ_3 may be subject to a fault. Note that the value of δ decreases when EDF(T) is idle and increases at the end of each task that is indicated as faulty in F.
Figure 1: An example task set (with columns R, C, D, V), its fault-free EDF schedule, and the δ values for a fault pattern F in which only τ_1 and τ_3 fail.

As we have mentioned above, the δ function is an abstraction that represents the extra work to be performed for recovery. This extra work reduces to zero when all ready tasks complete execution and recovery, as demonstrated by the following theorem.
Theorem 1: If δ(T, t, F) = 0 and δ(T, t − 1, F) > 0, then in both EDF(T) and EDF_F(T), any task with R_i ≤ t finishes at or before time t.

Proof:
By Equation (2), this decrease in the value of the δ-function is only possible if EDF(T, t − 1) = ε, which from Property 1 leads to W(T, t) = 0. Equation (1) then gives W_F(T, t) = 0, and the proof follows from Properties 1 and 2 ∎
4 Feasibility Test for a Task Set Under a Specific Fault Pattern
Given a task set, T, and a fault pattern, F, we now present a method for checking whether the lowest priority task, denoted by τ_ℓ ∈ T, completes by its deadline in EDF_F(T).

Theorem 2: Given a task set, T, and a fault pattern, F, the lowest priority task, τ_ℓ, in T completes by D_ℓ in EDF_F(T) if and only if δ(T, t, F) = 0 for some t, e_ℓ ≤ t ≤ D_ℓ.

Proof: To prove the if part, assume that t_0 is the smallest value such that e_ℓ ≤ t_0 ≤ D_ℓ and δ(T, t_0, F) = 0. If δ(T, t, F) = 0 for all t ≤ t_0, then EDF(T) and EDF_F(T) are identical up to t_0, which implies that τ_ℓ completes by e_ℓ ≤ D_ℓ in both schedules. If, however, δ is positive at some time before t_0, let t̄ be the latest time before t_0 such that δ(T, t̄, F) > 0. Note that t̄ + 1 is the first value after t̄ at which δ(T, ·, F) = 0 (by the definition of t̄). Hence, by Theorem 1, all tasks that are ready before t̄ + 1 finish execution by t̄ + 1 in both EDF(T) and EDF_F(T). Moreover, δ(T, t, F) = 0 for t̄ < t ≤ t_0, which means that no recovery work is pending in that interval, and thus EDF(T) is identical to EDF_F(T) in that period. But τ_ℓ completes in EDF(T) at e_ℓ, which means that it also completes by t_0 ≤ D_ℓ in EDF_F(T).

We prove the only if part by contradiction: assume that δ(T, t, F) > 0 for all t with e_ℓ ≤ t ≤ D_ℓ, and that τ_ℓ finishes in EDF_F(T) at t̄ for some e_ℓ ≤ t̄ ≤ D_ℓ. The fact that the lowest priority task, τ_ℓ, executes between time t̄ − 1 and t̄ means that no other task is available for execution at t̄ − 1, and thus W_F(T, t̄) = 0. Since δ(T, t̄, F) = W_F(T, t̄) −̇ W(T, t̄) ≤ W_F(T, t̄) = 0, this contradicts the assumption that δ(T, t̄, F) > 0 ∎
The next corollaries provide conditions for the feasibility of EDF_F(T) for the entire task set, T.

Corollary 1: A necessary and sufficient condition for the feasibility of EDF_F(T) for a given T and a given F can be obtained by applying Theorem 2 to the n task sets T_j, j = 1, ..., n, where T_j contains the j highest priority tasks in T.

Proof: The proof is by induction. The base case is trivial, when j = 1, since there is only a single task. For the induction step, assume that EDF_F(T_j) is feasible and consider T_{j+1} = T_j ∪ {τ_ℓ}, where τ_ℓ has a lower priority than any task in T_j. In EDF_F(T_{j+1}), all tasks in T_j will finish at exactly the same time as in EDF_F(T_j), since τ_ℓ has the lowest priority. Hence, the necessary and sufficient condition for the feasibility of EDF_F(T_{j+1}) is equivalent to the necessary and sufficient condition for the completion of τ_ℓ by D_ℓ ∎

Corollary 2: A sufficient (but not necessary) condition for the feasibility of EDF_F(T) for a given T and a given F is that, for each task τ_i, δ(T, t, F) = 0 for some t with e_i ≤ t ≤ D_i.

Proof: Note that the proof of the only if part in Theorem 2 relies on the property that τ_ℓ is the lowest priority task, which, in EDF, means the task with the latest deadline. The if part of the theorem, however, is true even if τ_ℓ is not the lowest priority task. Hence, any τ_i ∈ T completes by D_i in EDF_F(T) if δ(T, t, F) = 0 for some t with e_i ≤ t ≤ D_i, which proves the corollary ∎
Figure
2: The fault-tolerant schedule for the task set in Figure 1
We clarify the conditions of the above corollaries by examples. First, we show that the condition
given in Corollary 2 is not necessary for the feasibility of EDF F (T ). That is, we show that, for
any given - i , it is not necessary that in order for - i to finish
by D i in EDF F (T ). This can be seen from the example task set and fault pattern shown in Figure
1. The value of ffi(T ; t; F) is not zero between e 7.
Yet, as shown in Figure 2, - 1 and - 2 will finish by their deadlines in EDF F (T ). In other words, the
condition that stated in Theorem 2, is necessary and sufficient
for the feasibility of only the lowest priority task in EDF F (T ) (task - 4 in the above example).
Next, we show that, as stated in Corollary 1, we have to repeatedly apply Theorem 2 to all
task sets T j , to obtain a sufficient condition for the feasibility of the entire task set.
In other words, it is not sufficient to apply Theorem 2 only to T . This can be demonstrated by
modifying the example of Figure 1 such that D 7. Clearly, this change in D 3 may still result in
the same EDF schedule for T and thus will not change the calculation of Although the
application of Theorem 2 guarantees that - 4 will finish by its deadline in EDF F (T ), the recovery
of - 3 will not finish by D as seen in Figure 2.
Assume, without loss of generality, that the tasks in a given task set, T, are numbered such that e_1 ≤ e_2 ≤ ... ≤ e_n. Define δ_i = δ(T, e_i, F) to be the extra work that still needs to be done, due to a fault pattern F, at time e_i, i = 1, ..., n. Noting that δ(T, t, F) increases only at the completion times e_i, Equation (2) can be rewritten using the slack() function defined in Section 2 as follows:

δ_i = (δ_{i−1} −̇ slack(e_{i−1}, e_i)) + f_i · V_i,    (3)

where

δ_0 = 0 and e_0 = 0.    (4)

The application of Theorem 2 for a given T and F requires the simulation of EDF(T) and the computation of e_1, ..., e_n, as well as slack(e_{i−1}, e_i), i = 1, ..., n. The values of δ_i computed from Equations (3) and (4) can then be used to check the condition of the theorem.
above procedure takes O(n) time, except for the simulation of the EDF schedule. Such simulation
may be efficiently performed by using a heap which keeps the tasks sorted by deadlines. Each task
is inserted into the heap when it is ready and removed from the heap when it completes execution.
Since each insertion into and deletion from the heap takes O(logn) time, the total simulation of
EDF takes O(nlogn) time. Thus, the time complexity of the entire procedure is O(nlogn).
Hence, given a task set, T , and a specific fault pattern, F , a sufficient and necessary condition
for the feasibility of EDF F (T ) can be computed using Corollary 1 in O(n 2 logn) time steps. This
is less efficient than simulating EDF F (T ) directly, which can be done in O(nlogn) steps. However,
as will be described in the next section, simulating EDF (T ) only is extremely advantageous when
we consider arbitrary fault patterns rather than a specific fault pattern.
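The following Python sketch (an addition) puts the pieces of this section together for one given fault pattern: a heap-based simulation of the fault-free EDF schedule, the slack() function, the δ_i recursion of Equation (3), and the Theorem 2 check for the lowest-priority task evaluated at D_ℓ (the same "computational convenience" used by the algorithms of the next section). The task encoding (R_i, D_i, C_i, V_i) and the slot-based time model are assumptions.

```python
# Section-4 machinery for one given fault pattern F (illustrative sketch).
import heapq

def simulate_edf(tasks):
    """Fault-free EDF: return completion times e[i] and a per-slot busy list."""
    n = len(tasks)
    remaining = [C for (_, _, C, _) in tasks]
    released = [False] * n
    e, busy, heap, t = [0] * n, [], [], 0
    while any(r > 0 for r in remaining):
        for i in range(n):                        # release newly ready tasks
            if not released[i] and tasks[i][0] <= t:
                released[i] = True
                heapq.heappush(heap, (tasks[i][1], i))   # keyed by deadline
        if heap:
            _, i = heap[0]                        # earliest-deadline ready task
            remaining[i] -= 1
            if remaining[i] == 0:
                heapq.heappop(heap)
                e[i] = t + 1                      # tau_i completes at e_i = t + 1
            busy.append(True)
        else:
            busy.append(False)                    # idle slot
        t += 1
    return e, busy

def slack(busy, t1, t2):
    """Number of idle slots of the fault-free schedule in [t1, t2)."""
    inside = sum(1 for t in range(t1, min(t2, len(busy))) if not busy[t])
    return inside + max(0, t2 - max(t1, len(busy)))      # after the schedule, all slots are idle

def lowest_priority_ok(tasks, faults):
    """Theorem 2 check for the lowest-priority (latest-deadline) task under F,
    evaluated at D_ell as in step 4 of Algorithm 'Exact'."""
    e, busy = simulate_edf(tasks)
    order = sorted(range(len(tasks)), key=lambda i: e[i])    # so that e_1 <= ... <= e_n
    delta, prev = 0, 0
    for i in order:                                          # Equation (3)
        delta = max(delta - slack(busy, prev, e[i]), 0) + faults[i] * tasks[i][3]
        prev = e[i]
    D_ell = max(D for (_, D, _, _) in tasks)                 # deadline of the lowest-priority task
    return max(delta - slack(busy, prev, D_ell), 0) == 0
```

Per Corollary 1, the same check would be repeated for every prefix T_j to decide feasibility of the whole task set.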
5 Feasibility Test for a Task Set Under Any Fault Pattern
We now turn our attention to determining the feasibility of a given task set for any fault pattern with k or less faults. We use F_w to denote a fault pattern with exactly w faults. That is, Σ_{i=1}^{n} f_i = w.

We also define the function δ^w(T, t), which represents the maximum extra work at time t induced by exactly w faults that occurred at or before time t. In other words, it is the extra work induced by the worst-case fault pattern of w faults:

δ^w(T, t) = max_{F_w} δ(T, t, F_w).    (5)

Note that, although the use of F_w in the above definition does not specify that all w faults will occur at or before time t, the value of δ^w(T, t) will reach its maximum when all possible w faults occur by time t.

Theorem 3: For a given task set, T, a given number of faults w, and any fault pattern, F_w, the lowest priority task, τ_ℓ, in T completes by D_ℓ in EDF_{F_w}(T) if and only if δ^w(T, t) = 0 for some t, e_ℓ ≤ t ≤ D_ℓ.

Proof: This theorem is an extension of Theorem 2 and can be proved in a similar manner ∎

In order to compute δ^w efficiently, we define the values δ^w_i = δ^w(T, e_i), for i = 1, ..., n and w = 0, ..., k, and use them to compute δ^w(T, t) for arbitrary t, which is directly derived from Equation (3).
The value of each δ^w_i is defined as the maximum extra work at e_i induced by any fault pattern with w faults. This maximum value can be obtained by considering the worst scenario in each of the following two cases:

• all w faults have already occurred in τ_1, ..., τ_{i−1}. Hence, the maximum extra work at e_i is the maximum extra work at e_{i−1} decremented by the slack available between e_{i−1} and e_i.

• w − 1 faults have already occurred, and an additional fault occurs in τ_i (or in one of its recovery blocks). In this case, the maximum extra work at e_i is increased by V_i.

Hence, noting that e_1, ..., e_n and the function slack() are derived from EDF(T) and do not depend on any particular fault pattern, the values of δ^w_i can be computed for i = 1, ..., n and w = 0, ..., k using the following recursive formula:

δ^w_i = max( δ^w_{i−1} −̇ slack(e_{i−1}, e_i),  δ^{w−1}_i + V_i ),    (6)

where δ^w_0 = 0 and the second term is omitted when w = 0.
The computations in Equation (6) can be graphically represented using a graph, G, with n columns and k + 1 rows, where each row corresponds to a particular number of faults, w, and each column corresponds to a particular e_i (see Figure 3b). The node corresponding to row w and column e_i will be denoted by N^w_i. A vertical edge between N^w_i and N^{w+1}_i represents the execution of one recovery block of task τ_i, and thus is labeled by V_i. A horizontal edge between N^w_{i−1} and N^w_i means that no faults occur in task τ_i, and thus is labeled by −̇ slack(e_{i−1}, e_i) to indicate that the extra work that remained at e_{i−1} is decremented by the slack available between e_{i−1} and e_i. Then, each path starting at N^0_1 in G represents a particular fault pattern (see Figure 4). The value of δ^w_i corresponding to the worst case pattern of w faults at e_i is computed from Equation (6), which corresponds to a dynamic programming algorithm to compute the longest path from N^0_1 to N^w_i.
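A direct transcription of this dynamic program is short. The sketch below (an addition, with boundary conventions that are assumptions) fills the table column by column, taking the better of the horizontal edge (no fault in τ_i) and the vertical edge (one more recovery block of τ_i).

```python
# Dynamic-programming transcription of recursion (6) / the longest-path view
# of the graph G (illustrative): delta[w][i] is the worst-case extra work
# remaining at e_i after at most w faults among tau_1 .. tau_i.
def worst_case_delta(e, V, slack, k):
    """e: completion times with e[0] <= ... <= e[n-1] (tasks renumbered);
    V: matching recovery times; slack(t1, t2): idle slots of the fault-free
    EDF schedule in [t1, t2); k: maximum number of faults."""
    n = len(e)
    delta = [[0] * (n + 1) for _ in range(k + 1)]      # column 0: nothing has completed yet
    for i in range(1, n + 1):
        s = slack(e[i - 2] if i > 1 else 0, e[i - 1])  # slack(e_{i-1}, e_i), with e_0 = 0
        for w in range(k + 1):
            delta[w][i] = max(delta[w][i - 1] - s, 0)  # horizontal edge: no fault in tau_i
            if w > 0:                                  # vertical edge: one more recovery of tau_i
                delta[w][i] = max(delta[w][i], delta[w - 1][i] + V[i - 1])
    return delta
```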
Figure 3: The calculation of δ^w_i for an example task set with k = 2: (a) the task set (columns R, C, D, V); (b) the fault-free schedule and the computation of the δ^w_i values.

Figure 4: Two fault patterns for the task set of Figure 3 and the corresponding paths in G.
Figure
3 depicts an example of the computation of ffi w
i for a specific task set and 2.
The value of ffi w
i is written inside node N w
. We can see that, for this example,
from Equation (5), which satisfies the condition of Theorem 3, and thus, the lowest
priority task, - 3 , will finish before D in the presence of up to any two faults.
Similar to Corollary 1 discussed in the last section, a necessary and sufficient condition for the
feasibility of EDF F (T ) requires the repeated application of Theorem 3.
Corollary 3: A necessary and sufficient condition for the feasibility of EDF_F(T) for a given T and any fault pattern F with k or less faults can be obtained by applying Theorem 3 to the n task sets T_j, j = 1, ..., n, where T_j contains the j highest priority tasks in T.

Figure 5: An example with three tasks: (a) the task set (columns R, C, D, V); (b) the fault-free schedule and the computation of the δ^w_i values.

Figure 5 shows the computation of δ^w_i for an example with three tasks. Note that although the application of Theorem 3 to this example shows that the lowest priority task, τ_3, will finish by its deadline in the presence of any two faults, the set of three tasks is not feasible in the presence of two faults in τ_1 since, in this case, either τ_1 or τ_2 will miss its deadline. This is detected when Theorem 3 is applied to the task set T_2 = {τ_1, τ_2}.
To summarize, given a task set T = {τ_1, ..., τ_n} and the maximum number of faults, k, the following algorithm can be used to optimally check if EDF_F(T) is feasible for any fault pattern of at most k faults.

Algorithm "Exact"

• T_1 = {τ_1}, where τ_1 is the highest priority task in T /* the one with earliest deadline */;
  /* τ_1 is the lowest priority task (the only task) in T_1 */
• For j = 1 to n do
  1. Simulate EDF(T_j) and compute e_1, ..., e_j, as well as slack(),
  2. Renumber the tasks in T_j such that e_1 ≤ ... ≤ e_j,
  3. Compute δ^w_i, i = 1, ..., j, w = 0, ..., k, from Equation (6),
  4. Let e_{j+1} = D_ℓ, where τ_ℓ is the lowest priority task in T_j /* this is just for computational convenience */,
  5. If δ^w_{j+1} = δ^w_j −̇ slack(e_j, e_{j+1}) > 0 for some w ≤ k, then EDF_F(T) is not feasible; EXIT,
  6. If (j = n) then EDF_F(T) is feasible; EXIT,
  7. Let τ_ℓ be the highest priority task in T − T_j,
  8. T_{j+1} = T_j ∪ {τ_ℓ} /* note that τ_ℓ is the lowest priority task in T_{j+1} */.
Hence, in order to determine if the lowest priority task in a task set T_j can finish by its deadline in the presence of at most k faults, steps 1–5 (which apply Theorem 3) require O(n log n + n · k) steps to both generate EDF(T_j) and apply Equation (6). In order to determine the feasibility of EDF_F(T), we repeat the for loop n times, for j = 1, ..., n. Note, however, that with some care, EDF(T_{j+1}) can be derived from EDF(T_j) in at most O(n) steps, thus resulting in a total of O(n^2 · k) for the feasibility test. Compared with the O(n^{k+1} log n) complexity required to simulate EDF under the possible O(n^k) fault patterns, our algorithm has a smaller time complexity, even for k = 1.
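For completeness, the sketch below (an addition) wires the previous two sketches into the structure of Algorithm "Exact". It reuses simulate_edf and slack from the Section 4 sketch and worst_case_delta from the sketch above; step 5 is checked only for w = k, which suffices because δ^w_i is non-decreasing in w.

```python
# End-to-end sketch of Algorithm "Exact" (illustrative, not optimized: the
# incremental derivation of EDF(T_{j+1}) from EDF(T_j) is not implemented).
def exact_feasible(tasks, k):
    """True iff every prefix T_j (tasks ordered by priority, i.e. by deadline)
    passes the Theorem 3 check for its lowest-priority task under <= k faults."""
    by_priority = sorted(tasks, key=lambda task: task[1])   # earliest deadline = highest priority
    for j in range(1, len(tasks) + 1):
        Tj = by_priority[:j]
        e, busy = simulate_edf(Tj)
        order = sorted(range(j), key=lambda i: e[i])        # renumber so e_1 <= ... <= e_j
        e_sorted = [e[i] for i in order]
        V_sorted = [Tj[i][3] for i in order]
        delta = worst_case_delta(e_sorted, V_sorted,
                                 lambda a, b: slack(busy, a, b), k)
        D_ell = Tj[-1][1]                                   # deadline of the lowest-priority task of T_j
        if max(delta[k][j] - slack(busy, e_sorted[-1], D_ell), 0) > 0:
            return False                                    # step 5: tau_ell cannot be guaranteed
    return True
```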
As indicated in Corollary 2, a sufficient but not necessary feasibility test may be obtained by computing δ from a simulation of EDF(T), and then making sure that, for each task τ_i, δ is equal to zero between e_i and D_i. This can be completed in O(n log(n) + nk) time as shown in the following algorithm.

Algorithm "Sufficient"

1. Simulate EDF(T) and compute e_1, ..., e_n, as well as slack(),
2. Renumber the tasks in T such that e_1 ≤ ... ≤ e_n,
3. Compute δ^w_i, i = 1, ..., n, w = 0, ..., k, from Equation (6),
4. Let e_{n+1} = max_i D_i /* this is just for computational convenience */,
5. For i = 1 to n do: if δ^k_i −̇ slack(e_i, D_i) > 0, then the task set cannot be guaranteed to be feasible; EXIT.

Figure 6: An example in which {τ_1, τ_2} can tolerate any two faults: (a) the task set; (b) the computation of the δ^w_i values.
The example shown in Figure 6 shows that a task - i ,
even if the value of ffi computed from the simulation of EDF (T ) does not equal zero between e i and
. In this example,
Yet, it is easy to see that
the shown EDF schedule can tolerate any two faults (two faults in - 1 , two faults in - 2 or one fault
in each of - 1 and - 2 ). To intuitively explain this result, we note that, although
2 represents the
maximum recovery work that needs to be done at no information is kept about the priority
at which this recovery work will execute in EDF F (T ). Specifically, in the given example, some of
the work in ffi 2
will execute in EDF F (T ) at the priority of - 1 , which is lower than the priority of
Thus, it is not necessary that
to finish before its deadline. This, in
general, may happen only because it is possible for a lower priority task to finish before a higher
priority task. That is, if for some i and j, e
Finally, we note that, from the observation given in the last paragraph, algorithm "Sufficient"
will provide a sufficient and necessary feasibility test in the special case where tasks complete
execution in EDF (T ) in the order of their priorities (deadlines). That is, if computed
from EDF (T ) satisfy e i - e i+1 and D i - D i+1 . In this case, the recovery work in any ffi w
would
have to execute in EDF F (T ) at a priority higher than or equal to that of - i , and thus it is necessary
for this work to be completed by D i if - i is to complete by its deadline.
6 Related Work
Earlier work dealing with tolerance to transient faults for aperiodic tasks was carried out from the
perspective of a single fault in the system [LC88, KS86]. More recently, the fault models were
enhanced to encompass a single fault occurring every interval of time, for both uniprocessors and
multiprocessor systems [BJPG89, GMM94, GMM97]. Further, tolerance to transient faults for
periodic tasks has also been addressed for uniprocessors [RT93, RTS94, OS94, PM98, GMM98] and
multiprocessor systems [BMR99, OS95, LMM98].
In [KS86], processor failures are handled by maintaining contingency or backup schedules. These
schedules are used in the event of a processor failure. To generate the backup schedule, it is assumed
that an optimal schedule exists and the schedule is enhanced with the addition of "ghost" tasks,
which function primarily as standby tasks. Since not all schedules will permit such additions, the
scheme is optimistic. More details can be found in [KS97].
Duplication of resources have been used for fault-tolerance in real-time systems [OS92]. How-
ever, the algorithm presented is restricted to the case where all tasks have the same period. More-
over, adding duplication for error recovery doubles the amount of resources necessary for scheduling.
In [BJPG89], a best effort approach to provide fault tolerance has been discussed in hard real-time
distributed systems. A primary/backup scheme is used in which both the primary and the
backup start execution simultaneously and if a fault affects the primary, the results of the backup
are used. The scheme also tries to balance the workload on each processor.
More recently, work has been done on the problem of dynamic dispatching algorithms of frame-based
computations with dynamic priorities, when one considers a single fault. In [LLMM99], it
was shown that simply generating n EDF schedules, one for each possible task failure, is sufficient
to determine if a task set can be scheduled with their deadlines. Also, the work in [Kop97] describes
the approach taken by the Mars system in frame-based fault tolerance. Mars was a pioneer system
in the timeline dispatching of tasks through the development of time-triggered protocols. It takes
into account the scheduling overhead, as well as the need for explicit fault tolerance in embedded
real-time systems. However, MARS requires special hardware to perform fault-tolerance related
tasks such as voting and, thus, it cannot be used in a broad range of real-time systems.
7 Conclusion
We have addressed the problem of guaranteeing the timely recovery from multiple faults for aperiodic
tasks. In our work, we assumed earliest-deadline-first scheduling for aperiodic preemptive
tasks, and we developed a necessary and sufficient feasibility-test for fault-tolerant admission con-
trol. Our test uses a dynamic programming technique to explore all possible fault patterns in the
system, but has a complexity of O(n^2 · k), where n is the number of tasks to be scheduled and k is the maximum number of faults to be tolerated.

EDF is an optimal scheduling policy for any task set T in the sense that, if any task misses its deadline in EDF(T), there is no schedule for T in which no deadlines are missed. EDF is also an optimal fault-tolerant scheduling policy. Specifically, EDF_F(T) for a fault pattern F is equivalent to EDF(T′), where T′ is obtained from T by replacing the computation time, C_i, of each task τ_i in T with C_i + f_i · V_i. Hence, the work presented in this paper answers the following question optimally:
Given a task set, T , is there a feasible schedule for T that will allow for the timely
recovery from any combination of k faults?
Acknowledgments
The authors would like to thank Sanjoy Baruah for proposing the problem of tolerating k faults
in EDF schedules and for valuable discussions and feedback during the course of this work. The
authors would also like to acknowledge the support of DARPA through contract DABT63-96-C-
0044 to the University of Pittsburgh.
--R
Workload Redistribution for Fault Tolerance in a Hard Real-Time Distributed Computing System
Layered control of a Binocular Camera Head.
Single Event Upset Rates in Space.
Derivation and Caliberation of a Transient Error Reliability Model.
Implementation and Analysis of a Fault-Tolerant Scheduling Algorithm
A Program Structure for Error Detection and Recovery.
A Measurement-Based Model for Workload Dependence of CPU Errors
Measurement and Modeling of Computer Reliability as Affected by System Activity.
On Scheduling Tasks with a Quick Recovery from Failure.
A Fault-tolerant Scheduling Problem
Architectural Principles for Safety-Critical Real-Time Applications
Global Fault Tolerant Real-Time Scheduling on Multiprocessors
An Efficient RMS Admission Control and its Application To Multiprocessor Scheduling.
An Imprecise Real-Time Image Magnification Algorithm
An Algorithm for Real-Time Fault-Tolerant Scheduling in a Multiprocessor System
Enhancing Fault-Tolerance in Rate-Monotonic Scheduling
Allocating Fixed-Priority Periodic Tasks on Multiprocessor Sys- tems
Minimum Achievable Utilization for Fault-tolerant Processing of Periodic Tasks
Fault Tolerant Computing: Theory and Techniques.
Enhancing Fault Tolerance of Real-Time Systems through Time Redundancy
Scheduling Fault Recovery Operations for Time-Critical Applications
Algorithm Based Fault Tolerance for Matrix Inversion With Maximum Pivoting.
--TR
--CTR
Alireza Ejlali , Marcus T. Schmitz , Bashir M. Al-Hashimi , Seyed Ghassem Miremadi , Paul Rosinger, Energy efficient SEU-tolerance in DVS-enabled real-time systems through information redundancy, Proceedings of the 2005 international symposium on Low power electronics and design, August 08-10, 2005, San Diego, CA, USA
Alireza Ejlali , Bashir M. Al-Hashimi , Marcus T. Schmitz , Paul Rosinger , Seyed Ghassem Miremadi, Combined time and information redundancy for SEU-tolerance in energy-efficient real-time systems, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.14 n.4, p.323-335, April 2006
Xiao Qin , Hong Jiang, A novel fault-tolerant scheduling algorithm for precedence constrained tasks in real-time heterogeneous systems, Parallel Computing, v.32 n.5, p.331-356, June 2006 | real-time scheduling;fault-tolerant schedules;fault recovery;earliest-deadline first |
361273 | On Load Balancing in Multicomputer/Distributed Systems Equipped with Circuit or Cut-Through Switching Capability. | AbstractFor multicomputer or distributed systems that use circuit switching, wormhole routing, or virtual cut-through (the last two are collectively called the cut-through switching), the communication overhead and the message delivery time depend largely upon link contention rather than upon the distance between the source and the destination. That is, a larger communication overhead or a longer delivery delay occurs to a message when it traverses a route with heavier traffic than the one with a longer distance and lesser traffic. This characteristic greatly affects the selection of routes for interprocessor communication and/or load balancing. We consider the load-balancing problem in these types of systems. Our objective is to find the maximum load imbalance that can be eliminated without violating the (traffic) capacity constraint and the route to eliminate the imbalance while keeping the maximum link traffic as low as possible. We investigate the load-balancing problem under various conditions. First, we consider the case in which the excess load on each overloaded node is divisible. We devise a network flow algorithm to solve this type of load balancing problem optimally in polynomial time. Next, we impose the realistic assumption that the system uses a specific routing scheme so that the excess load transferred from an overloaded node to an underloaded node must use the route found by the routing scheme. For this case, we use a graph transformation technique to transform the system graph to another graph to which the same network flow algorithm can be applied to solve the load balancing problem optimally. Finally, we consider the case in which the excess load on each overloaded node is indivisible, i.e., the excess load must be transferred as an entity. We show that the load-balancing problem of this type becomes NP-complete and propose a heuristic algorithm as a solution. | Introduction
In multicomputer or distributed systems, dynamic creation/deletion of data and/or files may
temporarily overload some nodes'/sites' storage space while leaving some others' underloaded. Since
storage resources at a node/site are usually limited, uneven data/file distribution may result in
inefficient use of storage space and affect future data/file creation. For example, some nodes/sites
may not have sufficient space to store new data/files even if the overall system has sufficient space
for all the data/files. Load balancing in this respect is thus to transfer the excess (data) load
on overloaded nodes to underloaded nodes to balance the (data) load among all the nodes in the
system.
For multicomputer or distributed systems that use circuit switching, wormhole routing [9], or
virtual cut-through [8], the communication overhead and the message delivery time depend largely
upon the link contention rather than upon the distance between the source and the destination.
That is, a larger communication overhead or a longer delivery delay results when a message traverses
a route with heavier traffic than one with a longer distance and lesser traffic. This characteristic
greatly affects the selection of routes for interprocessor communication (IPC) and/or load balancing.
The objective of selecting a route for IPC or load balancing is thus to minimize the traffic volume on
each link so that the communication overhead/delay due to link contention can be minimized. (Note
that this objective also reduces the probability of blocking future messages.) While transferring load
from overloaded nodes to underloaded ones balances the storage load among all nodes, minimizing
the maximum link contention among all links balances the communication load among all links.
The major difference between IPC and load balancing is that in the former case we must
select a route or routes for each pair of communicating processors, while in the latter case we can
select a route or routes from an overloaded node to one or more underloaded nodes. (Note that the
excess load on an overloaded node can be transferred to any underloaded node or nodes, instead of
a particular one.) Because of this difference, most, if not all, of the variations of the IPC routing
problem are NP-hard, while optimal algorithms of polynomial-time complexity exist for several
variations of the load balancing problem.
Kandlur and Shin [7] studied the route selection problem for interprocessor communication
in multicomputer networks equipped with virtual cut-through switching capability. In this paper,
we study instead the route selection problem for load balancing in multicomputer or distributed
systems that use circuit switching, wormhole routing, or virtual cut-through. Our main concern
is to find the maximum load imbalance that can be eliminated (and the routes to eliminate the
imbalance) without violating the (traffic) capacity constraint 3 on each link, while keeping the
maximum link contention as low as possible.
Our work is a significant extension to Bokhari's work [3]. He solved the load balancing problem
under several restricted assumptions: 1) there is only unit load imbalance on each overloaded or
underloaded node, i.e., each node has either one unit of excess load or one unit of deficit load, or
is neutral, and 2) contention is not allowed on any link, i.e., no more than one unit of excess load
can be transferred via any link. Moreover, his solution approach does not take into account the
contention between the excess load transferred among processors and the other IPC traffic. In this
paper, we relax these assumptions: the load imbalance on each node can be any arbitrary value
instead of one unit only, and more than one unit of excess load can be transferred via a link as long
as the link's (traffic) capacity constraint is not violated. Two cases are studied. First, we consider
the case in which the excess load on each overloaded node is divisible, i.e., can be arbitrarily divided
and transferred to one or more underloaded nodes. Second, we consider the case in which there
may be one or more entities of excess load on each node, and each of them is indivisible and must
be transferred to an underloaded node as an entity. We also take into account the effect of existing
IPC traffic on route selection for transferring excess load. As a result, the load balancing problem
considered in this paper is much more general and practical, and more difficult to solve, than the
one treated in [3].
In [3], Bokhari considered multicomputer systems that use some specific routing schemes.
In particular, he considered mesh and hypercube interconnection networks that use row-column
(column-row) and e-cube routing schemes, respectively. He used a graph transformation technique
and a network flow algorithm to solve the load balancing problem in these systems. The graph
transformation schemes used for meshes and hypercubes are different, and their correctness is not
trivial to prove, especially for the case of hypercube interconnection networks. In contrast, we
propose a simple, unified graph transformation scheme and a network flow algorithm to solve the
load balancing problem in multicomputer/distributed systems with and without specific routing
schemes. The proposed graph transformation scheme, together with the network flow algorithm,
works for a larger class of routing schemes, including both the row-column (column-row) and e-
cube schemes. The proposed scheme also has an intuitive appeal, and its correctness is very easy
to prove.
With the proposed graph transformation scheme and the network flow algorithm, we show
that for the case of divisible excess load, the load balancing problem with or without specific
routing schemes can be solved optimally in polynomial time, i.e., we can find the maximum load
3 to be defined later.
imbalance that can be eliminated without violating the traffic capacity constraint on each link while
minimizing the maximum contention among all links. For the case of indivisible excess load, we
first prove that the load balancing problem is NP-complete, and then propose a heuristic algorithm
for it.
The rest of the paper is organized as follows. In Section 2, we formally define the load
balancing problem considered in this paper and briefly review a network flow problem whose solution
algorithms will be used to solve our load balancing problem. In Section 3, we discuss how to apply
the network flow algorithm described in Section 2 to optimally solve the load balancing problem
under the assumptions that excess load is divisible and there is no specific routing scheme in the
system under consideration. In Section 4, we show how to transform the representing graph of a
system with a specific routing scheme to another graph, so that the technique described in Section 3
can be used to find an optimal solution for the load balancing in the system. In Section 5, we give
an NP-complete proof and a heuristic algorithm for the load balancing problem with indivisible
excess load. The paper concludes with Section 6.
Problem formulation and a network flow algorithm
2.1 Problem formulation
The system under consideration is either a distributed point-to-point network or a multicomputer
with an interconnection structure, such as a mesh or a hypercube. We will use a directed
graph G = (V, E) to represent the system, where the vertex/node set V represents the set of
nodes/processors in the system, and the edge set E represents the set of communication links.
Also, a traffic capacity (or simply, capacity) function C is defined on the edge set E, i.e., each edge
(v_i, v_j) ∈ E is associated with a (traffic) capacity C(v_i, v_j), which is the maximum communication
volume (measured in data units, such as bits, bytes, or packets) that can take place from node v_i
to node v_j. If there is no such constraint on a link (u, v), C(u, v) is defined to be ∞. Note that
the traffic capacity defined in this paper is not the link bandwidth, which is the maximum data
transmission rate of a link. We can think of the traffic capacity of a link as the maximum contention
(to be defined later) allowed on the link. Also note that using the notation (v_i, v_j) to denote an
edge allows at most one edge from a vertex to another vertex. This, however, does not impose
any unnecessary constraints as multiple edges from a vertex to another vertex can be transformed
into single edges by introducing a new vertex for each of them and properly redefining the capacity
function.
We assume that when the system needs to perform load balancing, each node is either over-
loaded, underloaded, or neutral. The (total) excess load on an overloaded node s_i ∈ V is denoted
by e_i, and the deficit load on an underloaded node t_j ∈ V is denoted by d_j. The excess and deficit
loads can be any arbitrary values, as opposed to only one unit as assumed in [3]. As in [3], we
assume that the global state of the system and the degree of load imbalance on each node are known
to the central load balancing controller. (The determination of the degree of load imbalance on each
node is beyond the scope of this paper.) We require at most e i units of load to be transferred from
an overloaded node s i to other underloaded nodes, and at most d j units of load to be transferred
to an underloaded node t j from other overloaded nodes. We call this requirement the load transfer
constraint. We also assume, without loss of generality, that transferring one unit of load over a link
incurs one unit of communication volume on the link. The link capacity constraint requires that
the total communication volume on a link (v_i, v_j) not exceed the traffic capacity C(v_i, v_j)
of the link.
The system may or may not use a specific routing scheme. If a specific routing scheme is
used, a path P from v i to v j is said to be feasible (under the underlying routing scheme) if the
routing scheme will find P as one possible path from v i to v j . We assume that there is at least one
feasible path from a vertex to any other vertex (so |E| ≥ |V|). Note that there may be one or more
feasible paths from a vertex to another vertex. Since we assume there is at most one edge from a
vertex to another vertex, we can use a sequence of vertices v_{i_1}, v_{i_2}, ..., v_{i_k}, k ≥ 2, to denote a path. In
this paper, we consider only the routing schemes that satisfy the following properties:
P1. Each edge in E is a feasible path, i.e., if (v_i, v_j) ∈ E, then (v_i, v_j) is a feasible path from v_i to v_j.
P2. Any sub-path of a feasible path is also feasible, i.e., if path v_{i_1}, v_{i_2}, ..., v_{i_k} is feasible, then path
v_{i_l}, v_{i_{l+1}}, ..., v_{i_m} is also feasible, for all 1 ≤ l ≤ m ≤ k.
P3. If the last edge of a feasible path overlaps the first edge of another feasible path, then the path
formed by combining these two feasible paths is also feasible, i.e., if v_{i_1}, ..., v_{i_{k-1}}, v_{i_k} and
v_{i_{k-1}}, v_{i_k}, ..., v_{i_m} are two feasible paths, then v_{i_1}, ..., v_{i_{k-1}}, v_{i_k}, ..., v_{i_m} is also feasible (see
Fig. 1).
The routing schemes commonly used in meshes and hypercubes are the row-column (column-
row) and e-cube routing algorithms, respectively. A mesh interconnection network can be considered
as a two-dimensional array, in which each processor, denoted by ⟨x, y⟩, is connected to its four
neighboring processors ⟨x ± 1, y⟩ and ⟨x, y ± 1⟩ (the boundaries of the array may be wrapped
around). On the other hand, an n-dimensional hypercube (n-cube) has 2^n processors labeled from
0 to 2^n − 1; each processor is labeled by its binary representation/address ⟨b_{n-1} ... b_1 b_0⟩.
Figure 1: Property P3 of routing schemes considered in this paper.
Two processors are connected to each other if and only if their binary addresses differ in exactly
one bit.
The row-column algorithm on meshes routes a message/packet first horizontally from its
source node to the node that is at the same column as its destination node, and then vertically to
its destination node. For example, a message with source node ⟨3, 4⟩ and destination node ⟨5, 2⟩ will
be routed via the path ⟨3, 4⟩, ⟨4, 4⟩, ⟨5, 4⟩, ⟨5, 3⟩, ⟨5, 2⟩. The e-cube algorithm on hypercubes always
routes a message to the node that more closely matches the address of the destination node with the
comparison beginning from the least significant bit of the addresses. For example, a message with
source node ⟨01110⟩ and destination node ⟨10101⟩ will be routed via the path ⟨01110⟩, ⟨01111⟩,
⟨01101⟩, ⟨00101⟩, ⟨10101⟩.
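Both routing schemes are deterministic and easy to state as code. The following Python sketch (the function name and data layout are ours, not from the paper) computes an e-cube route and reproduces the example path above.

def e_cube_route(src: int, dst: int, n: int) -> list:
    """Return the e-cube route from src to dst in an n-cube.

    Nodes are integers whose n-bit binary form is the address; the
    e-cube rule corrects differing address bits from the least
    significant bit upward, so the route is unique.
    """
    path = [src]
    cur = src
    for bit in range(n):                 # LSB first, as in the e-cube rule
        if (cur ^ dst) & (1 << bit):     # addresses still differ in this bit
            cur ^= (1 << bit)            # move to the neighbor across that bit
            path.append(cur)
    return path

# Reproduces the example route <01110> -> <10101>:
route = e_cube_route(0b01110, 0b10101, 5)
print([format(v, '05b') for v in route])
# ['01110', '01111', '01101', '00101', '10101']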
It is easy to check that the row-column (column-row) routing algorithm on meshes satisfies
properties P1-P3, and the e-cube routing algorithm on hypercubes satisfies properties P1-
P2. To check that the e-cube routing algorithm also satisfies P3, let v_{i_1}, ..., v_{i_j}, v_{i_{j+1}} and
v_{i_j}, v_{i_{j+1}}, ..., v_{i_m} be two feasible paths in an n-cube under the e-cube routing algorithm, and let the
addresses of v_{i_j} and v_{i_{j+1}} be ⟨b_{n-1} ... b_{k+1} b_k b_{k-1} ... b_0⟩ and ⟨b_{n-1} ... b_{k+1} b̄_k b_{k-1} ... b_0⟩ (where b̄_k is the
complement of b_k), respectively (recall that since v_{i_j} and v_{i_{j+1}} are two adjacent nodes, their binary
addresses differ in only one bit, say bit k). According to the e-cube algorithm, the addresses of all the
nodes v_{i_{j+1}}, ..., v_{i_m} must have the form ⟨* ... * b̄_k b_{k-1} ... b_0⟩, i.e., they agree with the address of
v_{i_{j+1}} in bits 0 through k. Hence, it is easy to see that the path v_{i_1}, ..., v_{i_j}, v_{i_{j+1}}, ..., v_{i_m} formed
by combining the two feasible paths is also a feasible path.
We want to find the maximum load imbalance that can be eliminated (and the routes to
eliminate the imbalance) without violating the link capacity and load transfer constraints while
minimizing the maximum link contention.
2.2 The minimax flow algorithm
Our solution approach to the load balancing problem considered in this paper is based on the
minimax flow problem and its solution algorithm described in [6]. 4 This minimax flow algorithm
finds a maximum flow for a network with a 0/1 weight function that also minimizes the maximum
edge cost (the cost of an edge is defined to be the weight times the flow of the edge). Applying this
algorithm to the load balancing problem, one can view the maximum flow as the maximum load
imbalance that can be eliminated, and the edge cost as link contention.
Before describing our solution approach for the load balancing problem, we first give a brief
review on the minimax flow problem/algorithm. Details on the network flow problem and the
minimax flow problem/algorithm can be found in [10] and [6], respectively.
Let N = (V, E, s, t, C) be a network with vertex/node set V, edge set E, source s, sink t,
and capacity function C, where G = (V, E) is the underlying directed graph 5 with s, t ∈ V, and
C: E → R^+, where R^+ is the set of positive real numbers. Each edge (u, v) ∈ E is
also associated with a nonnegative real-valued weight w(u, v). For ease of discussion, we define
C(u, v) = 0 and w(u, v) = 0 for (u, v) ∉ E (i.e., both C and w are defined on V × V). If
w(u, v) ∈ {0, 1} for all (u, v) ∈ V × V, we say that the network has a 0/1 weight function w.
A flow in a network N is a function f: V × V → R^+ ∪ {0} that satisfies the following
properties:
1. Capacity constraint: f(u, v) ≤ C(u, v) for all (u, v) ∈ V × V.
2. Conservation condition: for every node other than s and t, the net flow into the node equals the net flow out of it.
For each edge (u, v) ∈ E, f(u, v) is called the flow in (u, v). For each (u, v) ∈ V × V, f(u, v) − f(v, u)
is called the net flow from u to v. The capacity constraint states that the flow in (u, v) is bounded
by the capacity C(u, v), and the conservation condition states that the net flow going into a node,
except the source and the sink, is equal to the net flow going out of the node. The value of a flow
f, denoted as |f|, is the net flow going out of the source, i.e., |f| = Σ_{v ∈ V} (f(s, v) − f(v, s)).
For the load balancing problem with divisible excess load, transferring f(u; v) units of load
from u to v and f(v, u) units of load from v to u (assuming that f(u, v) ≥ f(v, u)) can be replaced
4 It has been brought to our attention that Ahuja [2] has designed a similar algorithm to solve the minimax
transportation problem, and his algorithm can also be adapted to solve the minimax flow problem.
5 Without loss of generality, we assume that G is a simple graph, i.e., it has no loop (an edge from a vertex v to
itself) and no multiple edges (edges from a vertex u to another vertex v). Therefore, each edge can be represented
by the two end vertices of the edge. Note, however, that this assumption can be easily enforced by introducing
"dummy" vertices and properly redefining the capacity and the weight functions if graph G does not originally satisfy
the assumption.
by transferring f(u, v) − f(v, u) units of net load from u to v without changing the value of the flow
(load) and without increasing the contention (flow) in any edge of the network. Therefore, if f is a
flow with f(u, v) ≥ f(v, u) > 0, the flow in (u, v) can simply be replaced by f(u, v) − f(v, u) and
the flow in (v, u) by 0 (note that f(u, v) ≥ f(v, u) > 0 implies that both (u, v) and (v, u) belong to
E). Note, however, that the above replacement operation is not valid if excess load is indivisible
since one indivisible excess load of f(u; v) units transferred from u to v and another indivisible
excess load of f(v; u) units transferred from v to u cannot cancel each other and be replaced by a
single load transferred from one vertex to the other.
If (u, v) ∈ E and f(u, v) = C(u, v), we say that flow f saturates edge (u, v) and call (u, v) an
f-saturated edge in N. The (edge) cost (with respect to flow f) of each edge (u, v) ∈ E is defined
to be w(u, v) · f(u, v), and the (total) cost of a flow f is defined to be Σ_{(u,v) ∈ E} w(u, v) · f(u, v). The
minimax flow problem [6] is to find a maximum flow f which minimizes the maximum edge cost,
i.e., minimizes max_{(u,v) ∈ E} w(u, v) · f(u, v). We will show that our load balancing problem can be
transformed to the minimax flow problem with a 0/1 weight function. We henceforth concentrate
on networks with 0/1 weight functions.
Definition. Given a network N = (V, E, s, t, C) with a 0/1 weight function w, define
N(β) = (V, E_β, s, t, C_β), β ≥ 0, to be a new network with E_β = E, C_β(u, v) = min(C(u, v), β) if
w(u, v) = 1, and C_β(u, v) = C(u, v) otherwise, for each edge (u, v) ∈ E. An edge (u, v) ∈ E_β is called a
critical edge if w(u, v) = 1 and C_β(u, v) = β.
Let f be a maximum flow in N and f_β a maximum flow in N(β). Since C_β(u, v) ≤ C(u, v)
for all (u, v) ∈ E, we have |f_β| ≤ |f|, and |f_β| = |f| when β ≥ max_{(u,v) ∈ E} {C(u, v)}. It is easy to see
that |f_β| is nondecreasing in β. Therefore, |f_β| = |f| for all sufficiently large β. The capacity β of the critical
edges in N(β) is the maximum edge cost (note that the weight of a critical edge is 1) allowed for
the network N(β), and hence, the minimum value of the maximum edge cost for a maximum flow
in N is β*, where β* is the minimum value of β such that |f_β| = |f| and f_β
is a maximum flow in N(β). □
We propose in [6] a minimax flow algorithm, MMC01, as a solution to the minimax flow
problem with a 0/1 weight function. MMC01 simply finds β* and constructs a maximum flow
for the network N(β*). For completeness, we list Algorithm MMC01 in Fig. 2 and summarize
it below. However, for the sake of conciseness, we omit the proofs of the correctness and time
complexity of the algorithm. The interested reader is referred to [6] for details.
The idea behind Algorithm MMC01 is that in each iteration, variable β of the constructed
network N(β) is set to the maximum edge cost allowed in that iteration. With this maximum edge
Algorithm MMC01
Step 1. Find a maximum flow f and its value |f| for the network N = (V, E, s, t, C).
Step 2. Let ℓ be the number of edges with nonzero weights in N (w.l.o.g. assume ℓ ≥ 1). Set β := 0.
Step 3. Construct network N(β) = (V, E_β, s, t, C_β).
Find a maximum flow f_β and its value |f_β| for N(β).
If |f_β| = |f|, go to Step 5.
Step 4. Let Δ := |f| − |f_β|.
Let R be the set of f_β-saturated critical edges in N(β), i.e.,
R = {(u, v) ∈ E_β : w(u, v) = 1 and f_β(u, v) = C_β(u, v) = β}, and let ρ := |R|.
Set β := β + Δ/ρ. Go to Step 3.
Step 5. A maximum flow, f_β, that minimizes the maximum edge cost is found, and the maximum
edge cost with respect to flow f_β is β.
Figure 2: Algorithm for the minimax flow problem with a 0/1 weight function.
cost, the capacity of an edge (u, v) with w(u, v) = 1 is set to min(C(u, v), β), i.e., the flow allowed to
go through edge (u, v) is restricted to min(C(u, v), β), and hence, the cost of edge (u, v) is bounded
by min(C(u, v), β) · w(u, v) ≤ β. The algorithm repeatedly constructs maximum flows for networks
N(β) with increasing values of β (Steps 3-4). Initially, β := 0 (Step 2). If |f_β| = |f| for β = 0, there is a
maximum flow with zero cost. Otherwise, if |f_β| = |f| for some larger value of β, the
optimal value of β (i.e., the minimum value of the maximum edge cost, β*) is found.
In Step 3, if |f_β| < |f|, the optimal value of β has not been found. For each (u, v) ∈ E_β,
if w(u, v) = 1 and f_β(u, v) = C_β(u, v) = β, then (u, v) is an f_β-saturated critical edge in N(β).
Therefore, to get a larger flow, we need to increase the capacities of critical edges. Let Δ and ρ be
defined as in the algorithm (Step 4). It has been shown in [6] that β + Δ/ρ ≤ β*. Hence, we set
β := β + Δ/ρ and repeat the process (Step 4). This assignment guarantees that the value of β is
always less than or equal to the optimal value β*, and upon termination |f_β| = |f| and
β = β*. It has also been shown in [6] that Algorithm MMC01 terminates in at most ℓ iterations,
and hence has a time complexity of O(ℓ · M(n, m)), where ℓ is the number of edges with nonzero
weight and M(n, m) is the time complexity of the algorithm used in Algorithm MMC01 to find a
maximum flow in a network with |V| = n vertices and |E| = m edges.
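To make the iteration concrete, the following Python sketch implements the loop of Algorithm MMC01 on top of an off-the-shelf maximum-flow routine from networkx. The function name, the edge attribute names ('capacity', 'weight'), and the safety guard on an empty R are our own choices, so this is a sketch of the idea rather than the authors' implementation.

import networkx as nx

def mmc01(G: nx.DiGraph, s, t):
    """Minimax-flow sketch following Algorithm MMC01 (0/1 weights).

    G is a DiGraph whose edges carry 'capacity' and 'weight' (0 or 1)
    attributes.  Returns (flow_value, flow_dict, beta), where beta is the
    maximum edge cost w(u,v)*f(u,v), minimized over all maximum flows.
    """
    f_max, _ = nx.maximum_flow(G, s, t)            # Step 1: |f|
    beta = 0.0                                      # Step 2
    while True:
        Gb = G.copy()                               # Step 3: build N(beta)
        for u, v, d in Gb.edges(data=True):
            if d['weight'] == 1:
                d['capacity'] = min(d['capacity'], beta)
        fb, flow = nx.maximum_flow(Gb, s, t)
        if fb >= f_max:                             # |f_beta| = |f|: done (Step 5)
            return fb, flow, beta
        # Step 4: f_beta-saturated critical edges, then raise beta by Delta/|R|
        R = [(u, v) for u, v, d in Gb.edges(data=True)
             if d['weight'] == 1 and d['capacity'] == beta
             and flow[u][v] >= d['capacity']]
        beta += (f_max - fb) / max(len(R), 1)       # guard: sketch only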
3 Systems without specific routing schemes
In this section, we discuss the load balancing problem for systems without being constrained
by any specific routing scheme, i.e., the excess load to be transferred from an overloaded node s i
to an underloaded node t j can use any route (path) from s i to t j .
We first consider the case in which the excess load on each overloaded node can be arbitrarily
divided and transferred to one or more underloaded nodes. For example, if node v_i has excess
load e_i, and nodes v_j and v_k have deficit loads d_j and d_k, respectively, we can transfer e_{ij} and e_{ik}
units of load from v_i to v_j and v_k, respectively, where e_{ij} + e_{ik} ≤ e_i, e_{ij} ≤ d_j, e_{ik} ≤ d_k, and the
values of e_{ij} and e_{ik} are real numbers. In the case where excess load is indivisible, i.e., each
overloaded node may have one or more entities of excess load each of which can only be transferred
to an underloaded node as an entity, the load balancing problem becomes more difficult. We defer
the discussion of this case until Section 5.
For the case that excess load is arbitrarily divisible, Algorithm MMC01 described in Section
2.2 can be easily applied to find the maximum amount of load imbalance that can be eliminated
while minimizing the maximum link contention. Given the graph representation G = (V, E) of a
multicomputer or distributed system, and its capacity function C, let S = {s_1, s_2, ..., s_p} ⊆ V be
the set of overloaded nodes with node s_i having excess load e_i, and T = {t_1, t_2, ..., t_q} ⊆ V
be the set of underloaded nodes with node t_j having deficit load d_j, where the e_i's and d_j's are
all real numbers (note that S ∩ T = ∅). Recall that e_i is the (maximum) amount of load
on s_i to be transferred to other underloaded nodes, and d_j is the maximum amount of load t_j
can receive from other overloaded nodes. (As mentioned in Section 1, we assume that e_i's
and d_j's are given and their determination is beyond the scope of this paper.) We construct
a new graph G' = (V', E') by adding to G a new source node s, a new sink node t, and new
edges, i.e., V' = V ∪ {s, t} and E' = E ∪ {(s, s_i) : 1 ≤ i ≤ p} ∪ {(t_j, t) : 1 ≤ j ≤ q}. Define a new
capacity function C' with C'(s, s_i) = e_i, C'(t_j, t) = d_j, and C'(v_i, v_j) = C(v_i, v_j) − F(v_i, v_j)
for (v_i, v_j) ∈ E, where F(v_i, v_j) is the current communication volume on link (v_i, v_j) due to the
interprocessor communication traffic, i.e., C'(v_i, v_j) is the maximum amount of load that can be
transferred on link (edge) (v_i, v_j) without violating its traffic capacity constraint. Finally, the
weight function w is defined as w(u, v) = 1 if (u, v) ∈ E and w(u, v) = 0 otherwise.
Recall that in the load balancing problem considered in this paper, we want to find the
maximum amount of load imbalance that can be eliminated while minimizing the maximum link
contention. This is equivalent to finding a minimax flow in the network N' = (V', E', s, t, C') with the
0/1 weight function w. Let f(v_i, v_j) be the amount of load that will be transferred on link (v_i, v_j)
when the load balancing procedure is activated. There are two possible ways to define the link
contention, depending on the type of communication traffic to be minimized on a link: C1) if we
are concerned with minimizing the amount of total communication traffic on link (v_i, v_j), we define
the contention to be F(v_i, v_j) + f(v_i, v_j); C2) if we are concerned with minimizing the amount of
load to be transferred on link (v_i, v_j), we define the contention to be f(v_i, v_j).
For C2, we simply use Algorithm MMC01 to find a minimax flow f for the network
N' = (V', E', s, t, C') (with the weight function w). The value |f| and the maximum edge cost of the
minimax flow f found by MMC01 are the maximum load imbalance that can be eliminated and
the maximum edge contention under that flow, respectively. For C1, where the link contention is
defined as F(v_i, v_j) + f(v_i, v_j), the network N(β) should be redefined as follows.
Definition. Let F be defined as above. Define N(β) = (V', E_β, s, t, C_β)
to be a new network with E_β = E', C_β(u, v) = min(C'(u, v), β − F(u, v)) if w(u, v) = 1, and
C_β(u, v) = C'(u, v) otherwise, for each edge (u, v) ∈ E'. An edge (u, v) ∈ E_β is called a
critical edge if w(u, v) = 1 and F(u, v) + C_β(u, v) = β.
Moreover, the initial value of β in Step 2 of MMC01 should be changed to max_{(u,v) ∈ E} F(u, v).
In this case, the value of β in each iteration of MMC01 is the maximum contention, F(u, v) + f(u, v),
allowed for that iteration.
Since both cases C1 and C2 can be solved similarly except that the graph representations
need to be appropriately defined, unless otherwise stated, we assume in the following discussion
that the contention of an edge (u; v) is defined to be f(u; v), i.e., the total amount of excess load
that is to be transferred on that edge.
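A minimal sketch of the network construction described above is given below; the node labels 's' and 't' and the attribute names are ours, and the resulting graph can be fed directly to a minimax-flow routine such as the mmc01 sketch above.

import networkx as nx

def build_balancing_network(G: nx.DiGraph, excess: dict, deficit: dict,
                            traffic: dict) -> nx.DiGraph:
    """Build the network of Section 3 for divisible excess load (sketch).

    G        : system graph, edges carry 'capacity' = C(u, v)
    excess   : {overloaded node s_i: e_i}
    deficit  : {underloaded node t_j: d_j}
    traffic  : {(u, v): F(u, v)} existing IPC volume on each link
    """
    N = nx.DiGraph()
    for u, v, d in G.edges(data=True):
        # Remaining room on the link; weight 1 so the minimax objective
        # is the contention on the original links only.
        room = d['capacity'] - traffic.get((u, v), 0)
        N.add_edge(u, v, capacity=max(room, 0), weight=1)
    for si, e in excess.items():           # (s, s_i) edges, weight 0
        N.add_edge('s', si, capacity=e, weight=0)
    for tj, dj in deficit.items():         # (t_j, t) edges, weight 0
        N.add_edge(tj, 't', capacity=dj, weight=0)
    return N

# Usage (hypothetical data): value, flow, beta = mmc01(build_balancing_network(G, e, d, F), 's', 't')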
Suppose excess load is not arbitrarily divisible. Without loss of generality, we assume that
the smallest indivisible load entity is one unit. In this case, the flow f and the capacity C should be
redefined as functions from V × V to Z^+ ∪ {0}, where Z^+ is the set of positive integers. Algorithm
MMC01 can still be applied to find an (integral) minimax flow for the network N, except that the
statement β := β + Δ/ρ in Step 4 should now be changed to β := β + ⌈Δ/ρ⌉.
Figure 3: Illustration of graph transformation.
4 Systems with specific routing schemes
In this section, we discuss the load balancing problem for systems with special routing schemes
(that satisfy properties P1-P3). As discussed in Section 2, both the row-column (column-row)
routing scheme for meshes and the e-cube routing scheme for hypercubes satisfy properties P1-P3.
In [3], these two routing schemes were handled differently in solving the load balancing problem.
In fact, using the approach described in [3], different graph transformation methods need to be
designed for different routing schemes. In contrast, we propose a unified graph transformation
scheme which can be applied to different routing schemes as long as they satisfy properties P1-P3.
Given a system graph E) and a specific routing scheme that satisfies properties P1-
P3, we first transform G into another graph G' = (V', E') according to the following rules (see Fig. 3):
R1. For each vertex v_x ∈ V, there are d(v_x) vertices in G', where d_i(v_x), d_o(v_x), and
d(v_x) = d_i(v_x) + d_o(v_x) are the in-degree, out-degree, and total degree, respectively, of vertex v_x in G.
Each vertex v^i_{xy}, called an in-vertex/node (of v_x), corresponds to an incoming edge (v_y, v_x) of v_x in G,
and each vertex v^o_{xz}, called an out-vertex/node (of v_x), corresponds to an outgoing edge (v_x, v_z) of
v_x in G.
R2. Let v^o_{ik} and v^i_{xy} correspond to the edge (v_i, v_x) in G. There is a corresponding edge (v^o_{ik}, v^i_{xy})
in G' with the same capacity as (v_i, v_x) and with the weight of 1.
R3. If v_i, v_x, v_j is a feasible path from v_i to v_j in G, add an edge (v^i_{xy}, v^o_{xz}) in graph G' with a capacity
of ∞ and a weight of 0, where (v^o_{ik}, v^i_{xy}) and (v^o_{xz}, v^i_{jl}) are the edges in G' corresponding
to the edges (v_i, v_x) and (v_x, v_j) in G, respectively.
R4. For each overloaded node v_i, add to G' a node s_i and d_o(v_i) edges (s_i, v^o_{ik}),
each of which has a capacity of e_i and a weight of 1, where e_i is the (total) excess load on v_i.
For each underloaded node v_j, add to G' a node t_j and d_i(v_j) edges (v^i_{jl}, t_j),
each of which has a capacity of d_j and a weight of 1, where d_j is the deficit load on v_j.
R5. Add to G' a source s, a sink t, an edge (s, s_i) with a capacity of e_i and a weight of 0 for each
overloaded node v_i (s_i), and an edge (t_j, t) with a capacity of d_j and a weight of 0 for each
underloaded node v_j (t_j).
Figure 4: The transformed graph of a 3 × 3 mesh that uses the row-column routing scheme.
After the system graph G is transformed into G', we can treat the system represented by G' as one
without any specific routing scheme, and solve the load balancing problem by finding a minimax
flow for the network N' = (V', E', s, t, C') with the weight function w as described in Section 3,
where C' and w are the capacity and weight functions defined in rules R1-R5.
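The rules R1-R3 can be mechanized directly. The sketch below is our own rendering: it names the split vertices by the edge they correspond to rather than by the local indices used in the rules, and it expects the routing scheme to be supplied as a predicate feasible2(v_i, v_x, v_j) deciding whether v_i, v_x, v_j is a feasible length-2 path (for a mesh with row-column routing this would, e.g., forbid vertical-then-horizontal moves).

import math
import networkx as nx

def transform(G: nx.DiGraph, feasible2) -> nx.DiGraph:
    """Apply rules R1-R3 (sketch).

    Each edge (vi, vx) of G becomes an edge from out-vertex ('o', vi, vx)
    to in-vertex ('i', vx, vi) with the same capacity and weight 1 (R1, R2).
    For every feasible length-2 path vi -> vx -> vj, an internal edge of
    infinite capacity and weight 0 links the in-vertex of (vi, vx) to the
    out-vertex of (vx, vj) inside vx (R3).
    """
    Gp = nx.DiGraph()
    for vi, vx, d in G.edges(data=True):
        Gp.add_edge(('o', vi, vx), ('i', vx, vi),
                    capacity=d['capacity'], weight=1)
    for vx in G.nodes():
        for vi in G.predecessors(vx):
            for vj in G.successors(vx):
                if feasible2(vi, vx, vj):
                    # math.inf may be replaced by a large constant if the
                    # max-flow routine does not accept infinite capacities.
                    Gp.add_edge(('i', vx, vi), ('o', vx, vj),
                                capacity=math.inf, weight=0)
    return Gp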
The transformed graphs (obtained by applying only rules R1-R3) of a 3 × 3 mesh that uses
the row-column routing scheme and a 3-cube that uses the e-cube routing scheme are shown in
Fig. 4 and Fig. 5, respectively. Note that we assume each link between two adjacent nodes u and v
in a mesh or a hypercube is a bi-directional communication link, and thus, there are two directed
edges (u; v) and (v; u) corresponding to this link in the graph representation of the mesh or the
hypercube. From Fig. 4, it is easy to see that whenever a routing path uses a horizontal edge, it
will no longer be able to use a vertical edge, and hence, each directed path from an out-vertex to an
in-vertex in the transformed graph corresponds to a feasible path found by the row-column routing
scheme in the mesh and vice versa.
Figure 5: The transformed graph of a 3-cube that uses the e-cube routing scheme.
The transformed graph also satisfies the requirement of the e-cube routing scheme for hy-
percubes. For example, consider node ⟨011⟩ in Fig. 5. From the definition of the e-cube routing
scheme, a path going into node ⟨011⟩ from node ⟨010⟩ can go to either node ⟨001⟩ or node ⟨111⟩, a
path going into node ⟨011⟩ from node ⟨001⟩ can only go to node ⟨111⟩, and a path going into node
⟨011⟩ from node ⟨111⟩ can go nowhere.
To formally prove the correctness of the transformation, it suffices for us to prove the following
theorem.
Theorem 1: Suppose the routing scheme used in the system under consideration satisfies properties
P1-P3. Every feasible routing path from a vertex v x to another vertex v y in G corresponds
to a unique directed path from an out-vertex v^o_{xz} of v_x to an in-vertex v^i_{yw} of v_y in the transformed
graph G' and vice versa (assume G' is derived by applying rules R1-R3 to G).
y is a feasible path from v x to v y in G. From property
P2, we know that each (sub-)path v j l
feasible path in G,
and from transformation rules R1-R3, it is easy to see that there exists a directed path v
yw in G 0 .
On the other hand, suppose
zw is a directed path in G 0 . From the
transformation rules, it is easy to see that each edge must satisfies that either u 0 is
an in-vertex and v 0 is an out-vertex or vice versa. Moreover, if u 0 is an in-vertex and v 0 is an
out-vertex, then u 0 and v 0 correspond to the same vertex in G. Therefore, k must be even and all
are out-vertices, and all v 0
are in-vertices. Moreover, each pair of
vertices
2l
and v 0
correspond to the same vertex, say v
2l
is an in-vertex of v j l
and v 0
is an out-vertex of v j l
, for some i l and . Now, from the
transformation rules R1 and R2, we know that v x
y is a (directed) path from v x to
v y in G. For notational simplicity, let v
and
. We next prove by induction that path
is feasible. Specifically, we will show that all the paths v j 0
are feasible. Since (v
E, from property P1, we have that path v
is feasible in G.
Suppose path v j 0
feasible in G. Since (v 0
) is an
edge in G 0 , from the transformation rule R3 we know that v j
is a feasible path in G.
Then, by property P3, we conclude that path v
5 Systems with indivisible excess load
In this section, we discuss the case in which excess load is indivisible. We assume that there
is no specific routing scheme in the system. For systems that use certain specific routing schemes,
one can first apply the graph transformation rules described in Section 4 to the representing graph
and then treat the transformed graph as a system with no specific routing scheme.
As discussed in Section 3, in a system with indivisible excess load, each overloaded node s_i
has one or more entities of excess load e_{i1}, e_{i2}, ..., e_{ik_i}, for some k_i ≥ 1, each of which can only be
transferred to an underloaded node as an entity. Without loss of generality, we assume that each
overloaded node s_i has exactly one entity of indivisible excess load e_i, since if s_i has k_i > 1 entities
of excess load, we can add a new overloaded node s_{ij} with one entity of indivisible excess load e_{ij}
and a new edge (s_{ij}, s_i) for each entity of excess load e_{ij}, and then treat s_i as a neutral
node. Note that we use e_i to refer to either the entity of excess load or its amount.
We first show that the load balancing problem with indivisible excess load is NP-hard in the
strong sense [5] (in fact, we show that the problem of finding the maximum load imbalance that
can be eliminated without considering the link contention is already NP-hard in the strong sense if
Figure 6: Instance construction in the NP-completeness proof.
the excess load is indivisible). We then propose a heuristic algorithm as a solution to the NP-hard
case of the load balancing problem. The decision version of the load balancing problem of finding
the maximum load imbalance that can be eliminated is to ask, given a number B, whether or not
it is possible to eliminate at least B units of load imbalance (without violating the link capacity
and load transfer constraints).
Theorem 2: The decision version of the load balancing problem of finding the maximum load
imbalance that can be eliminated is NP-complete in the strong sense if the excess load is indivisible.
Proof: It is easy to see that the decision version of the load balancing problem is in NP. To
complete the proof, we reduce to it the multiprocessor scheduling problem [5]: Given a set
A = {a_1, a_2, ..., a_n} of n tasks, a length l(a_i) for each a_i, 1 ≤ i ≤ n, a number p of processors, and a
deadline D, is there a partition A_1, A_2, ..., A_p of A such that max_{1 ≤ i ≤ p} (Σ_{a ∈ A_i} l(a)) ≤ D?
Given an instance of the multiprocessor scheduling problem, we construct an instance of the
load balancing problem (shown in Fig. 6) in which (1) each s_i, 1 ≤ i ≤ n, is an overloaded node with
indivisible excess load of l(a_i) units, and t is an underloaded node with deficit load of Σ_{1 ≤ i ≤ n} l(a_i)
units; (2) there are p node-disjoint paths from u to v, all the edges on these paths have a capacity of
D, all the other edges have an infinite capacity, and B = Σ_{1 ≤ i ≤ n} l(a_i). Note that the construction
can be done in polynomial time.
It is easy to see that at least B units of load imbalance can be eliminated without violating the
link capacity and load transfer constraints if and only if there exists a solution for the multiprocessor
scheduling problem. Since the multiprocessor scheduling problem is NP-compete in the strong sense,
the decision version of the load balancing problem with indivisible excess load is also NP-complete
in the strong sense. 2
Since it is unlikely to find a polynomial time optimal algorithm for the load balancing problem
with indivisible excess load, we propose below a heuristic algorithm for the problem. Let G = (V, E)
be the graph representation of the multicomputer or distributed system under consideration, and
C(u, v) be the capacity (for load transferring purpose) of edge (u, v), for all (u, v) ∈ E. Let s_i,
1 ≤ i ≤ p, be the overloaded nodes and t_i, 1 ≤ i ≤ q, be the underloaded nodes. Each overloaded
node s_i has indivisible excess load e_i which must be routed to an underloaded node as an entity, and
each underloaded node t_i has deficit load d_i which is the maximum amount of load it can receive
from overloaded nodes. Without loss of generality, we assume that the e_i's are sorted in non-increasing
order, i.e., e_1 ≥ e_2 ≥ ... ≥ e_p.
The heuristic algorithm (see Fig. 7) consists of two phases. In Phase I, we treat the excess load
as if it were divisible and use the network flow technique described in Section 3 to find a minimax
flow f . If the excess load was indeed divisible, f would be an optimal solution in which the value jf j
is the maximum load imbalance that can be eliminated with the maximum link contention (cost)
minimized. In Phase II, we use the minimax flow f found in Phase I as a "template" and route
the entities of excess load one by one in such a way that the resulting flow on each link will be as
close to the corresponding minimax flow as possible, i.e., the value f(u; v) found in Phase I serves
as the target flow (load) for edge (u; v) to be achieved in Phase II. Since in general larger amounts
of excess load are more difficult to route than smaller amounts, we will route the excess load in
non-increasing order of load amount.
During the execution of Phase II, f'(u, v) is the total load currently routed through edge
(u, v). If the excess load currently being routed is e_i, we say that an edge (u, v) is feasible if
f'(u, v) + e_i ≤ C'(u, v), and a path from s_i to t is feasible 6 if all edges on the path are feasible.
We will route excess load e_i from the overloaded node s_i to an underloaded node t_j (actually, to
node t) only via a feasible path, i.e., excess load can only be routed via a path in which each edge
has enough (remaining) capacity. Note that in Phase II, vertex s and edges (s, s_i), 1 ≤ i ≤ p,
are, in fact, not used, i.e., the underlying graph is G'' = (V ∪ {t}, E ∪ {(t_j, t) : 1 ≤ j ≤ q}).
The excess load e_i is routed using a greedy type algorithm, called ordered depth-first search
(O-DFS). Note that f(u, v) − f'(u, v) is the difference between the target flow f(u, v) and the total
load f'(u, v) currently routed through edge (u, v). A large f(u, v) − f'(u, v) implies that the
current load routed through edge (u, v) is still far from the target value (note that f(u, v) − f'(u, v)
may be negative). Therefore, at each vertex u, we always choose to traverse next the edge (u, v)
that has the largest f(u, ·) − f'(u, ·) value among all feasible outgoing edges at u, and hence, reduce
6 Note that we overload the word "feasible" here. The term "feasible path" defined here is different from that
defined in Section 2.1.
Phase I.
Step 1. Construct a network N = (V', E', s, t, C') as described in Section 3, where V' = V ∪ {s, t}
and E' = E ∪ {(s, s_i) : 1 ≤ i ≤ p} ∪ {(t_j, t) : 1 ≤ j ≤ q}.
Let w(u, v) := 1 if (u, v) ∈ E and w(u, v) := 0 otherwise.
Step 2. Treat each excess load e_i as a divisible load, and use Algorithm MMC01 to find a minimax
flow f for N.
Phase II.
Step 1. Set f'(u, v) := 0, for all (u, v) ∈ E'.
Step 2. For i ← 1 to p do the following:
Step 2.1. Use the ordered depth-first-search (O-DFS) algorithm to find a feasible path from
s_i to t, where the ordered DFS algorithm is similar to the DFS graph traversal algorithm
[4], except that when branching out from a vertex we always choose to traverse next the
edge that has the largest f(·, ·) − f'(·, ·) value among all untraversed feasible edges at
that vertex.
Step 2.2. If there does not exist any feasible path from s_i to t, it means that the excess
load e_i will not be eliminated when the next round of load balancing is performed. If
there exists a feasible path from s_i to t, let P be the (first) path found by the O-DFS
algorithm (P will be used as the route to eliminate the excess load e_i when the next
round of load balancing is performed). Reset f'(u, v) := f'(u, v) + e_i for each edge (u, v)
on P.
Figure 7: A heuristic algorithm for the case that excess load is indivisible.
the maximum f(u, ·) − f'(u, ·) value at u.
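The following Python sketch renders Phase II with the ordered depth-first search; the data layout and names are ours, recursion is used for brevity, and f is the Phase-I minimax flow used as the template.

def phase2(G, cap, f, loads, t):
    """Phase II sketch: route indivisible loads one by one with O-DFS.

    G     : adjacency dict {u: iterable of successors v}
    cap   : {(u, v): C'(u, v)} residual link capacities
    f     : {(u, v): target flow from Phase I (minimax flow)}
    loads : list of (s_i, e_i), assumed sorted by non-increasing e_i
    t     : common sink reached through the underloaded nodes
    Returns {s_i: path or None} and the realized flow f'.
    """
    fp = {e: 0 for e in cap}                      # f'(u, v), all zero initially
    routes = {}

    def odfs(u, e, visited):
        if u == t:
            return [t]
        visited.add(u)
        # Order successors by decreasing f(u,.) - f'(u,.), feasible edges only.
        succs = sorted((v for v in G.get(u, ()) if v not in visited
                        and fp[(u, v)] + e <= cap[(u, v)]),
                       key=lambda v: f.get((u, v), 0) - fp[(u, v)],
                       reverse=True)
        for v in succs:
            rest = odfs(v, e, visited)
            if rest is not None:
                return [u] + rest
        return None                               # backtrack

    for s_i, e_i in loads:
        path = odfs(s_i, e_i, set())
        routes[s_i] = path
        if path is not None:                      # commit e_i along the path
            for u, v in zip(path, path[1:]):
                fp[(u, v)] += e_i
        # otherwise e_i cannot be eliminated in this round
    return routes, fp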
Note that the heuristic algorithm is not an optimal algorithm. It may not find the maximum
load imbalance that can be eliminated, and in cases in which it does find the maximum load
imbalance, it may not minimize the maximum link contention. We use the following example to
further illustrate the heuristic algorithm.
Example 1: Suppose the constructed network N = (V', E', s, t, C') of a system graph G = (V, E)
and the minimax flow f found at the end of Phase I are shown in Fig. 8(a). The maximum edge
Figure 8: An example that shows how the heuristic algorithm works.
cost (link contention) shown in the figure is 4 (note that only edges in E are considered).
In Phase II, we initially set f 0 (u; We first route excess load
e 1 . Starting from vertex s 1 , since f(s 1 the
O-DFS algorithm will first visit vertex v 2 . Since (v is the only feasible outgoing edge (i.e.,
the next vertex visited is t 2 . Then, vertex t is visited and a
feasible path from s 1 to t is found for e i , i.e., the path s t. For each edge (u; v) on that path,
we reset f 0 (u; . The current values of C 0 (u;
all are shown in Fig. 8(b).
We next route excess load e 2 . Using the O-DFS algorithm, we find the feasible path s
for e 2 . Note that at vertex s 2 , although f(s
still choose edge 5). For
each edge (u; v) on the path found for e 2 , we reset f 0 (u; . The current values of
are shown in
Fig. 8(c).
The next excess load to be routed is e 3 , and the path found for e 3 is s t. The values of
routed
and f 0 is updated are shown in Fig. 8(d).
Finally, we route excess load e 4 . Starting from s 4 , O-DFS first traverses edge
vertex t 3 , since there is no feasible outgoing edge, O-DFS backtracks to vertex s 4 , and then traverses
edge since t 3 has been visited, O-DFS next traverses (v
both 2, the next
edge traversed is and the path found for e 4 is s t. The values of C 0 (u;
and routed and f 0 is updated
are shown in Fig. 8(e).
The amount of load imbalance that can be eliminated in this example is 7
and the maximum link contention is 7. 2
The time complexity of the heuristic algorithm is shown in the following theorem.
Theorem 3: Phase I of the heuristic algorithm has a worst-case time complexity of O(m ·
M(n, m)), where n = |V|, m = |E|, and M(x, y) is the time complexity of finding a maximum
flow in a network of O(x) vertices and O(y) edges. Phase II of the heuristic algorithm has a
worst-case time complexity of O(p · m · log m), where p is the number of excess load entities.
Proof: The complexity of Phase I is discussed in Section 2.2. (note that there are m edges with
nonzero weight in N ).
As mentioned earlier, the underlying graph in finding paths from overloaded nodes to underloaded
nodes in Phase II is G
where q is the number of underloaded nodes. The well-known DFS algorithm can be done in
x is the number of vertices and y is the number of edges of the graph
traversed. For our O-DFS algorithm, each time when we first visit or backtrack to a vertex, we
always traverse an untraversed edge with the maximum f(\Delta; \Delta) \Gamma f 0 (\Delta; \Delta) value. Therefore, traversing
all outgoing edges at a vertex u takes at most O(d is the out-degree
of u and the logarithm is to the base 2. Thus, the total time to route excess load entities is at most
[1, 4]
Since we need to route p excess load entities, e i the worst-case time complexity of Phase
II is O(p m). (Note that since we assume there is
only one entity of indivisible excess load on each overloaded node and there is at least one directed
path from a vertex to any other vertex in G, we have m
6 Conclusion
In this paper, we consider the load balancing problem in multicomputer or distributed systems
that use circuit switching, wormhole routing, or virtual cut-through, with the objective of
finding the maximum load imbalance that can be eliminated without violating the (traffic) capacity
constraint on any link while minimizing the maximum link contention among all links.
We solve the problem under various conditions. We give an O(m · M(n, m)) optimal algorithm
for the load balancing problem with divisible excess load, where n is the number of nodes/processors
and m is the number of links in the system, and M(x; y) is the time complexity of finding a maximum
flow in a network of x vertices and y edges. We propose a graph transformation technique for
systems with specific routing schemes that satisfy properties P1-P3 described in Section 2.1. This
graph transformation technique transforms the representing graph of a system to another graph
with which the load balancing problem can be solved optimally in the same manner and with the
7 It is easy to see that if each overloaded node has more than one entity of indivisible excess load, the time
complexity of Phase II should be changed to O(p since we will add p vertices and p edges to
the system graph (however, we still have m ≥ n > q).
same time complexity as the load balancing problem for systems without specific routing schemes.
We also consider the load balancing problem for the case in which excess load is indivisible. We
prove that the problem is NP-hard and propose, based on the O(m · M(n, m)) optimal algorithm
for the problem with divisible excess load, an O(m · M(n, m) + p · m · log m) heuristic algorithm as
a solution to the problem, where p is the number of excess load entities.
The result obtained in this paper is a significant extension to Bokhari's work reported in [3].
We generalize his work in several directions: 1) we relax the assumption of unit load imbalance; 2)
we relax the assumption of unit link contention; 3) we consider the effect of existing IPC traffic on
the selection of routes for load balancing; 4) our graph transformation technique and network flow
model can be applied to a larger class of routing schemes. Moreover, our solution approach and
algorithms are more intuitive and simpler than those proposed in [3].
--R
The Design and Analysis of Computer Algorithms.
Algorithms for the minimax transportation problem.
A network flow model for load balancing in circuit-switched multicomputers
Introduction to Algorithms.
Computers and Intractability: A Guide to the Theory of NP-Completeness
A fast algorithm for the minimax flow problem with 0/1 weights.
Traffic routing for multicomputer networks with virtual cut-through capability
Virtual cut-through: a new computer communication switching technique
A survey of wormhole routing techniques in direct networks.
Data Structures and Network Algorithms.
--TR
--CTR
Michael E. Houle , Antonios Symvonis , David R. Wood, Dimension-exchange algorithms for token distribution on tree-connected architectures, Journal of Parallel and Distributed Computing, v.64 n.5, p.591-605, May 2004
Patrick P. C. Lee , Vishal Misra , Dan Rubenstein, Distributed algorithms for secure multipath routing in attack-resistant networks, IEEE/ACM Transactions on Networking (TON), v.15 n.6, p.1490-1501, December 2007 | minimax flow problem;overloaded/underloaded nodes;load balancing;excess/deficit load;link traffic |
361502 | An Implementation of Constructive Synchronous Programs in POLIS. | Design tools for embedded reactive systems commonly use a model of computation that employs both synchronous and asynchronous communication styles. We form a junction between these two with an implementation of synchronous languages and circuits (Esterel) on asynchronous networks (POLIS). We implement fact propagation, the key concept of synchronous constructive semantics, on an asynchronous non-deterministic network: POLIS nodes (CFSMs) save state locally to deduce facts, and the network globally propagates facts between them. The result is a correct implementation of the synchronous input/output behavior of the program. Our model is compositional, and thus permits implementations at various levels of granularity from one CFSM per circuit gate to one CFSM per circuit. This allows one to explore various tradeoffs between synchronous and asynchronous implementations. | Introduction
Our purpose is to reduce the gap between two distinct models of concurrency that are fundamental
in the embedded systems framework, the synchronous and asynchronous models, with
application to systems written in the Esterel synchronous programming language and implemented
in the POLIS system developed at UC Berkeley and Cadence.
The synchronous or zero-delay model is used in circuit design and in synchronous programming
languages such as Esterel [6], Lustre [12], Signal [10], and SyncCharts [2] (a synchronous
version of Statecharts [13]), see [11] for a global overview. In this model, all bookkeeping actions
such as control transmission and signal broadcasting are conceptually performed in zero-
time, only explicit delays taking time. Thus, a conceptual global clock controls precisely when
statements simultaneously compute and exchange messages. The model makes it possible to
base design on deterministic concurrency, which is much easier to deal with than classical non-deterministic
concurrency. Compiling, optimizing, and verifying programs is done using powerful
Boolean computation techniques, see [5].
The synchronous model is well-suited for direct specification and implementation of comparatively
compact programs such as protocols, controllers, human-machine interface drivers,
and glue logic. In this case, one can build a global clock slow enough to react to each possible
environmental input.
In an asynchronous model, processes exchange information through messages with non-zero
travel time. Asynchronous models are well-suited for network-based distributed systems
specification and for hardware/software codesign, where the relative speed of components may vary
This work was begun while the first author was visiting Cadence Berkeley Laboratories, August 1998.
widely. There are many asynchronous formalisms with varied communication policies. For ex-
ample, CSP processes [14] communicate by rendezvous, while data-flow processes [15] exchange
data through queues or buffers.
The POLIS [3] mixed synchronous/asynchronous model has been developed at UC Berkeley
and Cadence, with primary focus on codesign. It is a Globally Asynchronous Locally Synchronous
(GALS) model, in which synchronous nodes called CFSMs (Codesign Finite State
Machines) are arranged in an asynchronous network and communicate using non-blocking 1-
place buffers, and through a synthesized real-time operating system (RTOS) for the software
part. The CFSMs can be programmed in a concurrent synchronous language such as Esterel,
thus taking maximal advantage of the synchronous model at the node level. The model can be
efficiently simulated and implemented in hardware and/or software; notice that 1-place buffers
are much simpler to implement than FIFOs, especially at the hardware/software boundaries.
However, POLIS networks have much less intrinsic semantic safety than FIFO-based dataflow
Kahn networks [15], which are behaviorally deterministic, and their behavior must be carefully
controlled. In particular, buffer overwriting in POLIS can lead to non-deterministic behaviors
that can be hard to analyze and prove correct.
Here, we show that the behavior of a synchronous circuit or program can be nicely implemented
in a POLIS network. Of course, one can implement a synchronous program in a single
CFSM node in a straightforward way. Here, we are interested in distributed implementations
where the synchronous behavior is split between asynchronously communicating units, without
a global clock. In practice, this is useful when the application behavior is naturally synchronous
but the execution architecture is distributed and possibly heterogeneous, with physical inputs
and outputs linked to different computing units. We retain the synchronous philosophy when
specifying an application and we benefit from the flexibility and efficiency of CFSM networks
in the implementation. We propose a solution in which the CFSM granularity can be chosen at
will: any part of the synchronous program can be implemented in a single synchronous CFSM,
which makes it possible to partition the program according to the architecture constraints and
the best synchrony/asynchrony compromise.
Other authors have proposed such distributed implementations of synchronous programs on
asynchronous networks, see for example [9, 8], and we draw much from their work. However, our
implementation takes maximal advantage of the semantics of the objects we deal with and it is
presented differently, with a trivial correctness proof. Technically speaking, we present a POLIS
implementation of constructive synchronous circuits [5, 18], which is a class of well-behaved cyclic
circuits that generalizes the usual class of acyclic circuits. Since Esterel programs are translated
into constructive circuits [4], this implementation handles Esterel as well.
The key of any implementation of synchronous programs is the realization of a conceptual
zero-delay reaction to an input assignment. In a distributed asynchronous network, this must
be done by a series of message exchanges. In our implementation, the messages are CFSM-
events that carry proven facts about synchronous circuit wire or expression values. Such facts
are exactly the logical information quanta on which the constructive semantics are based. The
CFSM nodes generate output facts from input facts according to the semantic deduction rules.
This is done over a series of computations since conceptually simultaneous facts now arrive at
dioeerent times.
For a single reaction of a program, the number of events is uniformly bounded. No buoeer
overwrite can occur in the network. Although the internal behavior is non-deterministic, the
overall behavior respects the synchronous semantics of the original program and thus is de-
terministic. This is true independently of the schedule employed by the RTOS. In addition,
execution of successive synchronous reactions can be pipelined.
Finally, the implementation takes full advantage of the mathematical properties of the constructive
semantics. In particular, the compositionality property makes it possible to arbitrarily
group elementary circuit gates into CFSM nodes: this allows any level of granularity, from one
single CFSM for the program at one extreme to one CFSM per individual gate at the other.
Clearly, there are many applications for which using only the synchronous formalism at
specification level makes no sense, in which case our results are not directly applicable. Nevertheless,
we think that they show that the apparent distance between synchrony and (controlled) asynchrony
can be reduced, and we hope that the technology we present can serve as a basis for
future mixed-mode language developments.
Figure 1: Circuit C_1
We start in Section 2, by presenting the logical, semantical, and electrical views of constructive
circuits. In Section 3, we briefly present the POLIS CFSM network model of computation. Our
implementation of constructive circuits in this model is presented in Section 4. We discuss possible
applications and synchrony/asynchrony tradeoffs in Section 5, and we conclude in Section 6.
Constructive Circuits
Constructive circuits are "well-behaved" possibly cyclic circuits that generalize the class of acyclic
circuits. Acyclic circuits can be viewed in two different ways:
• as Boolean equation systems, then defining a Boolean function that associates an output
value assignment with each input value assignment;
• as electrical devices made of wires and gates that propagate voltages and have certain
delays: if the inputs are kept electrically stable long enough at one of two binary voltages
(say 0V and 3V), the outputs stabilize to one of the binary voltages.
Relating the Boolean and electrical approaches is easy for acyclic circuits: when the outputs electrically
stabilize, they take the voltages corresponding to the results of the Boolean input/output
function. Constructive circuits have exactly the same characteristics even in the presence of cycles
2.1 The Behavior of Cyclic Circuits
A circuit has input, output, and internal wires; the latter we also call local variables. In our
examples, we use the letters I ; J for the inputs and X;Y for the outputs and locals, making it
precise which are the outputs where necessary. Each output or local variable is de-ned by an
equation is an expression built using variables and the operators : (negation),
- (conjunction), an - (disjunction). For simplicity, we assume that an expression E is either a
variable, the negation of a variable, or a single n-ary operator - or - applied to variables or the
negation of variables. Any circuit can be put into this form by adding enough auxiliary variables.
A circuit can also be considered as a network of gates, as pictured in Figure 1. Each wire
has a single source and multiple targets. The gates correspond to the operators.
As a running example, we shall consider the following circuit C 1
, with outputs X and Y :
ae
Notice that C 1 is cyclic: Y appears in the equation of X and conversely.
2.1.1 Circuits as Boolean Equations
In the Boolean view, we try to solve the circuit equations using Boolean values 0 and 1. An
input assignment i associates 0 or 1 with some input variables. An input assignment is complete
if it associates a value with any input variable. For a complete input assignment i, a Boolean solution
of the circuit is an assignment of values 0 or 1 to the other variables that satisfies the equations.
An acyclic circuit has exactly one Boolean solution for each complete input assignment. A
cyclic circuit may have zero, one or several solutions for a given complete input assignment. For
example, consider the case where there is no input and one output X . For there is no
solution. For there is a unique solution there are two solutions
For
, there is a unique solution if The solution is
the equations reduce to
and there are two solutions,
2.1.2 Circuits as Electrical Devices
In the electrical view, one preferably uses the graphical presentation and vocabulary. Wires
associated with variables carry two different voltages, also called 0 and 1 for simplicity, and
logic gates implement the Boolean operators. Wires and gates can have propagation delays.
We shall not be very accurate here about delays; technically, the delay model we refer to is the
up-bounded inertial delay model described in [7, 17]. A complete input assignment is realized
by keeping the input wires stable over time at the appropriate voltages. Voltages propagate in
the circuit wires according to the laws of electricity, and the property we are interested in is
wire voltage stabilization after a bounded time. The non-input wires are assumed to be initially
unstable.
Outputs of acyclic circuits always stabilize. Outputs of cyclic circuits may or may not stabilize.
For example, the output of X = X ∧ ¬X stabilizes (to 0), while that of X = ¬X oscillates between
0 and 1. The output of X = X remains unstable. When wires stabilize, their values always
satisfy the equations.
Stabilization may depend on delays. For example, in the Hamlet circuit 1 defined by X = X ∨ ¬X,
the output X stabilizes to 1 for some delays and does not stabilize for others, see [5].
Stabilization may also depend on the input assignment: for C1, outputs stabilize to the right
Boolean values unless I = J = 1, in which case the behavior is delay-dependent, with no stabilization
for some delays.
2.2 Constructive Boolean Logic
Notice that the perfect match between Boolean and electrical solutions is lost for cyclic circuits:
for Hamlet, the Boolean output function is well-defined and yields X = 1, while
electrical stabilization may not occur. Hamlet has a unique Boolean solution because 1 happens
to be a solution while 0 is not. Finding the solution involves propagating non-causal information,
and this cannot be done by non-soothsaying electrons in wires. Fortunately, Boolean logic can
be weakened into constructive Boolean logic, in which the solution X = 1 to Hamlet is rejected,
thereby rendering the Boolean and electrical results the same: no solution exists. Constructive
Boolean logic precisely models electrical behavior.
2.2.1 Facts and Proofs
Constructive Boolean logic deals with facts and proofs. A fact has the form E = 0 or E = 1,
where E is a Boolean expression. An input fact is I = 0 or I = 1 for an input variable I. An
input assignment i is a set of input facts. Facts are deduced from other facts by deduction rules.
There are deduction rules for each type of gate and one rule to handle equations. Here are the
rules for the ∧ conjunction operator:

    X = 0                 Y = 0                 X = 1    Y = 1
  -----------  (l-and)  -----------  (r-and)  ----------------  (b-and)
   X ∧ Y = 0             X ∧ Y = 0                X ∧ Y = 1

The facts above the horizontal bar are the premises and the fact below the bar is the conclusion.
Rule (b-and) reads as follows: from the facts X = 1 and Y = 1, deduce the fact X ∧ Y = 1. The rules
for ∨ (or-gate) are dual. The rules for negation are:

    X = 0                X = 1
  ----------  (not-0)  ----------  (not-1)
   ¬X = 1               ¬X = 0

Notice that X ∨ Y behaves as ¬(¬X ∧ ¬Y), just as in classical Boolean logic. The rules for a
circuit equation X = E are

    E = b
  ---------
    X = b

where b can be either 0 or 1.
A proof is a sequence of facts that starts with the facts of an input assignment and such that
any other fact can be deduced from the previous facts using a rule. The following consistency
lemma shows the soundness of the proof system. It is easily shown by induction on the length
of the proof.

Lemma 1 If there exists a proof of a fact E = b (b = 0 or 1), then there is no proof of the opposite fact E = 1 − b.

1 Think of X as "to be".
2.2.2 Proof Examples
We give some proof examples for C1. We present them in annotated proof form, writing at each
step the deduced fact, the premises, and the applied deduction rule. Here is an annotated proof
P01 for X = 0 and Y = 1, with complete input assignment I = 0, J = 1:

  (1) I = 0              input fact
  (2) J = 1              input fact
  (3) I ∧ ¬Y = 0         from (1) by (l-and)
  (4) X = 0              from (3) by the equation rule
  (5) ¬X = 1             from (4) by (not-0)
  (6) J ∧ ¬X = 1         from (2) and (5) by (b-and)
  (7) Y = 1              from (6) by the equation rule

Here is the dual proof P10 for X = 1 and Y = 0, with complete input assignment I = 1, J = 0:

  (1) J = 0              input fact
  (2) I = 1              input fact
  (3) J ∧ ¬X = 0         from (1) by (l-and)
  (4) Y = 0              from (3) by the equation rule
  (5) ¬Y = 1             from (4) by (not-0)
  (6) I ∧ ¬Y = 1         from (2) and (5) by (b-and)
  (7) X = 1              from (6) by the equation rule

Notice that the deduction ordering is X first, Y next in P01, while it is the reverse ordering, Y
first, X next, in P10. This is the main difference between acyclic and constructive circuits: in
acyclic circuits, one can find a data-independent variable ordering valid for all input assignments.
In constructive circuits, such an ordering exists for each input assignment, but it may be data-dependent.
2.2.3 Example of Non-Provable Circuits
The circuits X = X and X = ¬X are both rejected as having no output proof, and for the very
same reason: there is no way to start a proof. Notice that the existence or non-existence of a
Boolean solution is not relevant. The circuit X = X, for example, has two Boolean solutions:
X = 0 and X = 1. However, to verify either solution one would have to first make an assumption
about the solution, and then verify the validity of the assumption. Constructive proofs must
only propagate facts; they are not allowed to make assumptions.
Constructive Boolean logic also rejects the Hamlet circuit X = X ∨ ¬X, for which no output fact
can be proven. As above, there is no way to start a proof without making an assumption. The
law of excluded middle X ∨ ¬X = 1 does not hold in constructive logic, unless X has already
been proved to be 0 or 1.
2.2.4 Output Proofs and Complete Proofs
An output proof is a proof that proves a fact for each output variable. A complete proof is a
proof that proves a fact for each variable. A circuit is output constructive w.r.t. a complete input
assignment i if there is an output proof starting with the facts in i. The circuit is completely
constructive w.r.t. i if there is a complete proof starting with the facts in i.
The difference is that no fact is needed for an intermediate variable in an output proof if
this variable is not needed to prove the output facts. It is even allowed that no fact about this
variable can be proved. Consider for example a circuit in which only X is an output and whose
local variable Y cannot be given a value for some input: no fact for Y can be proved, while the
output facts can. The circuit is output constructive but not completely constructive for this
input assignment.
Although output constructiveness seems more general, we shall deal with complete constructiveness
in the sequel since it is much easier to handle. Complete constructiveness is also required
by the semantics of Esterel [4].
2.2.5 Constructive Logic Matches Delay Independence
Constructive Boolean logic exactly captures delay independence: given a complete input assignment,
a circuit electrically stabilizes its output wires (resp. all its wires) for any gate and
wire delays if and only if it is output constructive (resp. completely constructive). This fundamental
result is shown in [18, 17] using techniques originally developed for asynchronous circuit
analysis [7].
Notice that a given fact can have several proofs. Delay assignments actually select proofs.
Consider a circuit whose output X is defined by a conjunction X = I ∧ Y for some local variable Y.
For an input assignment where both I = 0 and Y = 0 are provable, there are two
proofs of X = 0: the first one deduces it from I = 0, the second one deduces the same fact
from Y = 0, itself deduced from the input. Electrically speaking, the first proof occurs
when I = 0 propagates through X's and-gate before Y = 0, while the second proof occurs if
there is a long delay on the I input wire, long enough for Y = 0 to propagate through X's and-gate
before I = 0.
2.3 Scott's Fixpoint Semantics
The classical model of Boolean logic is binary, variables taking values in B = {0, 1}. Constructive
Boolean logic has a natural ternary semantic model.

2.3.1 The Ternary Model

The ternary domain is B⊥ = {⊥, 0, 1}. The undefined value ⊥ (read bottom) represents absence or
non-provability of information. The domain is partially ordered by Scott's information ordering
⊥ ≤ 0 and ⊥ ≤ 1, the total values 0 and 1 being incomparable 2 . Tuples x = (x1, ..., xn) in B⊥^n are partially
ordered componentwise: x ≤ y iff xk ≤ yk for all k. Functions are required to be monotonic:
x ≤ y in B⊥^n must imply f(x) ≤ f(y) in B⊥^m. A
composition of monotonic functions is monotonic. Functions are partially ordered by f ≤ g if
f(x) ≤ g(x) for all x.
2.3.2 The Fixpoint Theorem
The key result in Scott's semantics is the fixpoint theorem, which we state here in a simple case.
Let f : B⊥^n → B⊥^n be monotonic, and let a fixpoint of f be an element x of B⊥^n such that
f(x) = x. The theorem states that f has a least fixpoint lfp(f), which is the (finite) limit of the
increasing sequence ⊥ ≤ f(⊥) ≤ f(f(⊥)) ≤ ...
The function lfp that associates the least fixpoint lfp(f) with f is itself monotonic.
2.3.3 The Basic Ternary Operators
The Boolean operators are extended as follows to the ternary logic. There is no choice for
negation, which must be monotonically defined by ¬⊥ = ⊥, ¬0 = 1, ¬1 = 0. For conjunction
∧, we choose the parallel extension, which is the least monotone function such that
0 ∧ ⊥ = ⊥ ∧ 0 = 0; it closely corresponds to electrical gate behavior and to our proof rules. The
extension of disjunction ∨ is dual.
Other possible extensions of ∧ are the strict extension, such that 0 ∧ ⊥ = ⊥ ∧ 0 = ⊥,
the left sequential extension, such that 0 ∧ ⊥ = 0 and ⊥ ∧ 0 = ⊥, and the symmetrical right
sequential extension. They are definable from the parallel extension in constructive logic (hint:
the expression X ∨ ¬X has value 1 if and only if X is defined). See [16, 1] for a complete discussion
of these extensions. It is interesting to note that the parallel extension cannot be defined in
sequential languages such as C and requires a parallel interpretation mechanism, hence its name.
2.3.4 Circuits as Fixpoint Operators
A circuit with input vector i = (i1, ..., im) in B⊥^m and other variables in vector x = (x1, ..., xn)
in B⊥^n defines an equation system
of the form x = f(i, x), where the k-th component of f is given by the right-hand side of the
equation for xk. Given an input assignment i, let us write f_i for the function x ↦ f(i, x)
from B⊥^n to itself. We call the solution of the circuit w.r.t. i the least fixpoint lfp(f_i) of f_i. For
example, in circuit C1, the least fixpoint for input I = 0, J = 1 is X = 0, Y = 1, while the least
fixpoint for I = J = 1 is X = ⊥, Y = ⊥.
The next theorem shows that the constructively deducible facts exactly correspond to the
fixpoint solution.

Theorem 1 Given a circuit C defining a function f and an input assignment i, a fact X = b is
constructively provable if and only if the X-component of the least fixpoint of f_i
has value b.
Unfortunately, some authors use f0; to mean the same thing!
The proof is standard and left to the reader (use inductions on term size and proof length).
Notice that the theorem does not require the input assignment to be complete. It is also
valid when some inputs are ⊥. Then, no fact for these inputs can be used in deductions.
This concludes the theory of constructive circuits: electrically stabilizing in a delay-independent
way is the same as being provable in constructive Boolean logic or as having a non-⊥ value in
the least fixpoint.
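To make the ternary semantics concrete, here is a small Python sketch of the parallel extensions of the operators over B⊥ = {⊥, 0, 1} and of the Kleene iteration ⊥ ≤ f(⊥) ≤ f(f(⊥)) ≤ ... computing lfp(f_i) for circuit C1. This is our own illustration, not part of the Esterel v5 tools; ⊥ is encoded as None.

```python
BOT = None  # the undefined value, written ⊥ in the text

def t_not(x):
    return BOT if x is BOT else 1 - x

def t_and(x, y):
    # parallel extension: a 0 wins even against ⊥, a 1 needs both arguments
    if x == 0 or y == 0:
        return 0
    if x == 1 and y == 1:
        return 1
    return BOT

def t_or(x, y):
    # dual of the parallel and, via De Morgan
    return t_not(t_and(t_not(x), t_not(y)))

def lfp(f, n):
    """Least fixpoint of a monotonic f on ternary n-tuples, by Kleene iteration."""
    x = (BOT,) * n
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

def C1(I, J):
    # state is (X, Y); equations X = I ∧ ¬Y, Y = J ∧ ¬X
    f = lambda s: (t_and(I, t_not(s[1])), t_and(J, t_not(s[0])))
    return lfp(f, 2)

print(C1(0, 1))  # (0, 1): constructive
print(C1(1, 1))  # (None, None): X and Y stay ⊥, not constructive
```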
2.4 Algorithms for Circuit Constructiveness
There are algorithms to detect whether a circuit is constructive for a given input assignment
or for all complete input assignments. Here, we present a linear-time algorithm that works
for one complete input assignment. It is used in the Esterel v5 compiler, for interpretation
mode (option -I). Algorithms checking constructiveness for all inputs or for some input classes
are much more complex. The BDD-based algorithm used in the Esterel v5 compiler (option
-causal) is presented in [18, 17, 19]. It will not be considered here.
2.4.1 An Interpretation Algorithm
The running data structure of the algorithm is composed of two sets of facts called DONE
and TODO and of an array PRED of integer values indexed by non-input variable names.
The TODO set initially contains the input facts, and the DONE set is initially empty. The
array entry PRED[X] is initialized to the number of predecessors of X, which is the number of
variable occurrences in the definition equation of X, also called the fanin number in the electrical
presentation.
The algorithm successively takes a fact from TODO, puts it in DONE, and propagates its
constructive consequences, which may add new facts to TODO and decrement the predecessor
counts. Propagating the consequences of a fact V = b works as follows (a code sketch of the full
algorithm is given after this list):
• All variables that refer to V in their definition decrement their predecessor count according
to the number of occurrences of V in their definition.
• If V = b immediately determines that some variable W has value c, then that fact W = c is added
to TODO. This occurs if b = 0 and W is defined by a conjunction where V appears positively, in which case
W = 0, or by a disjunction where V appears negatively, in which case W = 1 (symmetrically if
b = 1). This fact propagation rule corresponds to deduction rules such as (l-and) and
(r-and), possibly combined with (not-0) and (not-1).
• If the predecessor count of a variable W falls to 0 and the value of W is not yet determined,
a new fact W = c is added to TODO, where c is the identity of the definition operator of
W, i.e. 1 for ∧ and 0 for ∨. This corresponds to rules such as (b-and).
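The following Python sketch is our own rendering of this propagation loop (the data-structure names follow the text; everything else, including the circuit encoding, is ours). Circuits are given in the normalized form of Section 2.1: each equation is an operator ('and' or 'or') applied to a list of possibly negated fanins.

```python
def interpret(equations, inputs):
    """equations: {var: (op, [(src, negated), ...])} with op in {'and', 'or'}.
       inputs: complete input assignment, e.g. {'I': 0, 'J': 1}.
       Returns the facts proved for the non-input variables."""
    IDENTITY = {'and': 1, 'or': 0}
    TODO = list(inputs.items())
    DONE = {}
    PRED = {v: len(fanins) for v, (op, fanins) in equations.items()}

    while TODO:
        V, b = TODO.pop()
        DONE[V] = b
        for W, (op, fanins) in equations.items():
            for src, negated in fanins:
                if src != V:
                    continue
                PRED[W] -= 1
                val = (1 - b) if negated else b
                # a 0 literal determines a conjunction, a 1 literal a disjunction
                if W not in DONE and val == 1 - IDENTITY[op]:
                    TODO.append((W, val))
                    DONE[W] = val          # determined; still to be propagated
            if PRED[W] == 0 and W not in DONE:
                TODO.append((W, IDENTITY[op]))   # empty conjunction/disjunction
                DONE[W] = IDENTITY[op]
    return {v: b for v, b in DONE.items() if v in equations}

C1 = {'X': ('and', [('I', False), ('Y', True)]),   # X = I ∧ ¬Y
      'Y': ('and', [('J', False), ('X', True)])}   # Y = J ∧ ¬X
print(interpret(C1, {'I': 0, 'J': 1}))   # {'X': 0, 'Y': 1}
print(interpret(C1, {'I': 1, 'J': 1}))   # {}  -- deadlock: not constructive
```

Running it on C1 reproduces the two executions discussed next: a complete set of facts for I = 0, J = 1 and a deadlock for I = J = 1.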
2.4.2 Execution Example
For C1 with inputs I = 0, J = 1, we start in the following state:
  TODO = {I = 0, J = 1},  DONE = ∅,  PRED[X] = 2,  PRED[Y] = 2.
We remove I = 0 from TODO and put it in DONE. We decrement the predecessor count of X.
Since I = 0 immediately implies X = 0, we add that fact to TODO:
  TODO = {J = 1, X = 0},  DONE = {I = 0},  PRED[X] = 1,  PRED[Y] = 2.
We now process J = 1. The only consequence is that the number of predecessors of Y is
decremented, since J = 1 does not determine Y by itself:
  TODO = {X = 0},  DONE = {I = 0, J = 1},  PRED[X] = 1,  PRED[Y] = 1.
We now process X = 0. This fact does not directly determine the value of Y, but it exhausts Y's
predecessor list:
  TODO = ∅,  DONE = {I = 0, J = 1, X = 0},  PRED[X] = 1,  PRED[Y] = 0.
We can now deduce that the value of Y is 1, since Y is an empty conjunction. We add this fact
to TODO:
  TODO = {Y = 1},  DONE = {I = 0, J = 1, X = 0},  PRED[X] = 1,  PRED[Y] = 0.
We have computed all the facts we need. However, it is useful to perform the last step, which will
bring us back to a nice clean state. Processing Y = 1 puts this fact in DONE and decrements
X's predecessor count:
  TODO = ∅,  DONE = {I = 0, J = 1, X = 0, Y = 1},  PRED[X] = 0,  PRED[Y] = 0.
Since we build proofs, the result of the algorithm does not depend on the order in which we pick
facts in TODO.
For the same input I = 0, J = 1, there are also runs where the output values are computed faster but
the cleanup phase is longer.
For the non-constructive input I = J = 1, we rapidly reach a deadlock:
  TODO = ∅,  DONE = {I = 1, J = 1},  PRED[X] = 1,  PRED[Y] = 1.
There are no remaining facts in TODO, and yet no fact has been established for X or Y, and
their predecessor counts are positive.
The following result shows that our algorithm is correct and complete:

Theorem 2 Let C be a circuit with n variables and i be a complete input assignment. The
circuit is output constructive w.r.t. i if and only if the algorithm, started with i, computes a
fact for each output variable. The circuit is completely constructive w.r.t. i if and only if the
algorithm terminates with all predecessor counts 0.

For a completely constructive circuit, the algorithm always takes the same number of steps,
which is the sum of all the fanin counts.
3 POLIS and the CFSM model
Recall that our goal is to implement synchronous circuits within the POLIS system. POLIS [3]
is a software tool developed at UC Berkeley for the synthesis of control-dominated reactive
systems that are targeted for mixed hardware/software implementations. The primary feature of
POLIS is its underlying CFSM model of computation; it is within this model that we implement
synchronous circuits.

3.1 Overview

The model of computation consists of a network of communicating Codesign Finite State Machines
(CFSMs). The communication style is called GALS: globally asynchronous, locally synchronous.
At the node level, each CFSM has synchronous semantics: when run, a CFSM reads
inputs, computes, and writes outputs instantaneously. At the network level, the CFSMs communicate
asynchronously: communication is done via data transmission through buffers, and no
assumptions are made about the relative delays of the computations performed by each CFSM
or about the delays of the data transmission.

3.2 CFSM Communication

Each CFSM has a set of inputs and outputs, and CFSMs are connected with nets. A net
associates an output of one CFSM with some inputs of other CFSMs. The information transmitted
between CFSMs is composed of a status and a value, which are stored in 1-place communication
buffers. For each net, there is one associated value buffer and multiple status buffers, one for
each attached CFSM input. Thus, each CFSM has a local copy of the status of each of its inputs,
while the value is stored in a shared buffer. A CFSM input buffer is composed of the local status
buffer and the shared value buffer. 3 The status buffer stores either 1 or 0, representing presence
or absence of valid data in the value buffer.
A CFSM input assignment is the set of values stored in the input buffers for a CFSM. It
is equivalent to the circuit input assignment given in Section 2.1.1. A CFSM input assignment
may be complete or partial. A captured input assignment corresponds to the statuses and values
that are actually read from the buffers when a CFSM is run.
3.3 CFSM Computation
A CFSM computation is called a CFSM execution or CFSM run. When a CFSM executes, it
reads its inputs, makes its computation, writes its outputs, and resets (consumes) its inputs.
Input reading: A CFSM atomically reads and resets the status buffers: it simultaneously
reads all status buffers and sets them to 0, ready for the arrival of new inputs. 4
It subsequently reads the values of the present inputs. This determines the captured input
assignment.
Computation: The CFSM uses the captured input assignment to make its computation: it
computes its outputs and next states based on the values given in its state transition table. The
computation is done synchronously, which means that the CFSM reacts precisely to the captured
input assignment, regardless of whether the inputs change while the CFSM is computing.
Output writing: For each output, a CFSM writes the value buffer and subsequently atomically
sets the status buffers for each associated CFSM input. 5
A CFSM-event consists of an output emitting its data and the corresponding input status
buffers being set to 1.

3 Note that in [3], the word event is used both for the status alone and for the status/value pair.
4 In POLIS, a CFSM may have an empty execution, which means that it does not react to its current inputs.
In this case, the current inputs are saved, and any inputs that are received while the CFSM is determining its
empty reaction are added to the input assignment, which is restored and thus read at the next run. We do not
use this feature here.
5 Atomic reads and writes are more expensive, since they require an implementation that guarantees that these
actions can happen simultaneously. The decision was made in POLIS to make status buffer reading and writing
atomic, and not value-buffer reading and writing, because atomic reading and writing of short bit strings can
be implemented efficiently, and because this guarantees certain desirable behavioral properties in the system.

3.4 CFSM Network Computation

A network computation is called a network execution or network run and corresponds to several
CFSM executions.
Each CFSM network has an associated scheduler (or schedulers). The scheduler continuously reads the
current input assignments, determines which CFSMs are runnable, and chooses the order in
which to run them. 6 A CFSM is runnable if it has at least one input status buffer set to 1. A
CFSM is run by the scheduler sometime after it becomes runnable.
Typically, an input assignment is given to the network, and the scheduler runs the CFSMs
according to its schedule until there are no further changes in the communication buffers. This
is called a complete network execution.
Time effectively passes when control is returned to the scheduler, and thus instantaneous
communication between CFSM modules is not possible.
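To fix ideas, the following Python sketch is our own abstraction of this communication and scheduling model (it is not POLIS code, and all names are ours). It shows the 1-place buffers, the runnable test, and a simple scheduler loop; the run_cfsm callback stands for one synchronous reaction of a CFSM.

```python
class Net:
    """One net: a shared value buffer plus one status buffer per attached input."""
    def __init__(self, readers):
        self.value = None
        self.status = {r: 0 for r in readers}   # 1 = valid data present for that reader

    def write(self, value):
        self.value = value                      # value buffer first, ...
        for r in self.status:
            self.status[r] = 1                  # ... then the statuses (set atomically in POLIS)

def capture(cfsm, nets):
    """Atomically read-and-reset the statuses of cfsm's inputs, then read the values."""
    captured = {}
    for name, net in nets.items():
        if cfsm in net.status and net.status[cfsm] == 1:
            net.status[cfsm] = 0
            captured[name] = net.value
    return captured

def run_network(cfsms, nets, run_cfsm):
    """Run CFSMs until no buffer changes remain (a complete network execution)."""
    while True:
        runnable = [c for c in cfsms
                    if any(net.status.get(c) == 1 for net in nets.values())]
        if not runnable:
            return
        cfsm = runnable[0]                      # any scheduling policy may be plugged in here
        run_cfsm(cfsm, capture(cfsm, nets))     # one synchronous reaction; may call Net.write
```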
4 Implementing Constructive Circuits in CFSM Networks

In this section, we explain our realization of the synchronous behavior of a circuit on a CFSM
network. To facilitate the exposition, we restrict ourselves to the extreme case of one CFSM per
gate. More realistic levels of granularity will be handled in Section 5.
In Section 2.4, we presented an algorithm to compute the behavior of a circuit for a given
circuit input assignment. The essential ingredients were a set TODO of facts to propagate, a
set DONE of established facts, and a predecessor counter for each variable. The basic idea of
the CFSM network implementation presented here is to distribute a similar algorithm over a
network of CFSMs, associating a CFSM with each circuit gate (equation).
We start by studying the reaction to a single input assignment and then present various ways
of chaining reactions to handle circuit input assignment sequences, to obtain the cyclic behavior
characteristic of synchronous systems.

4.1 Fact Propagation in a CFSM Network

We implement each gate as a CFSM that reads and writes facts, which are encoded in POLIS
CFSM-events sent by one gate to its fanouts. The arrival of a fact at a gate makes the gate
runnable, and, when the gate CFSM is run, if an output fact is provable from the facts received so far, the
gate CFSM outputs it. Fact propagation between gates is directly performed by the underlying
POLIS scheduling and CFSM-event broadcasting mechanisms. A POLIS execution schedule is
thus precisely a proof (fact propagation) ordering.
Facts arrive sequentially at a gate CFSM. Therefore, a combinational circuit gate must be
implemented by a sequential CFSM that remembers which facts it has received so far. The sequential
state of a gate CFSM encodes the predecessor count of the interpretation algorithm
of Section 2.4.
4.2 The Basic Gate CFSM
For ease of exposition, we write the gate CFSMs in Esterel. This makes the gate specification
very flexible, which will be useful in the next sections. No preliminary knowledge of Esterel is
required.
To handle our running example C1, it suffices to describe the AndNot gate C = A ∧ ¬B.
Other gates are similar. The Esterel program for AndNot has the following interface:

  module AndNot :
  input A : boolean, B : boolean;
  output C : boolean;

Here, A, B, and C are Esterel signals of type boolean, the values of which are called true and
false. Esterel signals are just like POLIS buffers, with some additional notation. An event of a
boolean-valued signal such as A has two components: a binary presence status component, also
written A, which can take the values present and absent, and a value component of type boolean,
written ?A. We choose to encode the fact A = 1 by A being present with value true (resp.
A = 0 by A present with value false) 7 . Notice that we use two pieces of information, the status and the value, to represent a
fact, i.e. the stable value of a wire. A present status component indicates stability, i.e. that a
fact has been propagated to this point, and the value component represents the Boolean value
of the fact.
Like a POLIS captured input assignment, an Esterel input assignment defines the presence
status of each input signal and the value of each present signal. For instance, for AndNot,
A(true).B(false) is an Esterel input assignment in which A is present with value true and B
is present with value false, encoding the facts A = 1 and B = 0; A(false) is an input
assignment where A is present with value false and B is absent, encoding the fact A = 0.
Like a CFSM, an Esterel program repeatedly reacts to an externally provided input assignment
by generating an output assignment. The processing of an input assignment is also called
a reaction or an instant. In POLIS, a run of an Esterel CFSM triggers exactly one reaction of
the Esterel program, with the same input assignment.
Unlike in POLIS, communication in Esterel is instantaneous: a signal emitted by a statement
is instantaneously received by all the statements that listen to it. Similarly, control propagation
is instantaneous; for example, in a sequence "p; q", q immediately starts when p terminates. The
only statements that break the flow of control are explicit delays such as "await S", which waits
for the next occurrence of a signal S.
Finally, in Esterel, signal presence status is not memorized from reaction to reaction, but
value is: the value of the Esterel expression ?A of A in a reaction where A is absent is the one it
had in the previous reaction. Notice that the value of a signal may change only when the signal
is present.

6 In POLIS, the scheduler is automatically synthesized with parameters, such as the type of scheduling algorithm,
given by the user.
Our first attempt to write the Esterel body of AndNot is:

  [
    await A;
    if not ?A then emit C(false) end if
  ||
    await B;
    if ?B then emit C(false) end if
  ];
  if (?A and not ?B) then emit C(true) end if

The program reads as follows. First, we start two parallel threads. The first thread waits for
the presence of A, and the second thread waits for the presence of B. The first input assignment
can have A present, B present, or both (an empty assignment with neither A nor B present would
leave the program in the same state; such an assignment is permitted in Esterel but will never
be generated by the POLIS scheduler). If A is absent, the first thread continues waiting. If A
is present, the first thread immediately checks A's value ?A and immediately outputs C(false)
if ?A is false, thus mimicking the (l-and) deduction rule; the thread terminates immediately in
either case. The second thread behaves symmetrically but checks for the truth of ?B to emit
C(false). If both A and B are present, the threads evolve simultaneously.
The Esterel parallel construct '||' terminates immediately when both branches have terminated.
Therefore, the above parallel statement terminates exactly when both A and B have been
received, either simultaneously or in successive input assignments. In that instant, C(true) is
emitted if the possibly memorized values ?A and ?B are respectively true and false, mimicking
the (b-and) deduction rule with negated second argument.
4.2.1 Avoiding Double Output
Our gate CFSM almost works, but not quite, since C(false) can be emitted twice (possibly at
different instants) if ?A is false and ?B is true. The gate should output C only once. To correct
this problem, we use an auxiliary Boolean signal Caux:

7 Other equivalent encodings can be considered. One can for example use a pair of pure signals for each
variable, one for presence and one for value. The encoding we use makes a clear difference between availability
and value.

Figure 2: Partial state transition graph for module AndNot

  signal Caux : combine boolean with and in
    [
      await A;
      if not ?A then emit Caux(false) end if
    ||
      await B;
      if ?B then emit Caux(false) end if
    ];
    if (?A and not ?B) then emit Caux(true) end if
  ||
    await Caux;
    emit C(?Caux)
  end signal

The first branch of the outermost parallel behaves as before but emits Caux instead of C. The
second branch waits for Caux to emit C with the same value, and immediately terminates. If Caux
is emitted twice in succession by the first branch, the second emission is simply unused since the
"await Caux" statement has already terminated. The "combine boolean with and" declaration
smoothly handles simultaneous double emission, also called collision. For this example, collision
occurs if A(false) and B(true) occur simultaneously, in which case both "emit Caux(false)"
statements are simultaneously executed. The combine declaration specifies that the result value
?Caux is the conjunction of the separately emitted values. Here, we could as well use disjunction,
for only false values will be combined.
4.2.2 The Gate CFSM State Graph
The gate CFSM state transition graph (STG) is partially shown in Figure 2. The transitions are
shown for the cases in which A is received before B; the other cases (B arriving first or A and B
arriving simultaneously) are similar and not pictured. This partial STG is shown to help visualize
the sequential state traversal in a familiar syntax, but it is not a practical input mechanism for
reactive modules compared to the Esterel language. For example, a module that waits for n
signals concurrently will have 2^n states, while the Esterel description has size n. Note also that
the Caux signal is shown in the output list for visualization purposes; it is an internal signal that
is not seen by any other module.
4.2.3 Gate CFSM Execution Example
To become familiar with the Esterel semantics, let us run the AndNot program on two different
input assignment sequences. We start in state S0, where we are waiting for the inputs A and B
and internally for Caux; the active await statements are marked with a "% active" comment:

  signal Caux : combine boolean with and in
    [
      await A;            % active
      if not ?A then emit Caux(false) end if
    ||
      await B;            % active
      if ?B then emit Caux(false) end if
    ];
    if (?A and not ?B) then emit Caux(true) end if
  ||
    await Caux;           % active
    emit C(?Caux)
  end signal

Assume the first gate input assignment is A(true) and B absent. Then "await A" terminates,
and we execute the test "not ?A"; since the test fails, the first parallel branch terminates
without emitting Caux. We then reach state S1, in which we continue waiting for B and Caux:

  signal Caux : combine boolean with and in
    [
      await A;
      if not ?A then emit Caux(false) end if
    ||
      await B;            % active
      if ?B then emit Caux(false) end if
    ];
    if (?A and not ?B) then emit Caux(true) end if
  ||
    await Caux;           % active
    emit C(?Caux)
  end signal

If we now input B(false), we execute the ?B test, which also fails. Since the second parallel
branch terminates, the parallel statement terminates immediately; we execute the "?A and not ?B"
test, which succeeds. We emit Caux(true), which makes the "await Caux" statement instantaneously
terminate; the output C(true) is emitted, since ?Caux is true. We reach the dead state
S_d where no signal is awaited.
Assume now that the first gate input assignment is A(false) and B absent. Then, starting
from S0, we execute the first test, which succeeds and emits Caux(false). The "await Caux"
statement immediately terminates and C(false) is emitted. We continue waiting for B, in the
following state:

  signal Caux : combine boolean with and in
    [
      await A;
      if not ?A then emit Caux(false) end if
    ||
      await B;            % active
      if ?B then emit Caux(false) end if
    ];
    if (?A and not ?B) then emit Caux(true) end if
  ||
    await Caux;
    emit C(?Caux)
  end signal

Then, when B occurs in a later input assignment, the "await B" statement terminates and the
program reaches the dead state S_d. If ?B is true, the emission of Caux(false) is performed but
unused. This last step of waiting for B mimics the last cleanup step of the propagation algorithm
of Section 2.4. It will be essential for chaining cycles in Section 4.4.
If A and B occur together in the first input assignment, then AndNot immediately emits C
with the appropriate value and transitions directly from state S0 to the dead state S_d.
Notice that the number of predecessors waited for in the algorithm of Section 2.4 is exactly
the number of still-active statements among "await A" and "await B".
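For readers who prefer a host-language view, here is a Python sketch of the sequential behavior of this basic AndNot gate. It is our own illustration, not code generated by POLIS or the Esterel compiler; it remembers which inputs have been received and emits C at most once.

```python
class AndNotGate:
    """Gate CFSM for C = A and not B, driven by facts arriving one at a time."""
    def __init__(self, emit):
        self.emit = emit            # callback: emit('C', value)
        self.a = self.b = None      # received values (None = not yet received)
        self.done = False           # C already emitted (dead state S_d once both arrive)

    def receive(self, signal, value):
        if signal == 'A':
            self.a = value
        elif signal == 'B':
            self.b = value
        # mimic (l-and)/(r-and): a falsifying input determines C immediately
        if not self.done and (self.a is False or self.b is True):
            self.emit('C', False)
            self.done = True
        # mimic (b-and): both inputs known and C not yet determined
        if not self.done and self.a is True and self.b is False:
            self.emit('C', True)
            self.done = True

gate = AndNotGate(lambda s, v: print(f"emit {s}({v})"))
gate.receive('A', True)    # nothing yet: waiting for B
gate.receive('B', False)   # emit C(True)
```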
Figure 3: CFSM network for circuit C1
4.3 Performing a Single Reaction on a Network of Gates
Given a circuit C, the CFSM network for C is obtained by creating an input buffer for each
input signal in C, an output buffer for each output signal, and a gate CFSM for each equation
in C. Gate CFSM outputs are broadcast to the gate CFSMs that use them, as specified by the
circuit equations.
To run the network for a given circuit input assignment i, it suffices to put the input values
defined by i in each of the network input buffers. Then, the gate CFSMs directly connected
to inputs become runnable. As soon as a gate has computed its result, it puts it in its output
buffer, the result's value is automatically transferred to all fanout CFSM input buffers by the
network, and these CFSMs become runnable.

4.3.1 An Execution Example

Consider the network for C1, pictured in Figure 3, where the CFSMs for X and Y are called
CX and CY. The rectangular buffers are the 1-place buffers used to communicate CFSM-events
between modules. Note that there are two information storage mechanisms at work during the
execution of this circuit:
1. The gate CFSMs, as implemented by the Esterel modules, internally store which signals
they have received, and thus which they are still waiting for, using their implicit states.
2. The CFSM network, as implemented in POLIS, stores a copy of each CFSM-event, one for
each fanout of that event, using the 1-place buffers.
Consider the input assignment I = 0, J = 1. We first put false in I's buffer and true in
J's buffer. The CFSMs CX and CY become runnable. Assume CX is run first. Then it captures the
partial input assignment A(false) and B absent, which encodes I = 0. The CX CFSM outputs
C(false), which is the encoding for X = 0, and goes to state S1. The false event is made
visible at CY's B input buffer after some time.
• Assume first that CY is run before the arrival of CX's output. Then CY captures the partial
input assignment A(true) and B absent, which encodes the fact J = 1. The CY CFSM
emits no output and continues waiting for its B input, in state S2. When X's false
value is written in CY's B input buffer, CY is made runnable and runs with captured input
assignment A absent and B(false); it emits C(true), which encodes Y = 1, and goes to
the dead state.
• Assume instead that CX's false output is written in CY's input buffer B before CY is run.
Then, when CY is later run, it captures the complete input assignment A(true).B(false),
which encodes the facts J = 1 and X = 0. It emits C(true) and goes directly to the dead
state.
Once CY has emitted its output C(true), the true value is written in CX's input buffer B, and
CX is made runnable again. Then, CX is run with input assignment B(true) and A absent, which
encodes Y = 1, and goes to the dead state.
4.3.2 Correctness of the CFSM Implementation
The CFSM network computes a proof in the same way as the interpretation algorithm of Section
2.4, but with dynamic and concurrent scheduling of fact propagation. Building a new fact
is equivalent to generating a CFSM-event. Propagating a fact is equivalent to broadcasting the
CFSM-event to the fanouts and running the fanout CFSMs, which is exactly what the network
automatically provides.
The following theorem summarizes the results:

Theorem 3 Let C be a circuit. Let n be the number of output or local variables (fanouts), and
let f be the number of variable occurrences in the right-hand sides of C's equations (fanins). Let
i be a circuit input assignment. For any run of the network associated with C initialized with i,
the following holds:
1. The number of created CFSM-events is bounded by n, and the number of CFSM runs is
bounded by f. No buffer overwrite can occur.
2. If, in some complete network execution sequence, exactly n CFSM-events have been created,
then the implemented circuit is completely constructive w.r.t. i, and the events generated by the
output gate CFSMs are the encodings of the output values of C w.r.t. i. All complete execution
sequences give the same result independent of the schedule, and all gate CFSMs terminate
in the dead state once all CFSM-events have been processed.
3. If, for some complete run, fewer than n CFSM-events have been created, then this is true
for all runs and C is not completely constructive w.r.t. i.

Output constructive circuits can be handled by a slight modification of the result, but losing
the nice fact that all gate CFSMs terminate in the dead state, which is useful when chaining
reactions, as we demonstrate in the next section.
4.4 Chaining Reactions
A synchronous circuit or program is meant to be used sequentially, the user or RTOS providing
a sequence of input assignments and reading a sequence of output assignments. In our POLIS
implementation, the user alternates writing circuit input assignments into the network input buffers
and reading the computed circuit output assignments from the network output buffers. Since POLIS
uses 1-place buffers for communication, we must make sure that no buffer overwrite occurs in the
network. In particular, we cannot let the user overwrite an input buffer until its value has been
completely processed by the gates connected to it. Here are four possible user-level protocols:
• Wait for a given amount of time. This is the technique used for single-clocked electrical
circuits. Since the number of operations to be performed is uniformly bounded, if
the underlying machinery (CPUs, network, etc.) has predictable performance, we are
guaranteed that the reaction is complete after a maximal (predictable) time and that no
buffer overwriting occurs. This solution is often used in cycle-based control systems implemented
in software and in Programmable Logic Controllers (PLCs). This protocol can
be realized in our implementation with the addition of performance estimation, in order
to compute the frequency with which new inputs can be fed to the synchronous circuit.
• Compute and return a termination signal. If the circuit is completely constructive w.r.t.
the input, we know that the computation has finished when all the gate CFSMs have read
all their inputs, i.e. when the network has processed a given number of CFSM-events.
We can either modify the scheduler to have it report completion to the user or build an
explicit termination signal by having each gate output a separate CFSM-event when it has
processed all inputs. These CFSM-events are gathered by an auxiliary gate that generates
a termination event for the user when all its inputs have arrived. These centralized solutions
are not in the spirit of distributed systems.
• Implement a local flow-control protocol at each gate CFSM. This is a much more natural
solution in a distributed setting and it makes it possible to pipeline the execution: for each
input, the user may enter a new value as soon as the flow-control protocol says so, without
waiting for the reaction to be complete. The protocol must ensure that an input for a
conceptual synchronous cycle never interferes with values for other cycles.
• Queue input events: this solution is used in [9, 8]. It implies that the user can always write
new inputs and is never blocked. In our implementation, the same flow-control problem is
simply pushed inside the network, since CFSMs do not communicate using queues.
We now present a flow-control protocol that supports pipelining. The reactions remain globally
well-ordered as required by the synchronous model: the n-th value of input I is processed in the
same conceptual synchronous cycle as the n-th value of input J; however, because of pipelining,
internal network CFSM scheduling and CFSM-event generation can occur in intricate orderings.
To make the gates reusable, it suffices to embed their bodies into an Esterel "loop ... end loop"
infinite loop. Then, instead of going to the dead state, a gate CFSM returns to its initial state.
This is why it is much easier to handle complete proofs. To deal with more general output
proofs, we would have to add a complicated gate reset mechanism, while reset is automatically performed
by complete proofs.
Thanks to the flexibility of Esterel code, the protocol only requires a slight modification of
our basic gate code, and the addition of a new module. The corresponding CFSM network is
shown in Figure 4.

Figure 4: Circuit C1
Consider an output X of a CFSM M, read for example by two other CFSMs N and P. With
X and N (resp. P) we associate a signal X-Free-N (resp. X-Free-P) that is written by N (resp.
P). With X and M we associate a signal X-Free-M, read by M and written by an auxiliary module
X-CFSM which consumes X-Free-N and X-Free-P and writes X-Free-M when both X-Free-N and
X-Free-P have received a value. The buffers in Figure 4 for each signal are those used by POLIS;
the actual information determining when the signal X is free to be written by M is contained in
the implicit states of X-CFSM. The new module is written as follows:

  module X-CFSM:
  input X-Free-N, X-Free-P;
  output X-Free-M;
  loop
    [ await X-Free-N || await X-Free-P ];
    emit X-Free-M
  end loop
  end module

Similarly, for a network input I broadcast to N and P, we generate a network output buffer
I-Free filled by the auxiliary CFSM reading I-Free-N and I-Free-P, and for any network
output O a network input buffer O-Free filled by the user when it is ready to accept a new value
of O.
We require M to write its X output only when X-Free-M holds 0, then consuming that value.
We require N (resp. P) to write 0 in X-Free-N (resp. X-Free-P) when it reads its local copy of
the input X. The AndNot CFSM is modified as follows:

  module AndNot :
  input A : boolean, B : boolean;
  output C : boolean;
  output A-Free, B-Free;
  input C-Free;
  loop
    signal Caux : combine boolean with and in
      [
        await A;
        emit A-Free;
        if not ?A then emit Caux(false) end if
      ||
        await B;
        emit B-Free;
        if ?B then emit Caux(false) end if
      ];
      if (?A and not ?B) then emit Caux(true) end if
    ||
      [ await Caux || await C-Free ];
      emit C(?Caux)
    end signal
  end loop
  end module

The output C is emitted only when the last of Caux and C-Free has been received.
When the gate CFSM is instantiated at a node M, the A-Free, B-Free, and C-Free buffers
must be appropriately renamed A-Free-M, B-Free-M, and C-Free-M, to avoid name clashes.
The flow-control mechanism acts in two ways. First, it prevents buffer overwriting. Second,
it makes pipelining possible. Given a circuit input assignment i_n at cycle n, the new value of a
circuit input I for cycle n + 1 can be written in I's network input buffer as soon as I-Free is
full. Therefore, it is not necessary to wait for the global end of a cycle to locally start a new one.
We have a last technical problem to solve. Assume that an AndNot gate CFSM starts circuit
cycle n. Assume that the gate CFSM receives an A input event, say A(false) with B absent.
The gate sends back A-Free. From then on, the gate can receive two inputs:
• The B input event that holds B's value in cycle n. This input should be processed normally
since the gate CFSM is currently processing cycle n.
• The out-of-order A input event that holds A's value for cycle n + 1. Processing this input
should be deferred until B has been processed.
In the current POLIS network model, a CFSM is made runnable as soon as it receives an input
event. Therefore, the gate can be made runnable with input A for cycle n + 1 while it is still
processing cycle n. At this point, the gate should either internally memorize A's value or rewrite
it in the A buffer, leaving in both cases the A-Free flow-control buffer empty until it has finished
cycle n. Both solutions are expensive and somewhat ugly.
We suggest a slight modification to the POLIS scheduling policy. A CFSM should tell the
scheduler which input buffers it is currently interested in, and the scheduler should not make the
CFSM runnable if none of these buffers holds an event. When the CFSM is run, its captured
input assignment should only contain the events in the buffers the CFSM is explicitly waiting for,
leaving the rest in their input buffers. In the above example, the gate CFSM tells the scheduler
it is only waiting for B. If the new value of A comes in, the CFSM is not made runnable. When B
occurs, the gate is made runnable, and it will run with input B only. Once the gate has processed
B, it tells the scheduler that it is now waiting for both A and B. Since A is already there, the gate
can be immediately made runnable again.
The final version of the gate CFSM involves auxiliary Wait signals sent to the scheduler
to implement this mechanism:
  module AndNot :
  input A : boolean, B : boolean;
  output C : boolean;
  output A-Free, B-Free;
  output A-Wait, B-Wait;
  input C-Free;
  output C-Free-Wait;
  loop
    signal Caux : combine boolean with and in
      [
        abort
          sustain A-Wait
        when A;
        emit A-Free;
        if not ?A then emit Caux(false) end if
      ||
        abort
          sustain B-Wait
        when B;
        emit B-Free;
        if ?B then emit Caux(false) end if
      ];
      if (?A and not ?B) then emit Caux(true) end if
    ||
      await Caux;
      abort
        sustain C-Free-Wait
      when C-Free;
      emit C(?Caux)
    end signal
  end loop
  end module

The "await A" statement has become "abort sustain A-Wait when A". The "sustain A-Wait"
statement emits A-Wait in each clock cycle. The "abort p when A" statement aborts its body p right away
when A occurs, not executing p at abortion time. Therefore, A-Wait is emitted until A is received,
that instant excluded.
5 Mixed Synchronous/Asynchronous Implementation
We now have two very different levels of granularity for implementing an Esterel program in
POLIS: compiling the program into a single CFSM node or building a separate CFSM for each
gate of the program circuit. The first does not support distribution, while the second is clearly too
inefficient: the associated overhead is unacceptable for large programs, since it involves scheduling
each individual gate CFSM multiple times.
We now briefly explain how we can deal with many other implementation choices with different
levels of granularity, using the compositional and incremental character of the constructive
semantics. When doing so, we retain the full synchronous semantics of the program, but we
trade off synchrony and asynchrony in the implementation.
The idea as one moves to a larger granularity implementation is to partition the set of gates
into gate clusters G1, ..., Gp. Each cluster Gk groups its gates into a single CFSM, the clusters
being connected by the POLIS network as before. The partition can be arbitrary, and chosen
to match any locality or performance constraints. Facts are processed both synchronously and
asynchronously, but again their proofs are derived from the synchronous constructive semantics.
In particular, synchronous fact processing is done within a cluster using the algorithm of Section
2.4, in a single CFSM and in one computation of that CFSM; asynchronous fact processing
is done across the network and thus between CFSMs. Some facts will be both synchronously
and asynchronously processed, e.g. an output from gate g1
that is an input to another gate g1' in the same cluster G1
and to g2
in another cluster G2.
What makes this possible is the ability of our centralized and distributed algorithms to deal
with partial deduction: given a partial input assignment i, both algorithms generate all the facts
that can be deduced from i. If a new fact is added to i, the algorithms incrementally deduce its
consequences. Therefore, it does not matter whether facts are handled synchronously in a gate
cluster or asynchronously in the POLIS cluster network.
Consider for example the following circuit C2,
obtained by adding an output Z to C1.
Consider first the clusters G1 = {X, Y} and G2 = {Z}. Assume that we receive the fact I = 0. G1
deduces X = 0 and outputs that fact to G2, which can make a local transition to
reach the state S1
where it waits only for Y. G1
also internally remembers in its local state that
Y has lost a predecessor. Thus, X = 0 is synchronously propagated to Y in the same cluster,
and asynchronously propagated to Z in the other cluster through another call to a CFSM. If we
now receive J = 1, G1 deduces Y = 1 and sends that fact to G2, which can now output the value of Z.
With the same input sequence, consider the clusters G1 = {X} and G2 = {Y, Z}. When receiving
I = 0, G1 instantaneously generates the fact X = 0, determined
synchronously. The fact is asynchronously propagated to G2 by the network, and G2's
CFSM transitions to a state where it waits only for J. When J = 1 occurs, the CFSM outputs
Y = 1 and the corresponding value of Z; that fact is propagated to G1's
CFSM, which goes back to its initial state.
Optimal solutions to the problem of determining a set of clusters are beyond the scope of
this paper. A number of clustering algorithms exist in the literature, and the design may be
entered in a partitioned fashion that leads to a natural clustering as well. In our case, clustering
according to the source code module structure is an obvious candidate for a clustering
heuristic, as is clustering according to the frequency of use of signals (like clocks in Lustre).
Here, we simply point out that our algorithms and the semantics behind them permit
any level of granularity: from individual gates implemented as separate CFSMs, to an entire
synchronous program implemented as a single CFSM. Thus, the tradeoff between synchronous
and asynchronous implementation of a synchronous program can be fully explored.
6 Conclusions and Future Work
We have described a method for implementing synchronous Esterel programs or circuits on
globally asynchronous, locally synchronous (GALS) POLIS networks. The method is based on
fact propagation algorithms that directly implement the constructive semantics of synchronous
programs. We have developed flow-control techniques that automatically ensure that no POLIS
buffer can be overwritten and that make pipelining possible.
Initially, we associated a POLIS CFSM with each circuit gate, which is unrealistic in
practice. However, our method is fully compositional, and fact propagation can be performed
either synchronously in a node or asynchronously between nodes. This makes it possible to
cluster gates into bigger synchronous nodes and to explore the tradeoff between synchronous
and asynchronous implementation.
For simplicity, we have only dealt with the pure fragment of Esterel, where signals carry
no value. Extension to full value-passing Esterel constructs raises no particular difficulty. A
complete implementation is currently being developed.
--R
Domains and Lambda-Calculi
The Constructive Semantics of Esterel.
The Foundations of Esterel.
The Esterel Synchronous Programming Language: Design
asynchronous Circuits.
Distributing automata for asynchronous networks of processors.
Distributing reactive systems.
Programming Real-Time Applications with Signal
Synchronous Programming of Reactive Systems.
The Synchronous DataAEow Programming Language Lustre.
A Visual Approach to Complex Systems.
Communicating Sequential Processes.
The Semantics of a Simple Language for Parallel Programming.
LCF as a programming language.
Formal Analysis of Cyclic Circuits.
Constructive Analysis of Cyclic circuits.
Analyse Constructive et Optimisation S
--TR
Communicating sequential processes
Statecharts: A visual formalism for complex systems
The ESTEREL synchronous programming language
Formal verification of embedded systems based on CFSM networks
Hardware-software co-design of embedded systems
Domains and lambda-calculi
The foundations of Esterel
Synchronous Programming of Reactive Systems
Constructive Analysis of Cyclic Circuits
Formal analysis of synchronous circuits
--CTR
Gerald Lttgen , Michael Mendler, The intuitionism behind Statecharts steps, ACM Transactions on Computational Logic (TOCL), v.3 n.1, p.1-41, January 2002
Mohammad Reza Mousavi , Paul Le Guernic , Jean-Pierre Talpin , Sandeep Kumar Shukla , Twan Basten, Modeling and Validating Globally Asynchronous Design in Synchronous Frameworks, Proceedings of the conference on Design, automation and test in Europe, p.10384, February 16-20, 2004
Stephen A. Edwards , Olivier Tardieu, SHIM: a deterministic model for heterogeneous embedded systems, Proceedings of the 5th ACM international conference on Embedded software, September 18-22, 2005, Jersey City, NJ, USA
Stephen A. Edwards , Edward A. Lee, The semantics and execution of a synchronous block-diagram language, Science of Computer Programming, v.48 n.1, p.21-42, July | embedded systems;finite state machines;synchronous programming;asynchronous networks |
363459 | Wavelet and Fourier Methods for Solving the Sideways Heat Equation. | We consider an inverse heat conduction problem, the sideways heat equation, which is a model of a problem, where one wants to determine the temperature on both sides of a thick wall, but where one side is inaccessible to measurements. Mathematically it is formulated as a Cauchy problem for the heat equation in a quarter plane, with data given along the line x=1, where the solution is wanted for $0 \leq x < 1$.The problem is ill-posed, in the sense that the solution (if it exists) does not depend continuously on the data. We consider stabilizations based on replacing the time derivative in the heat equation by wavelet-based approximations or a Fourier-based approximation. The resulting problem is an initial value problem for an ordinary differential equation, which can be solved by standard numerical methods, e.g., a Runge--Kutta method.We discuss the numerical implementation of Fourier and wavelet methods for solving the sideways heat equation. Theory predicts that the Fourier method and a method based on Meyer wavelets will give equally good results. Our numerical experiments indicate that also a method based on Daubechies wavelets gives comparable accuracy. As test problems we take model equations with constant and variable coefficients. We also solve a problem from an industrial application with actual measured data. | Introduction
In many industrial applications one wishes to determine the temperature
on the surface of a body, where the surface itself is inaccessible for measurements
[1]. It may also be the case that locating a measurement device
(e.g. a thermocouple) on the surface would disturb the measurements, so
that an incorrect temperature is recorded. In such cases one is restricted
to internal measurements, and from these one wants to compute the surface
temperature.
In a one-dimensional setting, assuming that the body is large, this situation
can be modeled as the following ill-posed problem for the heat equation
in the quarter plane: Determine the temperature u(x, t) for 0 ≤ x < 1 from
temperature measurements g(·) := u(1, ·), when u(x, t) satisfies

    u_xx = u_t,        x ≥ 0,  t ≥ 0,
    u(x, 0) = 0,       x ≥ 0,
    u(1, t) = g(t),    t ≥ 0,
    u(x, ·) bounded.                                   (1.1)

Of course, since g is assumed to be measured, there will be measurement
errors, and we would actually have as data some function g_m ∈ L²(R), for
which

    ‖g_m − g‖_{L²} ≤ ε,                                (1.2)

where the constant ε > 0 represents a bound on the measurement error. For
x ≥ 1 we can solve a well-posed quarter plane problem using g_m as data.
For 0 ≤ x < 1 we have the sideways heat equation 1

    u_xx = u_t,        x ≥ 0,  t ≥ 0,
    u(x, 0) = 0,       x ≥ 0,
    u(1, t) = g_m(t),  t ≥ 0,
    u(x, ·) bounded.                                   (1.3)
Note that, although we seek to recover u only for 0 ≤ x < 1, the problem
specification includes the heat equation for x > 1 together with the
boundedness at infinity. Since we can obtain u for x > 1, u_x(1, ·) is also
determined. Thus we can consider (1.3) as a Cauchy problem with appropriate
Cauchy data [u, u_x] given on the line x = 1.
Although in this paper we mostly discuss the heat equation in its simplest
form u_xx = u_t, our interest is in numerical methods that can be used for
more general problems, e.g. equations with non-constant coefficients,
Also referred to as the Inverse Heat Conduction Problem (IHCP) [1].
or nonlinear problems,
which occur in applications. For such problems one cannot use methods
based on reformulating the problem as an integral equation of the first kind,
since the kernel function k(t) is explicitly known
only in the constant coefficient case. Instead, we propose to keep the problem
in the differential equation form and solve it essentially as an initial value
problem in the space variable ("space-marching", [1, 5, 6, 11, 12, 13, 22, 23]).
In [11, 12] we have shown that it is possible to implement space-marching
methods efficiently, if the time-derivative is approximated by a bounded op-
erator. In this way we obtain a well-posed initial-boundary value problem
for an ordinary differential equation (ODE). The initial value problem can
be solved by a standard ODE solver, see Section 2.1.
In this paper, we study three different methods for approximating the
time derivative and their numerical implementation. First, the time derivative
is approximated by a matrix representing differentiation of trigonometric
interpolants, see e. g. [14, Sec. 1.4]. In Section 3 we first give some error
estimates (for the non-discrete case, see also [24]), and then we discuss the
numerical implementation of this method.
Meyer wavelets have the property that their Fourier transform has compact
support. This means that they can be used to prevent high frequency
noise from destroying the solution 2 . In a previous paper [25] we studied
the approximation of the time derivative in (1.3) using Meyer wavelets, and
gave almost optimal error estimates. Here we discuss the numerical implementation
of this method, and, in particular, we show how to compute the
representation of the time derivative in wavelet basis.
Daubechies wavelets themselves have compact support and, consequently,
they cannot have a compactly supported Fourier transform. However, the
Fourier transform decays so fast that in practice they can be used for solving
the sideways heat equation in much the same way as Meyer wavelets. We
discuss this in Section 4.5.
We emphasize that although all theoretical results are for the model
problem with constant coefficients, we only deal with methods that can
be used for the more general problems (1.4)-(1.5). Also, the quarter plane
assumption is made mainly for deriving theoretical results; it is not essential
in the computations (provided that u_x(1, ·) can be obtained, either from
measurements or by assuming symmetry, cf. Section 5.2). A constructed
2 It is very common in ill-posed problems that the ill-posedness manifests itself in the
blow-up of high frequency perturbations of the data.
variable coefficient example is given in Section 5. There we also present a
problem from an industrial application, with actual measured data.
2 Ill-posedness and Stabilization

The problem of solving the sideways heat equation is ill-posed in the sense
that the solution, if it exists, does not depend continuously on the data. The
ill-posedness can be seen by solving the problem in the Fourier domain. In
order to simplify the analysis we define all functions to be zero for t < 0.
Let

    ĝ(ξ) = (1/√(2π)) ∫_{−∞}^{∞} g(t) e^{−iξt} dt

be the Fourier transform of the exact data function 3 . The problem (1.1)
can now be formulated, in frequency space, as follows:

    û_xx(x, ξ) = iξ û(x, ξ),   x ≥ 0,  ξ ∈ R,
    û(1, ξ) = ĝ(ξ),            ξ ∈ R,
    û(x, ·) bounded.                                   (2.1)

The solution to this problem, in frequency space, is given by

    û(x, ξ) = e^{(1−x)√(iξ)} ĝ(ξ),                      (2.2)

where √(iξ) denotes the principal value of the square root,

    √(iξ) = √(|ξ|/2) (1 + i·sign(ξ)).                   (2.3)

In order to obtain this solution we have used the bound on the solution at
infinity. Since the real part of √(iξ) is positive and our solution û(x, ξ) is
assumed to be in L²(R), we see that the exact data function ĝ(ξ) must
decay rapidly as ξ → ∞.
Now, assume that the measured data function satisfies g_m = g + δ, where
δ ∈ L²(R) is a small measurement error. If we try to solve the
problem using g_m as data we get a solution

    v̂(x, ξ) = e^{(1−x)√(iξ)} (ĝ(ξ) + δ̂(ξ)).

Since we cannot expect the error δ̂(ξ) to have the same decay in frequency
as the exact data ĝ(ξ), the solution v̂(x, ξ) will not, in general, be in L²(R).
3 In this paper all Fourier transforms are with respect to the time variable.
Thus, if we try to solve the problem (1.3) numerically, high-frequency components
in the error δ are magnified and can destroy the solution. However,
if we impose an a priori bound on the solution at x = 0, and, in addition,
allow for some imprecision in the matching of the data, i.e. we consider the
problem

    u_xx = u_t,  x ≥ 0,   ‖u(1, ·) − g_m‖ ≤ ε,   ‖u(0, ·)‖ ≤ M,      (2.4)

then we have stability in the following sense: any two solutions of (2.4), u_1
and u_2, satisfy

    ‖u_1(x, ·) − u_2(x, ·)‖ ≤ 2 M^{1−x} ε^x.                          (2.5)

For this reason we call (2.4) the stabilized problem. The inequality (2.5)
is sharp, and therefore we cannot expect to find a numerical method for
approximating solutions of (2.4) that satisfies a better error estimate.
Different methods for approximating solutions of (2.4) exist. One difficulty
is that for (2.4) we do not have uniqueness, so in order to get a
procedure that can be implemented numerically, it is necessary to somehow
modify the problem. Often, the dependence on ε and M is included by
choosing the value of some parameter in the numerical procedure.
2.1 Stabilization by Approximating the Time Derivative
In this subsection we consider the "initial-value" problem
\[ \frac{\partial}{\partial x} \begin{pmatrix} u \\ u_x \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ \partial/\partial t & 0 \end{pmatrix} \begin{pmatrix} u \\ u_x \end{pmatrix}, \qquad 0 \le x < 1, \]
with initial-boundary values 4
\[ u(1,t) = g_m(t), \qquad u_x(1,t) = h_m(t), \qquad u(x,0) = 0. \]
The initial values for u_x can be obtained (in principle and numerically) by
solving the well-posed quarter plane problem with data u(1,t) = g_m(t) for
x ≥ 1.
4 For a discussion of the problem of setting numerical boundary values at
to [12].
We can write the solution of (2.6)-(2.8) formally as
\[ \begin{pmatrix} u \\ u_x \end{pmatrix}(x,\cdot) = e^{(x-1)B} \begin{pmatrix} g_m \\ h_m \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 1 \\ \partial/\partial t & 0 \end{pmatrix}. \]
Loosely speaking, since the operator B is unbounded, with unbounded eigenvalues
in the left half plane, the solution operator \(e^{(x-1)B}\) is unbounded, and
high frequency noise in the data can be blown up and destroy
the numerical solution. Even if the data are filtered [4, 23], so that high
frequency perturbations are removed, the problem is still ill-posed: rounding
errors introduced in the numerical solution will be magnified and will
make the accuracy of the numerical solution deteriorate as the initial value
problem is integrated.
In a series of papers [11, 12, 25], we have investigated methods for solving
numerically the sideways heat equation, where we have replaced the operator
@=@t by a bounded operator. Thus we discretized the problem in time, using
differences [11] or wavelets [25], so that we obtained an initial value problem
\[ \frac{d}{dx} \begin{pmatrix} U \\ U_x \end{pmatrix} = \begin{pmatrix} 0 & I \\ D & 0 \end{pmatrix} \begin{pmatrix} U \\ U_x \end{pmatrix}, \qquad U(1) = G_m, \quad U_x(1) = H_m, \]
where the matrix D is a discretization of the time derivative, U and U_x
are semi-discrete representations of the solution and its
derivative, and G_m and H_m are data vectors. Thus, (2.9) can be considered as a
method of lines.
The observation made in [11, 12] is that when D represents a discrete
approximation of the time derivative, then D is a bounded operator, and the
problem of solving (2.9) is a well-posed initial value problem for an ODE. In
[11, 25] error estimates are given for difference and wavelet approximations.
The coarseness of the discretization is chosen depending on some knowledge
about ffl and M (actually their ratio). For solving (2.9) numerically, we can
use a standard ODE solver. In [12] we show that, in the case of a finite
difference approximation, it is often sufficient to use an explicit method, e.g.
a Runge-Kutta code.
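Since (2.9) is an ordinary initial value problem in the space variable, it can be handed to a standard solver. The following Python sketch (our own illustration, not the paper's Matlab code) marches the semi-discrete system from x = 1 down to x = 0 with an explicit Runge-Kutta method; D, Gm and Hm stand for any bounded differentiation matrix and data vectors of the kind discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def march_in_space(D, Gm, Hm, rtol=1e-4):
    """Space marching for the semi-discrete sideways heat equation (2.9):
    d/dx [U; U_x] = [U_x; D U], integrated from x = 1 (data) down to x = 0."""
    n = len(Gm)

    def rhs(x, w):
        U, V = w[:n], w[n:]
        return np.concatenate([V, D @ U])

    sol = solve_ivp(rhs, (1.0, 0.0), np.concatenate([Gm, Hm]),
                    method="RK45", rtol=rtol, atol=1e-8)
    return sol.y[:n, -1]        # approximation of u(0, t) on the time grid
```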
In the rest of this paper, we will discuss error estimates and numerical
implementation of approximations of (2.4) by discretized problems of the
type (2.9), by a Fourier (spectral) method, and by Meyer and Daubechies
wavelets.
5 Cf. (2.2), and also [12], where a discretized equation is considered.
3 A Fourier Method
Here we consider how to stabilize the sideways heat equation by cutting off
high frequencies in the Fourier space. Error estimates are given in Section
3.1. Similar results can be found in [24]. In Section 3.2 we then argue that,
in order to be useful for the more general equations (1.4)-(1.5), the Fourier method
should be implemented as in (2.9).
We start with the family of problems in Fourier space,
\[ i\omega\, \hat u(x,\omega) = \hat u_{xx}(x,\omega), \quad x > 0, \qquad \hat u(1,\omega) = \hat g(\omega), \qquad \hat u(x,\cdot) \ \text{bounded}, \tag{3.1} \]
parameterized by ω. In Section 2 the solution was shown to be
\[ \hat u(x,\omega) = e^{(1-x)\sqrt{i\omega}}\, \hat g(\omega). \tag{3.2} \]
Since the principal value of \(\sqrt{i\omega}\) has a positive real part, small errors in high
frequency components can blow up and completely destroy the solution for
\(0 \le x < 1\). A natural way to stabilize the problem is to eliminate all high
frequencies from the solution and instead consider (3.1) only for \(|\omega| \le \omega_{\max}\).
Then we get a regularized solution,
\[ \hat v(x,\omega) = e^{(1-x)\sqrt{i\omega}}\, \hat g_m(\omega)\, \chi_{\max}(\omega), \tag{3.3} \]
where \(\chi_{\max}\) is the characteristic function of the interval \([-\omega_{\max}, \omega_{\max}]\). In
the following sections we will derive an error estimate for the approximate
solution (3.3) and discuss how to compute it numerically.
3.1
In this section we derive a bound on the difference between the solutions
(3.2) and (3.3). We assume that we have an a priori bound on the solution,
‖u(0,·)‖ ≤ M. The relation between any two regularized solutions (3.3) is
given by the following lemma.
Lemma 3.1 Suppose that we have two regularized solutions v 1 and v 2 defined
by (3.3) with data g 1 and ffl. If we select
then we get the error bound
Proof: From the Parseval relation we get
\Gamma-
je
6 je
Using
From Lemma 3.1 we see that the solution defined by (3.3) depends continuously
on the data. Next we will investigate the difference between the
solutions (3.2) and (3.3) with the same exact data g(t).
Lemma 3.2 Let u and v be the solutions (3.2) and (3.3) with the same
exact data g, and let - Suppose that ku(0; \Delta)k 6 M .
Then
Proof: As in Lemma 3.1 we start with the Parseval relation, and using
the fact that the solutions coincide for - 2 [\Gamma-
Z
je
Z
je
Now we use the bound ku(0; \Delta)k 6 M , and as before we have -
which leads to the error bound
Now we are ready to formulate the main result of this section:
Theorem 3.3 Suppose that u is given by (3.2) with exact data g and that v
is given by (3.3) with measured data g m . If we have a bound ku(0; \Delta)k 6 M ,
and the measured function g m satisfies we choose
then we get the error bound
Proof: Let v 1 be the solution defined by (3.3) with exact data g. Then
by using the triangle inequality and the two previous lemmas we get
From Theorem 3.3 we find that (3.3) is an approximation of the exact
solution, u(x, t). The approximation error depends continuously on the
measurement error and the error bound is optimal in the sense (2.5).
3.2 Numerical implementation of the Fourier-based method
Here we discuss how to compute the regularized solution (3.3) numerically.
Given a data vector Gm with measured samples from g m on a grid ft i g, a
simple method is to approximate the solution operator directly. Thus
we have the following algorithm:
1. \(\hat G_m := F G_m\),
2. \(\hat V_m(0,k) := \begin{cases} e^{\sqrt{i\omega_k}}\, \hat G_m(k), & |\omega_k| \le \omega_{\max}, \\ 0, & \text{otherwise}, \end{cases}\)
3. \(V(0,:) := F^H \hat V_m(0,:)\),
where F is the Fourier matrix. The product of F and a vector can be computed
using the Fast Fourier Transform (FFT) which leads to an efficient
way to compute the solution (3.3). When using the FFT algorithm we implicitly
assume that the vector Gm represents a periodic function. This is not
realistic in our application; and thus we need to modify the algorithm. We
refer to Appendix A for a discussion on how to make the problem 'periodic'.
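A minimal NumPy sketch of steps 1-3 above, assuming the data vector has already been periodized as described in Appendix A; the function name, the sampling-interval argument and the way the cut-off is passed are our own choices, not part of the paper.

```python
import numpy as np

def fourier_solution(g_m, x, dt, omega_max):
    """Regularized solution v(x, .) of the sideways heat equation, computed as
    FFT -> multiplication by the (truncated) solution operator -> inverse FFT."""
    n = len(g_m)
    g_hat = np.fft.fft(g_m)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)       # angular frequencies
    kernel = np.exp((1.0 - x) * np.sqrt(1j * omega))    # principal square root
    kernel[np.abs(omega) > omega_max] = 0.0             # high-frequency cut-off
    return np.real(np.fft.ifft(kernel * g_hat))
```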
The main disadvantage of this method is that it cannot be used for problems
with variable coefficients (1.4), (1.5). Note also that by approximating
the solution operator directly, we make explicit use of the assumption that
our solution domain is the whole quarterplane, t ? 0 and x ? 0.
An alternative, more widely applicable, approach is to approximate the
time derivative and use the ordinary differential equation formulation given
in Section 2.1, i.e. we take the matrix D in (2.9) to be
\[ D_F = F^H \Lambda F, \]
where \(\Lambda\) is a diagonal matrix which corresponds to differentiation of the
trigonometric interpolant, but where the frequency components with \(|\omega| > \omega_{\max}\)
are explicitly set to zero [14, Sec. 1.4]. This will filter the data and ensure
that we remove the influence from the high frequency part of the measurement
error in the solution. Thus in the ODE solver, multiplication by D F is
carried out as an FFT, followed by multiplication by \(\Lambda\), and finally an inverse
FFT. Multiplication with D_F thus requires O(n log(n)) operations. We conclude
the section with the remark that this method can be considered as a
Galerkin method (cf. Section 4.2) with trigonometric interpolants as basis
and test functions.
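A sketch of the matrix-vector product with D_F exactly as just described; again the interface and the way the cut-off is supplied are ours. In a method-of-lines code this routine would be called inside the right-hand side of the ODE solver.

```python
import numpy as np

def apply_DF(v, dt, omega_max):
    """y = D_F v computed as an FFT, multiplication by the truncated diagonal
    of i*omega values, and an inverse FFT -- O(n log n) operations."""
    n = len(v)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    lam = 1j * omega
    lam[np.abs(omega) > omega_max] = 0.0      # remove high-frequency components
    return np.real(np.fft.ifft(lam * np.fft.fft(v)))
```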
4 Wavelet Methods
4.1 Multiresolution analysis
Wavelet bases are usually introduced using a multiresolution analysis (MRA)
(see Mallat [18], Meyer [21]). Consider a sequence of successive approximation
spaces,
\[ \cdots \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots, \qquad \bigcap_{j \in \mathbb{Z}} V_j = \{0\}, \qquad \overline{\bigcup_{j \in \mathbb{Z}} V_j} = L^2(\mathbb{R}). \]
For a multiresolution analysis we also require that
\[ f(t) \in V_j \iff f(2t) \in V_{j+1}. \]
This means that all spaces V_j are scaled versions of the space V_0. We also
require that there exists a scaling function, φ, such that the set {φ_jk}_{k∈Z},
where φ_jk(t) = 2^{j/2} φ(2^j t − k), is an orthogonal basis for V_j. Since V_0 ⊂ V_1,
we find that the function φ must satisfy a dilation relation (4.2).
The wavelet function / is introduced as a generator of an orthogonal basis
of the orthogonal complement W_j of V_j in V_{j+1} (V_{j+1} = V_j ⊕ W_j). The
function ψ satisfies a relation similar to (4.2),
where the filter coefficients fg k g are uniquely determined by the fh k g. Any
function f ∈ L²(R) can be written
\[ f = P_j f + \sum_{l \ge j} \sum_{k \in \mathbb{Z}} \langle f, \psi_{lk} \rangle\, \psi_{lk}, \]
where P_j is the orthogonal projection onto V_j.
4.2 Meyer Wavelets and a Galerkin Approach
The Meyer wavelet / 2 C 1 (R)) is defined by its Fourier transform [7,
e \Gammai-t
where
sin
\Theta -j
cos
\Theta -j
0; otherwise,
and j is a C k or C 1 function satisfying
ae
It can be proved that the set of functions
for Zis an orthonormal basis of L 2 (R) [20, 7]. The corresponding
scaling function is defined by its Fourier transform,
cos
\Theta -j
0; otherwise.
The functions b
OE jk have compact support (see e.g. [25])
for any k 2 Z.
In [25] we presented a wavelet-Galerkin method, where starting from the
weak formulation of the differential equation,
(v
with test functions from V j , and with the Ansatz
c (-) (x)OE j- (t);
we get the infinite-dimensional system of ordinary differential equations for
the vector of coefficients c,
ae c
The initial values fl m are defined
and the matrix D j is given by
(D
The well-posedness of the Galerkin equation (4.8) follows from the fact that
the matrix D j is bounded with norm [25]
In [25] we proved the following stability estimate for the wavelet-Galerkin
method.
Theorem 4.1 Let g m be measured data, satisfying kg
that chosen so that
log M
Then the projection onto V j of the Galerkin solution v j+1 satisfies the error
estimate
Note that the result in Theorem 4.1 is suboptimal in two ways. Firstly,
we get slower rate of convergence when ffl tends to zero than in the optimal
estimate (2.5). Secondly, the error is not for v j itself, but only for the
projection of v j+1 onto V j .
4.3 Discrete Wavelet Transform
Discrete wavelet transforms can be defined in terms of discrete-time multiresolution
analyses (see e. g. [26, Section 3.3.3]). We will use DMT as a
short form of 'Discrete Meyer (wavelet) Transform'. Consider a vector c ∈ R^n,
n = 2^J, which is assumed to hold the coefficients
of some function v in terms of the basis of V_J. This is the finest level;
all contributions on yet finer levels are assumed to be equal
to zero. The DMT of c is equivalent to a matrix-vector multiplication [16,
p. 11],
\[ \tilde c = G^J_j\, c, \]
where \(G^J_j \in \mathbb{R}^{n \times n}\) is an orthogonal matrix. The subscript j indicates that
the part of the vector ~ c that contains the coarse level coefficients has 2 j
components; it holds the coefficients of the projection of v onto V j . We
illustrate how the transformation by G J
breaks up the vector into blocks of
coefficients corresponding to different coarseness levels in Figure 4.1.
Figure 4.1: Schematic picture of the discrete wavelet transform \(\tilde c = G^J_{J-3}\, c\).
The rightmost part of the vector represents wavelet coefficients on the resolution
level 2^{J-1}, blocks of coefficients on the level 2^{J-2}, etc. The
leftmost part represents the coarse level coefficients.
Algorithms for implementing the discrete Meyer wavelet transform are
described in the thesis of Kolaczyk [16]. These algorithms are based on the
fast Fourier transform (FFT), and computing the DMT of a vector in R n
requires O(n log 2
n) operations [16, p. 66]. The algorithms presuppose that
the vector to be transformed represents a periodic function; we will return
to this question in Appendix A. A further illustration is given in Figure 4.2.
Here we have taken a data vector from R^256, added a normally distributed
perturbation of variance 0.5, and extended the vector smoothly to
size 512 in order to make it "periodic". Then we computed the DMT with
j = J - 3. Thus the partitioning of the transformed vector is the same as in
Figure 4.1.
We see that the wavelet transform can be considered as a low pass filter:
the noise is removed and the leftmost part of the transformed vector is
a smoothed version of the data vector.
Figure 4.2: The upper graph shows a data vector and the lower graph its DMT.
The noise is represented in the
finer level coefficients. Note also the locality properties of the fine level
coefficients: only the first half of the data vector is noisy, and consequently,
each segment of fine level coefficients has (noticeable) contributions only in
its left half.
We remark that we use periodicity only for computational purposes,
since the codes (based on discrete Fourier transforms) are implemented for
this case. The vectors used are considered as finite portions of sequences in l²(Z).
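The DMT used in the paper follows Kolaczyk's FFT-based algorithms [16]. As a rough, readily available stand-in, the FIR "discrete Meyer" filter in PyWavelets reproduces the qualitative picture of Figure 4.2 (coarse coefficients give a smoothed version of the data, while the noise is concentrated in the detail blocks). The data below is synthetic and only meant as an illustration.

```python
import numpy as np
import pywt

# Synthetic, periodized data vector of length 512 with noise in its first half,
# mimicking the example of Figure 4.2.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
c = np.exp(-50.0 * (t - 0.25) ** 2)
c[:256] += 0.05 * np.random.randn(256)

# Three decomposition levels, so the layout matches G^J_{J-3} in Figure 4.1.
coeffs = pywt.wavedec(c, "dmey", mode="periodization", level=3)
coarse = coeffs[0]        # smoothed version of the data (leftmost block)
details = coeffs[1:]      # fine-level blocks, dominated by the noise
```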
4.4 Numerical Implementation
In the solution of the sideways heat equation in V j , we replace the infinite-dimensional
ODE (4.8), by the finite-dimensional
x
represents the approximation of the solution in
Gm and Hm are the projections of the data vectors on V j , and j is chosen
according to Theorem (4.1). For simplicity we suppress the dependence on
j in the notation for vectors. The matrix D d
j represents the differentiation
operator in V j , and since we are dealing with functions, for which only
a finite number of coefficients are non-zero, D d
j is a finite portion of the
infinite matrix D j (we use superscript d to indicate this).
In the numerical solution of (4.12) by an ODE solver, we need to evaluate
matrix-vector products D d
C. The representation of differentiation operators
in bases of compactly supported wavelets are described in the literature, see,
e.g. [2]. For such wavelets exact and explicit representations can be found.
Also finite difference operators can be represented easily [19]. In our context
of Meyer wavelets, which do not have compact support, the situation is
different. The proof of (4.10) in [25] actually gives a fast algorithm for this.
For \Gamma- t - define the function
Extend \Delta periodically, and expand it in Fourier series. In [25, Lemma A.2]
it is shown that
where d k is the element in diagonal k of D 0 . From the definition of D j it
is easily shown that \(D_j = 2^j D_0\). Thus, we can compute approximations of
the elements of D j by first sampling equidistantly the function \Delta, and then
computing its discrete Fourier transform.
4.5 Daubechies Wavelets
The Daubechies wavelets have compact support [7] and therefore only a finite
number of filter coefficients (fh k g and fg k g in (4.2) and (4.3)) are nonzero.
The filter coefficients {g_k} are uniquely determined by the coefficients {h_k},
\[ g_k = (-1)^k\, h_{L-1-k}, \]
where L is the number of filter coefficients. In addition we want the wavelet
to have a number of vanishing moments, that is, \(\int t^p\, \psi(t)\, dt = 0\) for \(p = 0, 1, \ldots, P-1\).
Daubechies wavelets are defined in such a way that they have the largest
possible number of vanishing moments given the number of non-zero filter
coefficients; in fact, we have P = L/2.
Figure 4.3: The Daubechies db4 scaling function (left) and wavelet (right).
There is no explicit expression for the Daubechies wavelet, instead we
can compute it from the filter coefficients fh k g k2Z . For example the filter
coefficients associated with the Daubechies wavelet db4 are [7]
Since we are interested in using a basis of Daubechies wavelets for solving
the system of ODEs (2.6) in a stable way, we need to find an approximation
of the derivative operator in the approximation spaces V j . Here we will use
the Galerkin approximation, D j , which is a Toeplitz matrix with elements
given by
\[ (D_j)_{kl} = (\phi'_{jl}, \phi_{jk}) = 2^j\, (\phi'(\cdot - l), \phi(\cdot - k)) = 2^j\, (D_0)_{kl}, \]
where the last equality follows from the definition of φ_jk. This means that
it is sufficient to compute the derivative approximation on the space V 0 .
Since the scaling functions OE jk have compact support, the matrix D j will
be banded. For compactly supported wavelets it is possible to compute the
matrix D j explicitly. This result is due to Beylkin [2]. Since the matrix D 0
is constant along diagonals and only a few diagonals are non-zero, we can
insert the dilation relation (4.2) into (4.16) and we will get a small system of
linear equations to solve for the elements in D 0 . In the paper by Beylkin, the
elements of D 0 have been listed for different bases of Daubechies wavelets.
As in Section 4.2 we want to use a Galerkin approximation of the differential
equation. For the problem to be well-posed it is sufficient that kD j k
is bounded. In the case of periodized wavelets, the differentiation matrix
\(D^{per}_0\) is a circulant, and a bound for \(\|D^{per}_0\|\) is given in [27]. The same estimate can be
expected to hold for non-periodic wavelets.
The implementation details are similar to those in Section 4.4. We have
measured data g m (t) - u(1; t) on the finite interval
A ). Thus we have to approximate (2.6) with a finite-dimensional system
similar to (4.12) for the coarse level coefficients.
For compactly supported wavelets it is possible to give a fast implementation
of the wavelet transform by convolution with the filter coefficients
g. Since the filters are of short length we can compute the
wavelet transform of a vector in R n with O(n log(n)) operations[26].
When we solve numerically the discretized version of (2.6) we are interested
in computing the product of D j and a vector. Since D j is banded
and explicitly known this is simply a product with a sparse matrix and thus
requires O(n) operations.
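To illustrate the banded product just mentioned, here is a sketch that assembles a banded Toeplitz D_0 from tabulated diagonal values and applies D_j = 2^j D_0 to a coefficient vector. The numerical values below are placeholders chosen only to respect the antisymmetry of the true coefficients; the actual entries are tabulated by Beylkin [2].

```python
import numpy as np
from scipy.sparse import diags

# Placeholder antisymmetric diagonal values r_k (r_{-k} = -r_k); NOT Beylkin's
# tabulated entries, just stand-ins to show the structure.
r = {1: 0.7, 2: -0.1, 3: 0.01}
n, j = 128, 5
offsets = [0] + [k for k in r] + [-k for k in r]
values = [np.zeros(n)] + [np.full(n - k, v) for k, v in r.items()] \
         + [np.full(n - k, -v) for k, v in r.items()]
D0 = diags(values, offsets, format="csr")
Dj = (2.0 ** j) * D0                   # D_j = 2^j D_0 on the space V_j
c = np.random.randn(n)
dc = Dj @ c                            # banded matrix-vector product: O(n)
```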
5 Numerical Experiments
5.1 Experiments
Here we present some numerical experiments to illustrate the properties of
the methods presented in the previous sections. We have solved discretized
versions of (2.6) using Matlab in IEEE double precision with unit roundoff
1.1·10^{-16}. The space marching was performed using a Runge-Kutta-Fehlberg
method (ode45 in Matlab) with automatic step size control, where the basic
method is of order 4 and the embedded method is of order 5. In all tests
the required accuracy in the R-K method was 10^{-4}.
The tests were performed in the following way: First we selected a solution,
f(t) = u(0, t), and computed data functions u(1, t) = g(t)
and u_x(1, t) = h(t) by solving a well-posed quarter plane problem for the
heat equation using a finite difference scheme. Then we added a normally
distributed perturbation of variance 10^{-3} to each data function, giving vectors
g_m and h_m. Our error estimates use the signal-to-noise ratio M/ε, and
therefore we computed this ratio from ‖f‖ and the norm of the added perturbation.
From the perturbed data functions we reconstructed u(0, t) and compared
the result with the known solution.
We conducted two tests:
Test 1: We solved the model problem (1.1) using a discontinuous function
f(t) as the exact solution.
Test 2: We solved the more general problem (1.4) using the coefficient
function
The results from these tests are given in Figure 5.1. In both cases the
length of the data vectors g m and hm were 1024. The regularization parameters
were selected according to the recipes given in Theorems 3.3 and
4.1. In the Meyer case we used projection on the space V 5 in both tests and
computed 64 coarse level coefficients. We have used the auxiliary function
in the definition of the scaling function
(4.5). In the Fourier case we used the 42 lowest frequency components when
calculating the time derivative. Since we have no stability theory for the
Daubechies method we chose to solve the problem in space V 5 in both cases.
Thus we computed 70 coefficients on the coarsest level. We used Daubechies
wavelets with filters of length 8 in the Galerkin formulation. In all cases the
number of steps in the ODE-solver were between 6 and 11. Before presenting
the results we recomputed our coarse level approximation on the finer
scale, using the inverse wavelet transform.
Figure
5.1: Solution of the sideways heat equation using the different meth-
ods. From top we have the results from the Fourier method, the Meyer
method and the Daubechies method. The results from Test 1 are to the left
and the results from Test 2 are to the right. The dashed line illustrates the
approximate solution, the solid line represents the exact solution and the
dash-dotted line represents the data function, g m .
5.2 An Industrial Application
In this section we present an example of an industrial problem where the
methods presented in the previous sections can be useful. The viability of
the methods is demonstrated by an experiment conducted in cooperation
with the Department of Mechanical Engineering, Linköping University.
Consider a particle board, on which a thin lacquer coating is to be ap-
plied. In order to reduce the time for the lacquer coating to dry, the particle
board is initially heated. Since the temperature gradients on and close to
the surface of the board influence the drying time and the quality of the
lacquer coating, it is important to estimate the temperature and the temperature
gradients close to the surface. Often it is difficult or impossible
to measure the temperature directly on the surface of the board. Instead a
hole was drilled from the other side of the board and a thermocouple was
placed close to the surface, as seen in Figure 5.2. After the thermocouple
had been placed, the hole was filled in using the same material as in the
surrounding board.
Figure 5.2: The cross-section, in principle, of the particle board used in the
experiment. The temperature g_m is measured by a thermocouple, and we
seek to recover the temperature, f_m, on the surface of the board [15].
In the experiment a particle board, initially heated to 70 °C, was
suddenly placed to cool in air of room temperature. The thermocouple,
placed inside the plate at distance 2.9 mm from the surface,
gave the temperature history, g_m(t). From the
measured temperature we reconstructed the temperature history, f m (t), on
the surface of the board. The experiment presented here does not include
the lacquer coating. The heat equation in this case is given by
Here the constant - represents the physical properties of the problem. We
can formulate this as an initial value problem similar to (2.6). One difficulty
is that the surface temperature is not determined by the measured data g m
alone. We need also the temperature gradient at L. If the thickness of
the particle board is large in comparison to the distance between the surface
and the thermocouple, then it is reasonable to consider this to be a quarter
plane problem. In that case we can solve the heat equation for x ? L and
compute u x (L; \Delta). Another possibility is to assume that the temperature
function u(x; t) is symmetric with respect to the center of the board. In
this particular experiment this should be a very accurate assumption. In
Figure
5.3 we present results obtained using both these assumptions. The
computations were performed using the Meyer wavelet method presented in
Section 4.2. Initially the length of the vector g m was 8192; we solved only
for 128 coarse level coefficients. It is interesting to note that the solutions
almost coincide the first three minutes.
Temperature
Time min
Temperature
gradient
U x
(0,t)
Figure
5.3: The measured temperature vector, g m , sampled at 10Hz (solid)
and the corresponding surface temperature, f m from a quarter plane assumption
(dashed) and from a symmetry assumption (dash-dotted) In the
rightmost plot we see the temperature gradients at the thermocouple computed
from a quarter plane assumption (dashed) and from a symmetry assumption
(solid).
Time min
Temperature
Time min
Figure
5.4: Attempt to reconstruct the surface temperature, f m , using a
coarse level grid of size 512 (left) and 1024 (right), in the Wavelet-Galerkin
method. This means that we use less regularization than was the case in
Figure
5.3. Clearly these solutions are unphysical.
Since the temperature inside the particle board must be between the
initial temperature, 70 °C, and the temperature of the surrounding air
(approximately room temperature), we can easily get a bound, M, on the solution. The noise
level, ffl, must be estimated from the measured data. It is often possible
to get a rough estimate by inspecting the data vector visually. In our error
estimates we use only the signal-to-noise ratio, ffl=M , and since we know only
rough estimates of M and ffl, we solved the problem several times using different
values for these parameters, i.e. different levels of regularization. By
experimenting, it is often easy to find an appropriate level of regularization.
In this particular experiment we know that the solution is monotonically de-
creasing, and this information can be used to rule out unphysical solutions,
see
Figure
5.4.
6 Concluding Remarks
We have considered three methods for solving the sideways heat equation,
based on approximating the time derivative by a bounded operator (matrix),
and then solving the problem in the space variable using a standard ODE
solver. For the Fourier and the Meyer wavelet methods, there is a stability
theory, which is used for choosing the level of approximation. In the case of
Daubechies wavelets the theory is incomplete.
From the theory for the Fourier and Meyer wavelet methods one can
expect that these methods give results of comparable accuracy. Considering
the fast decay in frequency of Daubechies wavelets, one can hope that this
method is about as accurate as the other two. Our numerical experience
confirms this.
The Fourier method can be implemented in such a way that two FFT's
are computed in each step of the ODE solver of Runge-Kutta type, and
these are computed for a vector of the same length as the measured data
vector. On the other hand, the two wavelet methods reduce the problem size,
determined by the noise level, and multiplications by small dense (Meyer)
or banded (Daubechies) matrices are performed in the ODE solver. In this
respect, both wavelet methods, and in particular the Daubechies wavelet
method, have an advantage over the Fourier method, if the data vectors are
very long 6 .
The quarter plane assumption is used mainly because it makes it easier
to obtain stability estimates. The numerical methods can be used also
for the case when the equation is defined only for a bounded interval in
space, provided that also measurements for u x are available. In principle
this requires measurements with two thermocouples. In certain symmetric
configurations it is sufficient to use only one thermocouple.
All three methods considered can easily be applied to problems with
variable coefficients, as was seen in Section 5.1. Numerical methods for
non-linear equations will be studied in our future research.
Acknowledgement
The numerical experiments in Section 5.2 were based on measurements conducted
at the Department of Mechanical Engineering, Linköping University.
We are grateful to Ricardo García-Padrón and Dan Loyd for performing
the experiment and for several discussions on real world heat conduction
problems.
6 In our implementation we have assumed that also intermediate results, for
are of interest. If only the final solution, at needed, then, in principle, the Fourier
method can be implemented with only two FFT's altogether, which could make it faster
than the wavelet methods.
Appendix
A Periodization
As we remarked in Section 4.3, the algorithms for computing the DMT for
Meyer wavelets are based on the Discrete Fourier Transform (DFT) and
use the FFT (Fast Fourier Transform) algorithm. In using the DFT, it is
assumed that the sequence to be transformed is periodic (see, e.g. [3]). In
our application it is not natural to presuppose that the data vectors are
periodic: We have a function u(x, t) (temperature) equal to zero for t < 0,
and we consider it in the unit square [0, 1] × [0, 1]. Further, we assume that
there is a temperature change at x = 0, represented by f(t), which
is diffused through some medium (in the interval 0 ≤ x ≤ 1), and recorded
at x = 1. Thus we cannot assume that
the functions that we are interested in are equal to zero for t ≥ 1.
Nevertheless, for numerical reasons, in order to avoid wrap-around effects
(cf. [3]) in the computation of the FFT, it is necessary to assume periodicity.
Therefore we extended the (noisy) data function g_m(t) to the interval 1 ≤ t ≤ 2 by
first computing a smoothed version of the last 16 components of
the original data vector, using a cubic smoothing spline. Then the last 7
components of the extended data vector were prescribed to be equal to zero,
and a cubic spline function was constructed that interpolated 16 equidistant
data points from the smoothing spline and the 7 zero data points. Finally
this interpolating spline was sampled at equidistant points, as many as in
the original data vector. Thus the size of the data vector was doubled. The
periodization of data vectors is illustrated in Figure 4.1.
Therefore, in effect we solved the sideways heat equation for
but the computed values for t ? 1 are not used. Of course, the computed
values for t less than but close to 1 are affected by the artificial data for t ? 1.
However, these values should not be trusted anyway, see the discussions in
[8, 9, 10].
We used the same periodization for all three methods considered in this
paper.
--R
Inverse Heat Conduction.
On the representation of operators in bases of compactly supported wavelets.
The DFT: An Owner's Manual for the Discrete Fourier Transform.
Determining surface temperatures from interior obser- vations
Space marching difference schemes in the nonlinear inverse heat conduction problem.
Slowly divergent space marching schemes in the inverse heat conduction problem.
Ten Lectures on Wavelets.
The numerical solution of a non-characteristic Cauchy problem for a parabolic equation
Hyperbolic approximations for a Cauchy problem for the heat equation.
Numerical solution of the sideways heat equation.
Numerical solution of the sideways heat equation by difference approximation in time.
Solving the sideways heat equation by a 'method of lines'.
A mollified space marching finite differences algorithm for the inverse heat conduction problem with slab symmetry.
Time Dependent Problems and Difference Methods.
Heat Transfer
Wavelet Methods for the Inversion of Certain Homogeneous Linear Operators in the Presence of Noisy Data.
Continuous data dependence
Multiresolution approximation signal decomposition.
Principe d'incertitude
Ondelettes, fonctions splines et analyses gradu'ees.
The mollification method and the numerical solution of the inverse heat conduction problem by finite differences.
A stable space marching finite differences algorithm for the inverse heat conduction problem with no initial filtering procedure.
Sideways heat equation and wavelets.
Solving the sideways heat equation by a Wavelet-Galerkin method
Prentice Hall PTR
Spectral analysis of the differential operator in wavelet bases.
--TR
--CTR
Fu Chuli , Qiu Chunyu, Wavelet and error estimation of surface heat flux, Journal of Computational and Applied Mathematics, v.150 n.1, p.143-155, January
Chu-Li Fu, Simplified Tikhonov and Fourier regularization methods on a general sideways parabolic equation, Journal of Computational and Applied Mathematics, v.167 n.2, p.449-463, 1 June 2004
Xiang-Tuan Xiong , Chu-Li Fu, Determining surface temperature and heat flux by a wavelet dual least squares method, Journal of Computational and Applied Mathematics, v.201 n.1, p.198-207, April, 2007
Wei Cheng , Chu-Li Fu , Zhi Qian, A modified Tikhonov regularization method for a spherically symmetric three-dimensional inverse heat conduction problem, Mathematics and Computers in Simulation, v.75 n.3-4, p.97-112, July, 2007 | cauchy problem;ill-posed;heat conduction;wavelet;inverse problem;fourier analysis |
363462 | Accuracy of Decoupled Implicit Integration Formulas. | Dynamical systems can often be decomposed into loosely coupled subsystems. The system of ordinary differential equations (ODEs) modelling such a problem can then be partitioned corresponding to the subsystems, and the loose couplings can be exploited by special integration methods to solve the problem using a parallel computer or just solve the problem more efficiently than by standard methods.This paper presents accuracy analysis of methods for the numerical integration of stiff partitioned systems of ODEs. The discretization formulas are based on the implicit Euler formula and the second order implicit backward differentiation formula (BDF2). Each subsystem of the partitioned problem is discretized independently, and the couplings to the other subsystems are based on solution values from previous time steps. Applied this way, the discretization formulas are called decoupled.The stability properties of the decoupled implicit Euler formula are well understood. This paper presents error bounds and asymptotic error expansions to be used in controlling step size, relaxation between subsystems and the validity of the partitioning. The decoupled BDF2 formula is analyzed within the same framework.Finally, the analysis is used in the design of a decoupled numerical integration algorithm with variable step size to control the local error and adaptive selection of partitionings. Two versions of the algorithm with decoupled implicit Euler and BDF2, respectively, are used in examples where a realistic problem is solved. The examples compare the results from the decoupled implicit Euler and BDF2 formulas, and compare them with results from the corresponding classical formulas. | Introduction
. The numerical solution of stiff systems of ordinary differential
equations (ODEs) requires implicit discretization formulas. The implicit algebraic
problems are usually solved by a Newton-type iteration method which involves the
solution of systems of linear equations that often turn out to be sparse. Attempts to
parallelize a standard algorithm, as the one just outlined, often lead to disappointing
efficiency because the parallel algorithm is too fine-grained or has too large a sequential
fraction.
The waveform relaxation method [1], although not developed with parallel computation
in mind, leads to efficient parallel algorithms for the class of problems where
it works well. Multirate integration can be considered an integral feature of the wave-form
relaxation method. The relaxation part of the method is a potential source of
computational inefficiency. One relaxation iteration is fairly expensive and convergence
may be slow.
The decoupled integration methods in this paper were developed as a response
to the problems encountered in parallelizing standard integration methods and the
problems with waveform relaxation. The decoupled integration methods, like the
waveform relaxation method, exploit a partitioning of a system of ODEs into loosely
coupled subsystems. The decoupled integration methods employ only one and occasionally
two relaxation iterations. Multirate integration is not quite as natural as
for the waveform relaxation method, although it is still possible, and the parallel
implementations of decoupled integrations methods will be finer grained than parallel
waveform relaxation methods. The decoupled integration methods may be more
Department of Computer Science, University of Copenhagen, Universitetsparken 1, DK-2100
Copenhagen, Denmark, e-mail: stig@diku.dk
efficient on a sequential computer than standard integration methods, just like the
waveform relaxation method.
A previous paper introduced the decoupled implicit Euler method [2]. The existence
of a global error expansion was proved under very general choice of step size,
thus permitting the use of Richardson extrapolation. A sufficient condition for stability
of the discretization was given in [3]. This condition is called monotonic max-norm
stability, and it guarantees contractivity. Partitioned systems of ODEs are in qualitative
terms characterized as monotonically max-norm stable if each subsystem is
stable and if the couplings from one subsystem to the others are weak.
This paper is organized as follows. Section 2, "Partitioned systems of ODEs and
decoupled discretization formulas", gives preliminaries and definitions, including the
definition of monotonic max-norm stability and the presentation of the decoupled implicit
Euler and the decoupled implicit second order backward differentiation formula
(BDF2).
Section 3, "Error bounds", gives error bounds for the classical and the decoupled
implicit Euler formulas. The bounds are closely tied to the monotonic max-norm
stability condition which applies only to the Euler formulas, and the bounds are not
readily generalized to the BDF2 formula.
Section 4, "Asymptotic error formulas and error estimation", includes four subsections
treating explicit formulas, classical implicit formulas, decoupled implicit Euler,
and finally decoupled BDF2 formulas. The explicit formulas are used in error estimation
and for prediction in the decoupled formulas. The asymptotic errors of the
classical formulas are the smallest achievable for the decoupled formulas and therefore
of interest. The asymptotic errors of the decoupled formulas are given for various
modes of operation, and error estimation techniques are presented.
Section 5, "Integration algorithm", first presents general principles for decoupled
integration algorithms derived from the previous analysis. Then the details of an
implementation of the decoupled implicit Euler formula is given, followed by the
minor modifications required for replacing the Euler formula with the BDF2 formula.
The implementation employs variable step size to control the local error and adaptive
selection of partitionings among two predetermined alternatives.
These integration algorithms are used in section 6, "Examples: Chemical reaction
kinetics", where a real problem is solved using both decoupled and classical versions of
implicit Euler and BDF2. The examples show excellent performance of the decoupled
formulas.
2. Partitioned systems of ODEs and decoupled discretization formulas.
Define a system of ODEs,
\[ Y' = F(t, Y), \qquad Y(t_0) = Y_0, \tag{2.1} \]
with F continuous in Y. Stable
systems of differential equations are considered stiff when the step size of the discretization
by an explicit integration method is limited by stability of the discretization
and not by accuracy. Efficient numerical integration of stiff systems therefore
requires implicit integration methods.
Let the original problem (2.1) be partitioned as
\[ y_r' = f_r(t, y_1, y_2, \ldots, y_q), \qquad y_r(t_0) = y_{r,0}, \qquad r = 1, 2, \ldots, q. \tag{2.2} \]
When necessary, the
partitioning of Y will be stated explicitly as in f_r(t, y_1, y_2, \ldots, y_q).
2.1. Monotonic max-norm stability. The following stability condition introduced
in [3] plays a crucial role for the stability of decoupled implicit integration
methods.
stability.
The partitioned system (2.2) is said to be monotonically max-norm stable if there
exist norms k \Delta k r and functions a rj (t; U; V ) such that
a rj (t; U; V )ku
and the following condition holds for
the logarithmic max-norm -1 (\Delta) of the q \Theta q matrix (a rj
The condition (2.4) states that the matrix (a rj ) should be diagonally dominant
with non-positive diagonal elements. For a linear problem where A the
(a rj ) matrix can be chosen as a
Theorem 3 in [3] includes further results about monotonic max-norm stability,
including possible choices of (a rj ). Monotonic max-norm stability admits arbitrarily
stiff problems.
2.2. Decoupled implicit Euler: Stability, convergence. The decoupled
implicit Euler method is defined by the following discretization of the subsystems
by the implicit Euler formula [2]:
\[ y_{r,n} = y_{r,n-1} + h_{r,n}\, f_r(t_n, \tilde y_{1,n}, \ldots, \tilde y_{r-1,n}, y_{r,n}, \tilde y_{r+1,n}, \ldots, \tilde y_{q,n}), \quad r = 1, \ldots, q, \tag{2.5} \]
and the variables \(\tilde y_{i,n}\), \(i \ne r\), are convex combinations
of values in \(\{y_{i,k},\ k \le n\}\). The convex combinations \(\tilde y_{i,n}\) will, in general,
depend on subsystem index r, but in order to simplify notation this dependency will
not be specified explicitly.
The method is called "decoupled" because the algebraic system resulting from
the discretization of (2.1) by Euler's implicit formula is decoupled into a number of
independent algebraic problems. The decoupled implicit Euler formula can be used
as the basis of parallel methods where (2.5) is solved independently and in parallel
for different r-values. The method can be used in multirate mode with h r;n 6= h j;n for
r 6= j, and the multirate formulation can be used in a parallel waveform relaxation
method [1].
The sequential solution of (2.5) for on a single processor will, in
general, be computationally cheaper than solving the complete system
Therefore the decoupled implicit Euler method may also be an attractive
alternative to the classical Euler formula on a sequential processor even when there
is no multirate opportunity.
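A minimal Python sketch of one step of (2.5) with Jacobi organization. The function and parameter names are ours, and a general-purpose nonlinear solver stands in for the Newton iteration that a real implementation would run per subsystem.

```python
import numpy as np
from scipy.optimize import fsolve

def decoupled_euler_step(f_list, blocks, t_n, Y_prev, Y_tilde, h):
    """One decoupled implicit Euler step (2.5), Jacobi organization: each block r
    solves  y_r = y_{r,n-1} + h*f_r(t_n, Y)  with the other blocks frozen at the
    convex combination / prediction Y_tilde.  The blocks are independent and
    could be solved in parallel."""
    Y_new = Y_tilde.copy()
    for r, idx in enumerate(blocks):
        def residual(y_r, idx=idx, r=r):
            Y = Y_tilde.copy()
            Y[idx] = y_r                       # only block r is unknown
            return y_r - Y_prev[idx] - h * f_list[r](t_n, Y)
        Y_new[idx] = fsolve(residual, Y_tilde[idx])
    return Y_new
```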
Theorems 4 and 5 in [3] assure the stability of the discretization by (2.5) of a
monotonically max-norm stable problem and the convergence of waveform relaxation.
The stability condition poses no restrictions on the choice of the step sizes h r;n .
The convexity of ~ y r;n is necessary in the stability theorem of the decoupled implicit
Euler method. A convex combination would typically be a zero-order interpolation,
\(\tilde y_{i,n} = y_{i,n-1}\), except in
multirate organizations, where a linear interpolation is an
option.
The multirate aspect is discussed extensively in [2] and [3]. Most of the results
and techniques in this paper can readily be extended to multirate algorithms. How-
ever, presenting results for multirate algorithms would add substantially to notational
complexity, and furthermore, the algorithms and the example presented in sections
5 and 6, respectively, do not exploit multirate formulation. Therefore, the multirate
aspect will not be explored further.
2.3. Organization: Jacobi, Gauss-Seidel, relaxation. The definition of the
decoupled implicit Euler formula in (2.5) suggests a Jacobi-type organization of the
computation which is well suited for parallel computation.
Define the function G J (t; Y; ~
Y ) from the partitioning of F given in (2.2),
J;r (t; Y; ~
where
y q ). Assuming the same step
size hn for all subsystems, the decoupled implicit Euler formula can now be expressed
in the compact form,
Yn
The definition of G J given above corresponds to the Jacobi organization of the
computation. A similar GG\GammaS can be defined for the Gauss-Seidel organization,
\[ G_{G\text{-}S,r}(t, Y, \tilde Y) = f_r(t, y_1, \ldots, y_{r-1}, y_r, \tilde y_{r+1}, \ldots, \tilde y_q). \]
The decoupled implicit Euler formula based on the Gauss-Seidel organization
will, in general, be more accurate than (2.7). The
Gauss-Seidel organization, although inherently sequential, may still be used in parallel
if the partitioning (2.2) admits a red-black reordering or a similar reordering based
on more colors (see, e.g. [4]). In the following the generic function G, meaning either
G J or GG\GammaS , will be used.
The decoupled implicit Euler formula can be used with relaxation iterations,
\[ Y_n^{[m+1]} = Y_{n-1} + h_n\, G(t_n, Y_n^{[m+1]}, Y_n^{[m]}), \qquad m = 1, 2, \ldots, \tag{2.9} \]
where \(Y_n^{[1]}\) is the solution of (2.7) with \(\tilde Y_n\). The computational cost of each iteration of (2.9) is approximately
the same as the cost of computing Yn from (2.7).
If the relaxation is carried to convergence, the resulting discretization is the classical
implicit Euler discretization.
When the numerical solution at t n has been computed and accepted, it is usually
used for computing the next step, and it is denoted by Yn , where Yn := Y [m]
n .
In the relaxation (2.9), each subsystem r is solved in each sweep, usually by a
Newton-type iteration carried to convergence. A relaxation iteration based on New-
ton's method with Jacobi or Gauss-Seidel organization as shown below may converge
just as fast and with substantially less computation per iteration.
@y r;n
y [m]
q;n
for
y r;n . Again the potential for parallelisation is less with
Gauss-Seidel organization than with Jacobi organization.
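A sketch of m relaxation sweeps of type (2.9) in Gauss-Seidel organization, built on the same residual idea as the previous sketch; within a sweep, block r already sees the updated blocks with smaller index. This is illustration only; the paper's relaxation solves each subsystem with Newton-type iterations.

```python
import numpy as np
from scipy.optimize import fsolve

def relax_gauss_seidel(f_list, blocks, t_n, Y_prev, Y_start, h, sweeps):
    """Relaxation of the decoupled implicit Euler formula, cf. (2.9), with
    Gauss-Seidel organization; carried to convergence it approaches the
    classical implicit Euler solution."""
    Y = Y_start.copy()
    for _ in range(sweeps):
        for r, idx in enumerate(blocks):
            def residual(y_r, idx=idx, r=r):
                Z = Y.copy()                   # latest values of all blocks
                Z[idx] = y_r
                return y_r - Y_prev[idx] - h * f_list[r](t_n, Z)
            Y[idx] = fsolve(residual, Y[idx])
    return Y
```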
2.4. Decoupled BDF2. The BDF2 can also be used in a decoupled mode,
where
Stability results for this formula are only known for linear problems and constant
step size [5].
The decoupled BDF2 can, of course, be relaxed just as the Euler formula in (2.9)
or (2.10):
Y [m+1]
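For constant step size, a decoupled BDF2 step differs from the decoupled Euler step only in the history terms and the coefficient in front of h·f_r. A hedged sketch (constant step size only; the variable step size coefficients are not reproduced here):

```python
import numpy as np
from scipy.optimize import fsolve

def decoupled_bdf2_step(f_list, blocks, t_n, Y_nm1, Y_nm2, Y_tilde, h):
    """One decoupled BDF2 step with constant step size: block r solves
    y_r = 4/3*y_{r,n-1} - 1/3*y_{r,n-2} + 2/3*h*f_r(t_n, Y), with the couplings
    frozen at Y_tilde (whose choice defines the modes of Section 4.4)."""
    Y_new = Y_tilde.copy()
    for r, idx in enumerate(blocks):
        def residual(y_r, idx=idx, r=r):
            Y = Y_tilde.copy()
            Y[idx] = y_r
            return (y_r - (4.0 / 3.0) * Y_nm1[idx] + (1.0 / 3.0) * Y_nm2[idx]
                    - (2.0 / 3.0) * h * f_list[r](t_n, Y))
        Y_new[idx] = fsolve(residual, Y_tilde[idx])
    return Y_new
```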
3. Error bounds.
3.1. Decoupled implicit Euler. The error of the decoupled implicit Euler formula
can be bounded as follows. Consider the local truncation error for subsystem r,
and the decoupled implicit Euler formula using Jacobi organization,
Y r
where ~
Y r
y q;n ).
Assume that the monotonic max-norm stability condition (2.3), (2.4) is fulfilled
with Y (t n ); ~
Y r
, and introduce the simplified notation a n
Y r
Then subtraction leads to
Y r
a n
A bound for the local error y r of the decoupled implicit
Euler formula is then (cf. Lemma 2.2, Section III.2. in [6]).
rr
a n
since
rr by (2.4) and '
Y r
3.2. Classical implicit Euler. A similar local error bound can be established
for the classical implicit Euler formula expressed as follows in a notation analogous
to (2.5):
Using the monotone max-norm stability condition, we obtain
r
where
and m1 (t) is defined as
A bound for the local error y r of the classical implicit
Euler formula analogous to (3.2) is then
are valid for all
values of step sizes but may be most interesting and useful for values where L(t n
The main difference between (3.2) and (3.3) is the last term in (3.2). The local
truncation error norm L(t n ) is O(h 2
). The order of the last term in (3.2) is O(h 2
since
If the off-diagonal elements of (a ij ) are very small, the last term of (3.1)
may be negligible, and the local error bound of the decoupled implicit Euler formula
is almost equal to the local error bound of the classical implicit Euler formula.
4. Asymptotic error formulas and error estimation.
4.1. Explicit formulas. The numerical solution of stiff systems of ODEs requires
implicit discretization formulas. A numerical integration algorithm typically
also includes an explicit formula to be used in error estimation and for computing an
initial value for the iterative (Newton-type) method used in solving the implicit discretization
problem. The decoupled implicit Euler and BDF2 formulas furthermore
require the computation of ~
Yn , where the use of an explicit formula is an option.
The explicit Euler formula \(Y^e_n = Y_{n-1} + h_n F(t_{n-1}, Y_{n-1})\)
is an obvious choice in
connection with the implicit Euler formula, as with
the implicit decoupled Euler formula. However, explicit formulas including F (t; Y )
should generally be avoided in connection with stiff problems when hk@F=@Y k AE 1.
The bound kY e
may be approached if Yn\Gamma1 is off
the smooth solution, which is the case if the implicit equations are only solved
approximately for \(Y_{n-1}\). Therefore polynomial interpolation formulas are preferred
as predictors [7].
The linear interpolation formula,
\[ Y^p_n = Y_{n-1} + \alpha_n (Y_{n-1} - Y_{n-2}), \qquad \alpha_n = h_n/h_{n-1}, \tag{4.1} \]
has the local error expansion
\[ Y(t_n) - Y^p_n = \tfrac{1}{2}\, h_n (h_n + h_{n-1})\, Y''(t_n) + O(h^3) \]
for \(Y_{n-k} = Y(t_{n-k})\), \(k = 1, 2\).
The second order polynomial predictor formula is
\[ Y^{p2}_n = Y_{n-1} + h_n\, Y[t_{n-1}, t_{n-2}] + h_n (h_n + h_{n-1})\, Y[t_{n-1}, t_{n-2}, t_{n-3}], \]
where \(Y[\cdot,\cdot]\) and \(Y[\cdot,\cdot,\cdot]\) denote first and second divided differences.
The local error expansion for this formula is
\[ Y(t_n) - Y^{p2}_n = \tfrac{1}{6}\, h_n (h_n + h_{n-1}) (h_n + h_{n-1} + h_{n-2})\, Y'''(t_n) + O(h^4), \]
assuming that \(Y_{n-k} = Y(t_{n-k})\), \(k = 1, 2, 3\).
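Both predictors are Newton extrapolations through the most recent solution points; the small sketch below is valid for variable step size (the interface is ours). Calling predict(t_n, [t_{n-2}, t_{n-1}], [Y_{n-2}, Y_{n-1}]) gives the linear predictor, and adding the point at t_{n-3} gives the second order predictor.

```python
import numpy as np

def predict(t_new, ts, Ys):
    """Value at t_new of the Newton interpolating polynomial through the points
    (ts[k], Ys[k]): two points give the linear predictor (4.1), three points
    the second order predictor."""
    Ys = [np.asarray(Y, dtype=float) for Y in Ys]
    dd, coeffs = list(Ys), [Ys[0]]
    for k in range(1, len(ts)):                      # divided-difference table
        dd = [(dd[i + 1] - dd[i]) / (ts[i + k] - ts[i]) for i in range(len(dd) - 1)]
        coeffs.append(dd[0])
    result = coeffs[-1]                              # Newton/Horner evaluation
    for k in range(len(coeffs) - 2, -1, -1):
        result = coeffs[k] + (t_new - ts[k]) * result
    return result
```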
4.2. Classical implicit Euler and BDF2. With the classical implicit Euler
the local error Y can be expressed by
for
where all partial derivatives above and in the rest of section 4 are computed at Y (t n ).
The e-terms are obtained by substituting the expansion for Yn (4.4) into the Euler
formula (4.3), Taylor expanding Y (t at the point
finally identifying e-terms of equal power of hn .
The principal local error term \(h_n^2 e_2\) can be estimated as
\[ h_n^2 e_2 \approx h_n^2\, Y[t_n, t_{n-1}, t_{n-2}], \tag{4.5} \]
where \(Y[t_n, t_{n-1}, t_{n-2}]\) is the divided difference of the values \(Y_{n-2}, Y_{n-1}, Y_n\).
The classical BDF2 formula (cf. (2.11))
has the local error expansion
assuming that
The principal local error term of BDF2 can be estimated from
The error estimates (4.5) and (4.7), based on divided differences, are asymptotically
correct [8] if the step size varies according to
and OE(t) is sufficiently smooth. This is essentially the step size variation attempted
by the algorithms in section 5.2.
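A sketch of the divided-difference estimate for the implicit Euler formula: the second divided difference of the last three accepted values approximates Y''/2, so h_n² times its norm estimates the principal local error term. The function name and the choice of the max-norm are ours.

```python
import numpy as np

def euler_error_estimate(ts, Ys):
    """Estimate of the principal local error term of the implicit Euler formula
    from the last three accepted solution values, cf. (4.5)."""
    t0, t1, t2 = ts[-1], ts[-2], ts[-3]          # t0 = t_n is the newest point
    Y0, Y1, Y2 = (np.asarray(Y, dtype=float) for Y in (Ys[-1], Ys[-2], Ys[-3]))
    dd2 = ((Y0 - Y1) / (t0 - t1) - (Y1 - Y2) / (t1 - t2)) / (t0 - t2)
    h_n = t0 - t1
    return h_n ** 2 * np.linalg.norm(dd2, ord=np.inf)
```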
4.3. Decoupled implicit Euler. The local error of the decoupled implicit Euler
formula (2.7) depends on the definition of ~
Yn and the number of relaxation iterations
being performed (cf. (2.9)). Two different modes corresponding to different choices of
~
Yn are considered for different number of relaxation iterations.
If the partitioned system (2.2) is monotonically max-norm stable, then mode 1
discretizations are stable while the mode 2 discretizations cannot be guaranteed to be
stable.
4.3.1. Mode 1, ~
. The local error is expressed as follows:
The e [m]
r -terms of (4.8) are found analogously to the e r -terms of (4.4) from (2.7),
Y [1]
e [1]
@ ~
Y
e [1]
@ ~
Y
@ ~
The relation \(\partial G/\partial Y + \partial G/\partial \tilde Y = \partial F/\partial Y\) is
exploited in the expression for \(e^{[1]}_3\).
Element i of the vector (@ 2 G=@ ~
evaluated as Y 0
The principal local error term h 2
can only be estimated using Y p
(cf.
)k. This inequality may be fulfilled when the
subsystems are loosely coupled, but it is by no means guaranteed by the monotonic
max-norm condition. Error estimation based on Y p
should only be used if the
quality of this estimate is somehow monitored continually.
The e [2]
r -terms of (4.8) are found from Y [2]
like the
analogous terms above:
e [2]
e [2]
@ ~
Y
The relation \(\partial G/\partial Y + \partial G/\partial \tilde Y = \partial F/\partial Y\) is
exploited in the expression for \(e^{[2]}_3\).
For
of the classical implicit Euler formula. Therefore the
principal local error term h 2
2 can be estimated using (4.5) with Y [m]
substituted
for Yn .
The e [m]
1 and e [m]
2 terms are unchanged for further relaxation iterations, m - 3,
and
e [m]
which is identical to e 3 for the classical implicit Euler formula.
4.3.2. Mode 2, ~
. The local error is expressed as follows:
Y [m]
e [m]
e [m]
e [m]
The - e [1]
-terms for -
Y [1]
Y [1]
are identical to e 1 and e 2
(classical implicit Euler), respectively, while
e [1]
@ ~
Y
The computational cost of Y p
n is in general much less than the cost of Y [1]
n in
mode 1, and although they lead to different e 3 -expressions, - e [1]
3 , one value will
not be smaller than the other in general. However, the use of ~
n instead of
~
may compromise the stability of the decoupled implicit Euler formula; cf.
section 2.2.
As in mode 1, - e [m]
are unchanged by further relaxation iterations, m - 2
and - e [m]
2.
The development of the error terms e [m]
3 and -
e [m]
3 is given to illustrate the influence
of increasingly accurate values of Y [m]
. If the decoupling of the original problem into
subsystems is efficient, mode 1 or 2 of the decoupled implicit Euler formula gives
results very close to those of the classical implicit Euler formula and
2:
If the decoupling is poor, the differences in the higher order e [m]
r - or - e [m]
r -terms
will be significant and kY [m]
k.
The error expansions do not include multirate integration. The derivation of
error expansions analogous to the above covering multirate integration could have
been done using the basic definition (2.5), but it would be notationally complicated,
and the results are essentially the same.
4.4. Decoupled BDF2. The decoupled BDF2 (2.11) admits more "natural"
choices for ~
Yn than the decoupled implicit Euler formula. Three modes using increasingly
accurate ~
Yn are presented.
Mode 1 is expected to possess the best stability properties among the three different
modes although few theoretical results are available [5]. Mode 3 with its second
order predictor value for ~
Yn is expected to be the weakest in terms of stability properties
while mode 2 is somewhere in between.
4.4.1. Mode 1, ~
. The errors of the decoupled BDF2 are considered
for
Y [1]
and subsequent relaxation iterations (2.12). The local error is expressed as follows for
constant step size,
The e [m]
r -terms of (4.11) are found analogously to the e r -terms of (4.4):
e [1]
@ ~
Y
e [1]
@ ~
Y
@ ~
Y
@ ~
The full order of accuracy of the BDF2 is not reached so another relaxation
iteration is performed:
Y [2]
The error terms are now e [2]
e [2]
@ ~
Y
If kY (3) (t n )k AE k2(@G=@ ~
then the principal local error term can be
estimated using formula (4.7). The inequality may be fulfilled when the subsystems
are loosely coupled, but it is not guaranteed by the monotonic max-norm stability
condition.
Since the error resulting from (4.10) is O(h 2 ), this step might be replaced by the
decoupled implicit Euler formula mode 1, the result of which is denoted by Y e1[1]
n in
this subsection. The BDF2 relaxation (4.12) is then replaced by
Y [2]
Y [2]
The local error expansion for constant step size is
Y [2]
9 h 3
@ ~
Y
@ ~
Y
The decoupled Euler-BDF2 combination is expected to be the O(h 3 )-error de-coupled
formula with the best stability properties. This expectation is based on the
fact that the decoupled implicit Euler formula mode 1 is stable when the partitioned
system (2.2) is monotonically max-norm stable. Such a result does not exist for the
decoupled BDF2 formula (4.10).
Yet another relaxation iteration from Y [2]
n or -
Y [2]
Y [3]
leads to a local error expansion with the same principal local error term as the classical
BDF2 (4.6), including the case of variable step size. The principal local error term
can thus be estimated using formula (4.7).
The computational cost of using mode 1 is rather high, so therefore the following
modes are of practical interest.
4.4.2. Mode 2, ~
. Another possible choice for ~
Yn is ~
computed
from (4.1). The corresponding local error expansion for constant step size is
Y [1]
9 h 3
@ ~
Y
assuming that The principal local error term
can be estimated using formula (4.7) when the subsystems are loosely coupled.
Another relaxation iteration would lead to the same principal local error as for
the classical BDF2 so that the principal local error term can be estimated using (4.7).
4.4.3. Mode 3, ~
. Using this mode we obtain the same principal local
error term as for the classical BDF2, and therefore we also have the same possibility
of estimating the error using formula (4.7).
The use of ~
n with the decoupled BDF2 may give rise to some concern
about the stability of the discretization.
4.4.4. Summary of decoupled BDF2 formulas. The following table summarises
the local error results of the decoupled BDF2 formula. Any combination of
mode and number of relaxations having the error O(h^3) will have the principal local
error C_3 h^3 Y^{(3)}(t_n) after one or more additional relaxations.

  ~Y_n                1 relaxation    2 relaxations          3 relaxations
  Mode 1: Y_{n-1}     O(h^2)          O(h^3)                 C_3 h^3 Y^{(3)}(t_n)
  Mode 2: Y^p_n       O(h^3)          C_3 h^3 Y^{(3)}(t_n)
  Mode 3: Y^{p2}_n    C_3 h^3 Y^{(3)}(t_n)
5. Integration algorithm.
5.1. General principles. The previous sections have presented the decoupled
(section 2.2) and various iteration techniques for approaching
the classical implicit Euler formula (section 2.3). Asymptotic local error expansions
have been given for different modes of employment and corresponding local error
estimation techniques (section 4.3). Finally, similar results are presented for the
decoupled BDF2 (sections 2.4 and 4.4).
All of these components can be used to construct numerous integration algorithms
where the design decisions may be guided by the properties of the problem to be
solved. The main objective of an integration algorithm is to solve a problem to a
specified accuracy using as few arithmetic operations as possible.
The following discussion only deals with the decoupled implicit Euler formula
since the stability results and error bound only apply to this formula. The BDF2
formula is expected to have analogous properties, and the implementation of the
decoupled BDF2 formula is very similar to the implementation of the decoupled Euler
formula.
5.1.1. Mode. The local error bound (3.1) shows how an accurate value of ~
Yn
reduces the influence of the partitioning on the local error. According to section 4.3,
mode 2 is to be preferred over mode 1 because of the accuracy of ~
which
can be computed at little additional cost. Although kY p
0, the reverse may be true for larger values of hn . Mode 1
should therefore be preferred if
An alternative approach for improving the accuracy of ~
Yn is relaxation (2.9).
After one relaxation iteration, ~
Yn can be considered having the value ~
Relaxation does not increase the mode, and it is attractive in this respect. However,
relaxation is computationally expensive and should only be used when it is strictly
necessary.
5.1.2. Partitioning error. The error due to the partitioning is described by
the matrix (a rj ). An aggressive partitioning with few small subsystems and otherwise
scalar equations may lead to an (a rj ) matrix with relatively large off-diagonal
elements, and it may not be diagonally dominant (2.4). According to (3.1), the error
term including kY (t n
may therefore contribute significantly.
A conservative partitioning will typically have some larger subsystems to accommodate
strong couplings and to assure numerically small off-diagonal elements in
(a rj ). The error bound (3.1) clearly shows how this may lead to a decoupled Euler
formula with essentially the same error properties as the classical formula.
The Gauss-Seidel organization (2.8) takes advantage of a non-symmetric structure
of (a rj ). Assume that the partitioned system (2.2) is reordered symmetrically in
equation number and variable number to make (a rj ) as close to lower triangular as
possible, k(a rj ) r?j k AE k(a rj ) r!j k. Then (3.1) is modified as follows,
j!r
a n
j?r
a n
and the larger lower triangular a n
rj -values are multiplied with the smaller
errors while the smaller upper triangular a n
rj -values are multiplied with the
larger errors.
5.1.3. Stability. Stability is assured by the matrix (a rj ) being diagonally dominant
(2.4) and by ~
Yn being a convex combination of previous solution values. In
mode 1, where ~
the latter condition is fulfilled. The diagonal dominance
condition may be fulfilled by a sufficiently conservative partitioning.
However, the monotonic max-norm stability condition in section 2.1 is a sufficient
condition but not a necessary condition. Therefore, mode 2 may be used without
encountering stability problems and also used when the diagonal dominance condition
(2.4) is not fulfilled. A more conservative partitioning where (2.4) is fulfilled or closer
to being fulfilled will not only improve stability but most likely also accuracy; cf. the
previous section on partitioning error. However, a conservative partitioning leads to
a more computationally expensive discretization than a more aggressive partitioning.
Relaxation iterations in mode 1 do not compromise stability, and furthermore,
the monotonic max-norm stability guarantees convergence of the process.
5.2. Implementation details. The algorithm will use the decoupled implicit
Euler formula or BDF2 with variable step size and choose between two different
partitionings: an aggressive partitioning and a conservative partitioning.
The aggressive partitioning uses the smallest subsystems possible in order to minimize the computational cost. The conservative partitioning uses somewhat larger subsystems in order to maintain accuracy during transient solution phases.
The decoupled implicit Euler formula is used either in mode 2 (~Y_n = Y_n^p) with one relaxation iteration (Y_n := Ȳ_n^[1]) or in mode 1 (~Y_n = Y_{n-1}) with two relaxation iterations (Y_n := Y_n^[2]).
The quality of the partitioning is monitored using (4.9). The classical Euler solution should not be computed because of the incurred cost. In mode 1, Y_n in the partitioning criterion above is therefore replaced by Y_n^[2], for some σ < 1 in the criterion. In mode 2, Y_n^[1] and Y_n^[2] are replaced by Ȳ_n^[1] and Ȳ_n^[2]. When mode 1 is used with two relaxation iterations, Y_n^[2] is always available, and the cost involved in monitoring the partitioning is negligible.
Mode 2 is used with just one relaxation iteration. Therefore, the cost of a step where the partitioning is monitored is double the cost of an ordinary mode 2 step, since Ȳ_n^[2] is required. When Ȳ_n^[2] is available in mode 2, it seems obvious to return it as the result of the step, Y_n := Ȳ_n^[2], since Ȳ_n^[2] supposedly is more accurate than Ȳ_n^[1]. However, Y_n := Ȳ_n^[2] is only returned occasionally, with Y_n := Ȳ_n^[1] being the common result, and in the following example this generates oscillations in the local error estimate while failing to improve accuracy. Therefore, Y_n := Ȳ_n^[1] is always returned from mode 2.
The integration algorithm can now be outlined as follows.
Initialisation, step 1:
• choose the conservative partitioning and initialize the monitoring counter N_monitor
• no error estimation
Step n:
• compute the predictor Y_n^p from (4.1) and compute Ȳ_n^[1] from the decoupled backward Euler formula using mode 2 (Y_n^[2] computed by mode 1 is only used when the predicted solution Y_n^p is inaccurate, as explained above)
• if the monitoring counter indicates that this step is to be monitored then Monitor partitioning
• estimate the principal local error term using (4.5), ε_est
• new step size, h_{n+1} := h_n · (1 + ε_tol/ε_est)/2
• if the step size is decreasing and the aggressive partitioning is in use then decrease N_monitor
The step size formula averages 1 and ε_tol/ε_est to reduce the tendency of oscillations in step size selection.
If the step size is decreasing, the solution may be entering a transient phase which
requires the conservative partitioning. If the aggressive partitioning is being used, the
number of steps until the next Monitor partitioning is reduced by decreasing N monitor .
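A minimal sketch of the step-size update described above follows; the averaged factor (1 + ε_tol/ε_est)/2 and the minimum step size argument are reconstructions of the averaging idea in the text, not the paper's exact formula.

def next_step_size(h_n, eps_est, eps_tol, h_min=90.0, average=True):
    # Average 1 and eps_tol/eps_est to damp oscillations in the step-size
    # selection; with average=False the raw ratio is used instead (the BDF2
    # variant of section 5.2 averages only for increasing step sizes).
    ratio = eps_tol / max(eps_est, 1e-300)
    factor = 0.5 * (1.0 + ratio) if average else ratio
    return max(h_min, h_n * factor)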
The partitioning is chosen using the following conditions (a small code sketch follows the next paragraph).
Monitor partitioning
• in mode 2, relax the decoupled implicit Euler formula an extra time to compute Ȳ_n^[2]; in mode 1, use Y_n^[1] and Y_n^[2] in the following test
• if the test based on these two iterates, criterion (4.9), is fulfilled then choose the aggressive partitioning, else choose the conservative partitioning
• if this results in a shift from the conservative to the aggressive partitioning then monitor the first step taken after the switch
A switch from aggressive to conservative partitioning reflects that the aggressive partitioning
is not satisfactory. A switch in the opposite direction is tentative since
it is only known that the conservative partitioning is satisfactory so the aggressive
partitioning may also be satisfactory. Therefore the first step after a switch in this
direction is monitored.
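The monitoring decision can be summarized as below. The norm comparison with a safety factor sigma < 1 is only an assumed placeholder for criterion (4.9), which is not reproduced in this text.

import numpy as np

def monitor_partitioning(y_one_relax, y_two_relax, reference_scale, sigma, current_choice):
    # Accept the aggressive partitioning when the one- and two-relaxation
    # results agree to within a fraction sigma of a reference error measure
    # (stand-in for criterion (4.9)).
    agree = np.max(np.abs(np.asarray(y_two_relax) - np.asarray(y_one_relax))) <= sigma * reference_scale
    choice = 'aggressive' if agree else 'conservative'
    # a switch from conservative to aggressive is tentative, so the first step
    # taken after such a switch is monitored again
    monitor_next_step = (current_choice == 'conservative' and choice == 'aggressive')
    return choice, monitor_next_step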
The algorithm was developed and presented for the decoupled implicit Euler formula. The modifications, beyond the obvious, needed to accommodate the decoupled BDF2 are small. The Euler formula is replaced by the decoupled BDF2 (2.11) in mode 3, and the first-order predictor is replaced by the second-order predictor (4.2), except in the first integration step, which is still taken by the Euler formula, and the second step, which uses the decoupled BDF2 in mode 2.
The principal local error is estimated using (4.7), and a normalization by β of the local error estimate is introduced [6, Section III.2]. The step size selection scheme is modified slightly so that the averaging of 1 and ε_tol/ε_est is only used for increasing step sizes. The step size averaging is useful for increasing step sizes, to reduce the risk of instability, while it would prevent a very rapid reduction in step size at the transients.
6. Examples - Chemical reaction kinetics. The example problem is the
mathematical model of the chemical reactions included in a three-dimensional trans-
port-chemistry model of air pollution. The air pollution model is a system of partial
differential equations where each equation models transport, deposition, emission,
and chemical reactions of a pollutant. By the use of operator splitting, a number of
sub-models are obtained, including the following system of nonlinear ODEs in production-loss form: dY_i/dt = P_i(t,Y) - L_i(t,Y)·Y_i, i = 1, ..., 32.
The nonlinearities are mainly products, i.e., P_i and L_i are typically sums of terms of the form c_ilm(t)·Y_l·Y_m and d_il(t)·Y_l, respectively, for l, m ≠ i.
The chemistry model is replicated for each node of the spatial discretization of the
transport part. The numerical solution of a system of 32 ODEs is not very challenging
as such, but the replication results in hundreds of thousands or millions of equations,
and a very efficient numerical solution is crucial. The problem and a selection of
solution techniques employed so far are described in [9] and [10].
The system of ODEs is very stiff, with the real parts of the eigenvalues of the Jacobian along the solution ranging from 0 to about -8, and step sizes in the range 100 to 1000. Therefore implicit integration schemes are required, and the resulting
nonlinear algebraic problem is the main computational task involved in advancing the
numerical solution one time step. A method based on partitioning the system called
the Euler Backward Iterative method is described in [9]. It can be characterized as a
discretization by the implicit Euler formula with block Gauss-Seidel iteration for the
solution of the algebraic equations of the discretization.
An ideal partitioning would involve subsystems of size one, i.e., s_r = 1 for all r (cf. (2.2)). For this chemical reaction kinetics problem, it would be particularly
advantageous since Y i is only included in L i (t; Y ) in very few equations and never
in P i (t; Y ) (by definition). Therefore L i (t; Y )Y i is in general linear in Y i , and the
solution of a scalar equation by an implicit integration formula can be performed
without iterations.
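Because L_i(t,Y)·Y_i is linear in Y_i, a scalar equation in production-loss form can be advanced by the implicit Euler formula in closed form, without any Newton iteration. The helper below is a sketch under that assumption, with P_i and L_i already evaluated at the new time level using the current (Gauss-Seidel) values of the other components.

def backward_euler_scalar(y_prev, P_i, L_i, h):
    # Solves y_new = y_prev + h*(P_i - L_i*y_new) for y_new; a closed form
    # exists because the loss term L_i*y_new is linear in y_new.
    return (y_prev + h * P_i) / (1.0 + h * L_i)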
A partitioning into all scalar equations is not viable for this problem. However, the paper [9] identifies a total of 12 equations which should be solved in blocks of 4, 4, 2 and 2 equations. With the numbering of the chemical species used in Table B.1 in [9], the block Gauss-Seidel iteration proceeds as follows, where the parentheses denote the blocks of equations: {(1, 2, 3, 12), (4, 5, 19, 21), 6, 7, (8, 9), 10, 14, (15, 16), remaining scalar equations}. The partitioning in [9] specified above is used as the basis of the partitioning used for the results in [11].
In [9] the Euler Backward Iterative formula is relaxed until convergence to obtain
the equivalent of the classical implicit Euler formula. The results in [11] are obtained
from the decoupled implicit Euler formula mode 2 with one relaxation. Timing results
in [11] show good efficiency for this approach.
The aggressive partitioning and ordering used in this example for both decoupled implicit Euler and BDF2 is described by {12, (4, 5), 20 scalar equations, (1, 2, 15, 16), remaining scalar equations}. The subsystems of two and four equations are enclosed in parentheses, and the rest of the equations are treated as scalar equations.
The conservative partitioning and ordering is specified by {12, (1, 2, 15, 16, 4, 5, 8, 10, 29, 30), remaining scalar equations}. Except for the block of 10 equations, the partitioning is into scalar equations.
A partitioning is considered more aggressive than another one if it has fewer equations appearing in blocks and/or the blocks are smaller. Concerning the two partitionings presented here, it is obvious which partitioning is the more aggressive, since {(4, 5), (1, 2, 15, 16)} ⊂ {(1, 2, 15, 16, 4, 5, 8, 10, 29, 30)}. The parentheses are only retained to facilitate reference to the partitioning and ordering specifications.
The aggressive partitioning is clearly more aggressive than the partitioning presented in [9], since {(4, 5), (1, 2, 15, 16)} ⊂ {(1, 2, 3, 12), (4, 5, 19, 21), (8, 9), (15, 16)}, while the relation to the conservative partitioning is not obvious, since one has the larger block while the other has more equations appearing in smaller blocks.
The partitioning used in [9] is presumably based on knowledge of the chemical re-
actions, while the partitionings used here are obtained with a semiautomatic method
described in paper [12]. None of these partitionings are monotonically max-norm
stable although they come close, but despite this fact, no instability problems are en-
countered. This should not be too surprising since the monotonic max-norm stability
is a sufficient but not a necessary condition for the stability of the decoupled implicit
Euler formula.
The example used in this paper differs slightly from the example in [9], but the differences appear only in the equations being treated as scalar in all the considered partitionings.
6.1. Decoupled implicit Euler. Figure 6.1 shows the errors obtained using the
classical implicit Euler formula (solid line) and the decoupled implicit Euler formula
(dash-dot line) implemented as described in section 5. The two different partitionings
mentioned above have been used adaptively. The step size is chosen by the decoupled
Fig. 6.1. Integration error for the classical implicit Euler (solid line) and decoupled implicit Euler (dash-dot line); global integration error versus time.
implicit Euler algorithm to obtain a local error estimate of 10^{-3}, or down to a minimum step size of 90, and the classical implicit Euler formula is applied with the same step size selection. The discrepancy between the errors is seen to be insignificant.
A reference solution is computed using a variable step size, variable order implementation of the backward differentiation formulas [13] with a bound on the relative local error estimate of 10^{-6}. The errors presented in the figures are the maximum relative deviations from the reference solution measured componentwise (the values of the components vary widely in magnitude).
The time axis is in seconds, and the initial time corresponds to 6 a.m. The
model includes the influence of the sun on some of the chemical reactions, and this
leads to very distinct transients in the solution at sunrise and sunset. The minimum
integration time step of 90 seconds is too large a step to integrate the transients
accurately, and large spikes in the global integration error are seen around 7 p.m., 5
a.m. (t=105,000) next day, and 7 p.m. (t=155,000).
The transient behaviour at sunrise and sunset can to some extent be considered
a modelling artifact. Since the large local contribution to the global error does not
influence the global error at large, it is essentially ignored by introducing the minimum
time step. The observed behaviour of the global error is not uncommon for stiff
systems of ODEs.
Figure 6.2 shows the estimated principal local error. The step size is adjusted to keep the estimate at 10^{-3}, and the algorithm is quite successful except at the transients, where the minimum step size of 90 is used.
Figure 6.3 shows the resulting step size selection (upper graph) and the selection
of partitioning (lower graph). The low value indicates the aggressive partitioning, and
the high value indicates the conservative partitioning. The values on the ordinate axis
only pertain to the step size. The total number of steps is 737, and only 182 (25%)
steps are using the conservative partitioning.
Fig. 6.2. Estimated principal local error for decoupled implicit Euler (estimated local error versus time).
Fig. 6.3. Integration step size selected by the decoupled implicit Euler formula (upper graph) and selection of partitioning (lower graph), versus time.
During integration using the conservative partitioning, single steps that use the
aggressive partitioning can be observed. At the first step after a switch to the aggressive
partitioning, the quality is monitored, and if it is not satisfactory, the Monitor
partitioning algorithm immediately returns to the conservative partitioning.
This example demonstrates the application of an implementation of the decoupled
Euler formula in solving a non-trivial practical problem. The computational
cost is substantially less than that for the classical implicit Euler formula, and there
is no trace of instability in the solution computed by the decoupled implementation.
Fig. 6.4. Integration error for the classical BDF2 (solid line) and decoupled BDF2 (dash-dot line); global integration error versus time.
Fig. 6.5. Integration step size selected by the decoupled BDF2 (upper graph) and selection of partitioning (lower graph), versus time.
6.2. Decoupled BDF2. The numerical integration of the previous section is repeated using the decoupled BDF2. The step size is controlled to keep the estimated local normalized error ε_est at 10^{-3}, as before. The resulting global integration error
is shown in Figure 6.4 (dash-dot line) together with the corresponding error for the
classical BDF2 using the same step size selection (solid curve). Some deviation is
noticed, but the error of the decoupled BDF2 is comparable to the error of the classical
BDF2 formula. Comparing with Figure 6.1, it is seen that the global error of the
decoupled BDF2 formula is significantly smaller than the global error of the decoupled
implicit Euler formula. The difference originates mainly from the transients, including
the initial transient, where the minimum stepsize of 90 seconds is used.
Finally, Figure 6.5 shows step size and partitioning selection similar to Figure
6.3. The maximum step size of the decoupled BDF2 is greater than 1700 which is
three times the maximum step size of the Euler formula. The necessary number of
integration steps is 311 which is 42% of the number of steps needed by the Euler
formula. The conservative partitioning is only used for 73 steps out of 311 (23%).
There is a substantial pay-off to using the decoupled BDF2 instead of the decoupled
Euler formula, since the amount of work per step is essentially the same for the two
decoupled formulas.
The performance of the decoupled BDF2 algorithm in mode 3 is very convincing,
and there is no trace of instability, although the use of the second order polynomial
predictor for computing ~
Yn is somewhat risky from a stability point of view.
Acknowledgements. The comments by the editor and the anonymous referees have helped in choosing asymptotically correct error estimates and in improving the presentation.
--R
The waveform relaxation method for time-domain analysis of large scale integrated circuits
Methods for parallel integration of stiff systems of ODEs
Stability of backward Euler multirate methods and convergence of waveform relaxation
A connection between the convergence properties of waveform relaxation and the A-stability of multirate integration methods
Solving Ordinary Differential Equations I
The control of order and steplength for backward differentiation methods
Estimation of errors and derivatives in ordinary differential equations
A photochemical kinetics mechanism for urban and regional computer modelling
Exploiting the natural partitioning in the numerical solution of ODE systems arising in atmospheric chemistry
Partitioning techniques and stability of decoupled integration formulas for ODEs
INTGR for the Integration of Stiff Systems of Ordinary Differential Equations
--TR | parallel numerical integration;absolute stability;multirate formulas;backward differentiation formulas;euler's implicit formula;partitioned systems |
363956 | Theory of dependence values. | A new model to evaluate dependencies in data mining problems is presented and discussed. The well-known concept of the association rule is replaced by the new definition of dependence value, which is a single real number uniquely associated with a given itemset. Knowledge of dependence values is sufficient to describe all the dependencies characterizing a given data mining problem. The dependence value of an itemset is the difference between the occurrence probability of the itemset and a corresponding maximum independence estimate. This can be determined as a function of joint probabilities of the subsets of the itemset being considered by maximizing a suitable entropy function. So it is possible to separate in an itemset of cardinaltiy k the dependence inherited from its subsets of cardinality (k 1) and the specific inherent dependence of that itemset. The absolute value of the difference between the probability p(i) of the event i that indicates the prescence of the itemset {a,b,... } and its maximum independence estimate is constant for any combination of values of Q &angl0; a,b,... &angr0; Q. In1p addition, the Boolean function specifying the combination of values for which the dependence is positive is a parity function. So the determination of such combinations is immediate. The model appears to be simple and powerful. | INTRODUCTION
A well known problem in data mining is the search for association rules, a powerful
and intuitive conceptual tool to represent the phenomena that are recurrent in a
data set. A number of interesting solutions of that problem has been proposed in
the last five years together with as many powerful algorithms [Agrawal et al. 1993b;
Agrawal et al. 1995; Agrawal and Srikant 1994; A.Savasere et al. 1995; Han and
Fu 1995; Park et al. 1995; H.Toivonen 1996; Brin et al. 1997; I.Lin and M.Kedem
1998]. They are used in many application fields, such as analysis of basket data of
supermarkets, failures in telecommunication networks, medical test results, lexical
features of texts, and so on.
An association rule is an expression of the form X ) Y, where X and Y are
sets of items which are often found together in a given collection of data. For
example, the expression f milk, coffee g ) f bread, sugar g might mean that
a customer purchasing milk and coffee is likely to also purchase bread and sugar.
The validity of an association rule has been based on two measures. The first
measure, called support, is the percentage of transactions of the database containing
both X and Y. The second one, called confidence, is the probability that, if X is
purchased, also Y is purchased. In the case of the previous example, a value of
2% of support and a value of 15% of confidence would mean that 2% of all the
customers buy milk, coffee, bread and sugar, and that 15% of the customers that
buy milk and coffee also buy bread and sugar.
Recently, Silverstein, Brin and Motwani [Silverstein et al. 1998] have presented
a critique of the concept of association rule and the related framework support-
confidence. They have observed that the association rule model is well-suited to the
market basket problem, but that it does not address other data mining problems. In
place of association rules and the support-confidence framework, Silverstein, Brin
and Motwani propose a statistical approach based on the chi-squared measure and
a new model of rules, called dependence rules.
This work can be viewed as a continuation of this line of research, even if the model and the tools proposed here are rather different; in particular, the concept of dependence rules is replaced by the concept of dependence values.
This paper is organized as follows. Section 2 contains a summary of the main
results of earlier work, the emphasis being placed on the framework support-
confidence, the critique of this model by Silverstein, Brin and Motwani and the
concept of dependence rules in opposition to the one of association rules. Section 3
contains the definition of dependence value and other basic definitions of the model
here proposed as well as the theorems following from these definitions. These theorems
suggest an easy and quick way to determine the dependence values, which
is described in Section 5, whereas Section 4 discusses the use of the well known
concept of entropy as a tool to evaluate the relevance of a dependence rule. Finally,
Section 6 draws the conclusions.
2. ASSOCIATION RULES AND DEPENDENCE RULES
As mentioned, this Section contains a summary of earlier work on association rules.
For ease of reference, the notation used by Silverstein, Brin and Motwani in their
paper will be adopted here.
2.1 Association Rules
Let I = {i_1, i_2, ..., i_k} be a set of k elements, called items. A basket of items is any subset of I. For example, in the market basket application, I = {milk, coffee, bread, sugar, tea, ...} contains all the items stocked by a supermarket, and a basket of items such as {milk, coffee, bread, sugar} is the set of purchases from
one register transaction. As a second example, in the document basket application,
I is the set of all the dictionary words and each basket is the set of all the words
used in a given document.
An association rule X ⇒ Y, where X and Y are disjoint subsets of I, was defined by Agrawal, Imielinski and Swami [Agrawal et al. 1993b] as follows: X ⇒ Y holds if X ∪ Y is a subset of at least s% (the support) of all the baskets, and, of all the baskets containing all the items of X, at least c% (the confidence) contain all the items of Y.
The concept of association rules and the related support-confidence framework
are very powerful and useful, but they suffer from some limitation, especially when
the absence of items is considered. An interesting example proposed by Silverstein,
Brin and Motwani is the following.
Consider the purchase of tea (t) and coffee (c) in a grocery store and assume the
following probabilities: P(t) = 0.25, P(c) = 0.9, P(c, t) = 0.2, P(c, ¬t) = 0.7, P(¬c, t) = 0.05, P(¬c, ¬t) = 0.05, where ¬c and ¬t denote the events "coffee not purchased" and "tea not purchased", respectively.
According to the preceding definitions, the potential rule tea ⇒ coffee has a support equal to 20% and a confidence equal to 80%, and therefore can be considered as a valid association rule. However, a deeper analysis shows that a customer buying tea is less likely to also buy coffee than a customer not buying tea (80% against more than 90%). Based on support and confidence we would write tea ⇒ coffee but, on the contrary, the strongest positive dependence is between the absence of coffee and the presence of tea.
2.2 Dependence Rules
Silverstein, Brin and Motwani propose a view of basket data in terms of boolean
indicator variables, as follows.
Let I = {I_1, I_2, ..., I_k} be a set of k boolean variables called attributes. A set of baskets B = {b_1, ..., b_n} is a collection of n k-tuples from {TRUE, FALSE}^k which represent a collection of value assignments to the k attributes. Assigning the value TRUE to an attribute variable I_j in a basket represents the presence of item i_j in the basket.
The event a denotes A=TRUE, or equivalently, the presence of the corresponding item a in a basket. The complementary event ¬a denotes A=FALSE, or, the absence of item a from a basket.
The probability that item a appears in a random basket will be denoted by P(a) = P(A=TRUE). Likewise, P(a, ¬b) = P(A=TRUE ∧ B=FALSE) will be the probability that item a is present and item b is absent.
Silverstein, Brin and Motwani have proposed the following definitions of independence
and dependence of events and variables.
Definition 1. Two events x and y are independent if P(x ∧ y) = P(x)·P(y).
Definition 2. Two variables A and B are independent if P(A=v_a ∧ B=v_b) = P(A=v_a)·P(B=v_b) for all possible values ⟨v_a, v_b⟩, with v_a, v_b ∈ {TRUE, FALSE}.
Definition 3. Events, or variables, that are not independent are dependent.
Definition 4. Let I be a set of attribute variables. We say that the set I is a
dependence rule if I is dependent.
The following Theorem 1 is based on the preceding Definitions 1-4.
Theorem 1. If a set of variables I is dependent, so is every superset of I.
Theorem 1 is important in the dependence rule model, because it makes it possible
to restrict the attention to the set of minimally dependent itemsets, where a
minimally dependent itemset I is such if it is dependent, but none of its subsets is
dependent.
Silverstein, Brin and Motwani have proposed using the χ² test for independence to identify dependence rules. The χ² statistic is upward-closed with respect to the lattice of all possible itemsets, as are dependence rules. In other terms, if a set I of items is deemed dependent at significance level α, then all supersets of I are also dependent at the same significance level α and, therefore, they do not need to be examined for dependence or independence.
3. DEPENDENCE VALUES
In this Section the new model based on the concept of dependence values is presented
and discussed. A Theorem proved in this section will provide the basic tools
to evaluate the dependence rules of a certain itemset. To simplify the presentation,
we shall proceed from the simplest cases towards the most complex ones, in
the order of increasing cardinality of itemsets. In other terms, we shall discuss
dependence rules first for pairs of items, then for triplets of items, and finally for
m-plets of arbitrary cardinality m.
3.1 Dependence Rules for Pairs of Items
Assume we know the occurrence probabilities P(a), P(b), P(c), ... of all the items.
The evaluation of such probabilities is the first problem of data mining, but it is
seldom considered because of its simplicity. Generally, the maximum likelihood
estimate is adopted according which P(a) is assumed equal to O(a)/n, where O(a)
Theory of Dependence Values \Delta 5
is the number of baskets containing a and n is the total number of baskets. However,
more complex computations based on Bayes's Theorem, might also be used.
In the absence of specific determinations, if we know only P(a) and P(b), we can formulate the following conjectures: P(a,b) = P(a)·P(b), P(a,¬b) = P(a)·P(¬b), P(¬a,b) = P(¬a)·P(b), P(¬a,¬b) = P(¬a)·P(¬b).
These conjectures are equivalent to the assumption that variables A and B are independent.
Assume now that the exact determination of P(a,b), evaluated as O(a,b)/n (where O(a,b) is the number of baskets containing both a and b), is different from the conjecture P(a)·P(b).
It is easy to prove the following Theorem.
Theorem 2 (Unicity of the value for second-order probabilities). If P(A=TRUE) and P(B=TRUE) are known, the determination of a single value P(a,b) is sufficient to evaluate all the second-order joint probabilities P(a,¬b), P(¬a,b), P(¬a,¬b).
Proof. The proof is contained in the following simple relationships: P(a,¬b) = P(a) - P(a,b). Analogously, P(¬a,b) = P(b) - P(a,b), and P(¬a,¬b) = 1 - P(a) - P(b) + P(a,b).
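A one-function sketch of Theorem 2: given P(a), P(b) and P(a,b), the whole 2×2 table follows. The dictionary keys are just illustrative labels.

def second_order_table(p_a, p_b, p_ab):
    # Completes the 2x2 joint distribution from P(a), P(b) and P(a,b) alone.
    return {('a', 'b'): p_ab,
            ('a', 'not b'): p_a - p_ab,
            ('not a', 'b'): p_b - p_ab,
            ('not a', 'not b'): 1.0 - p_a - p_b + p_ab}

For the coffee/tea data of Subsection 2.1, second_order_table(0.9, 0.25, 0.2) returns the four cell probabilities 0.2, 0.7, 0.05 and 0.05.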
The fact that a single datum \Delta contains the whole information pertaining to the joint probabilities of the pair {A, B} suggests the following Definitions.
Definition 5. (Dependence value of a pair) The dependence value of the pair {A,B} will be defined as the difference \Delta(A,B) = P(a,b) - P(a)·P(b).
Definition 6. (Dependence state of a pair) If the absolute value of \Delta(A,B) exceeds a given threshold th, A and B are said to be "dependent". If \Delta > th, the dependence is defined as "positive"; otherwise, it is defined as "negative". The notations D_2(A,B) ≫ 0, D_2(A,B) ≪ 0 and D_2(A,B) ≈ 0 will be adopted to indicate a positive dependence, a negative one or no dependence, respectively.
Figure 1 shows that the difference between the joint probability P(a*, b*) (with a* = a or ¬a and b* = b or ¬b) and the corresponding "a-priori" estimate P(a*)·P(b*) always has the same absolute value but a different sign in the various cells of the Karnaugh map of variables A and B. To represent this fact, we need another definition and a new Theorem.
Fig. 1. The joint probabilities P(a*, b*) in the cells of the Karnaugh map of {A, B}.
Definition 7. (Dependence function of two variables) The boolean function of variables A and B whose minterms correspond to the values ⟨v_A, v_B⟩ for which P(A=v_A ∧ B=v_B) > P(A=v_A)·P(B=v_B) will be called the dependence function of variables A and B.
Theorem 3 (Parity of two-variable dependence functions). If D_2(A,B) ≫ 0, the dependence function of variables A and B is A·B + ¬A·¬B, which is the parity function with parity odd (Figure 2). If D_2(A,B) ≪ 0, the dependence function of variables A and B is A·¬B + ¬A·B, which is the parity function with parity even (Figure 3).
Fig. 2. The dependence function of variables A and B when D_2(A,B) ≫ 0.
Fig. 3. The dependence function of variables A and B when D_2(A,B) ≪ 0.
As a simple example, consider again the case of purchases of coffee (c) and tea (t), which was discussed in [Silverstein et al. 1998] to show the weakness of the traditional support-confidence framework (Subsection 2.1). If P(c) = 0.9, P(t) = 0.25 and P(c,t) = 0.2, then P(c)·P(t) = 0.225 and \Delta(C,T) = 0.2 - 0.225 = -0.025, which shows that the dependence is negative (D_2(C,T) ≪ 0).
One might wonder whether the usual notation X ⇒ Y adopted in the well known papers on data mining still makes sense and how a negative dependence such as this one should be indicated. The answer is simple: \Delta(C,T) and D_2(C,T) ≪ 0 contain all the information on the second-order dependencies. However, one might argue that \Delta is more significant for the events having a lower probability. In the case of coffee and tea, P(¬c, t) = 0.05 is the lowest probability in the cells of the dependence function; therefore, it is not completely unreasonable to write ¬C ⇒ T.
3.2 Dependence Rules for Triplets of Items
This Subsection is devoted to the generalization of Definitions and Theorems presented
in previous Subsection 3.1 to the case of triplets of items. As we shall see,
such generalization implies some new problems.
Consider the case of a triplet of boolean variables A, B and C, and assume we know the first- and second-order joint probabilities such as P(a), P(b), P(c), P(a,b), P(a,c), P(b,c) and so on.
We are interested in determining the third-order joint probabilities of triplets such as P(a,b,c), P(a,b,¬c), and so on, from which the third-order conditional probabilities such as P(c | a,b) also follow directly.
The following Theorem shows that the knowledge of a single third-order probability
is sufficient to determine all the third-order probabilities.
Theorem 4 (Unicity of the value for third-order probabilities). All
the third-order joint probabilities can be calculated as functions of first- and second-order
joint probabilities and a single datum such as a third-order joint probability.
Proof. Assume, for example, we know P(a,b,c). The other joint probabilities can be determined as follows: P(a,b,¬c) = P(a,b) - P(a,b,c); P(a,¬b,c) = P(a,c) - P(a,b,c); P(¬a,b,c) = P(b,c) - P(a,b,c); P(a,¬b,¬c) = P(a) - P(a,b) - P(a,c) + P(a,b,c); and so on for the remaining cells.
Theorem 4 may be viewed as an extension of Theorem 2 on the unicity of the value for second-order probabilities shown in previous Subsection 3.1. However, Theorem 2 makes reference to the differences between the determined P(a*, b*) and the estimated P(a*)·P(b*), which correspond to the conjecture of independence of a and b. In the case of triplets, the condition of independence is more difficult to identify. Our proposal is contained in the following considerations.
The relationships written in the proof of Theorem 4 can also be formulated in the following form, with x = P(a,b,c): P(a,b,¬c) = P(a,b) - x, P(a,¬b,c) = P(a,c) - x, P(¬a,b,c) = P(b,c) - x, P(a,¬b,¬c) = P(a) - P(a,b) - P(a,c) + x, and so on. They express the values of all the third-order joint probabilities as functions of the known second-order probabilities P(a,b), P(a,c), P(b,c) and the unknown third-order probability x.
Now consider the joint entropy E(x) = - Σ P(a*,b*,c*)·log P(a*,b*,c*), where the sum extends over the eight combinations of values and each probability is the function of x given above. This function of the unknown x is the average amount of information needed to know a, b and c. The maximum value of E(x) is reached when a, b and c are at the maximum level of independency compatible with the dependencies imposed by the second-order joint probabilities. This consideration explains the following Definition 8.
Definition 8. (Maximum independence estimate for third-order probabilities) If the first- and second-order joint probabilities are known but no information is available on the third-order probabilities, the conjecture x on P(a,b,c) maximizing the joint entropy of A, B, C,
E = - Σ P(a,b,c)·log P(a,b,c)
(where the sum is to be extended to all the combinations of values of a, b and c), will be defined as the maximum independence estimate. Such maximum independence estimate will be denoted with the symbol P(a,b,c)_MI.
Analogously, for any combination ⟨a*, b*, c*⟩ of values of a, b, c, we shall define P(a*,b*,c*)_MI as the value taken by P(a*,b*,c*) when E(x) is maximum.
Notice that, in virtue of Theorem 4, for any combination of values ⟨a*, b*, c*⟩ of a, b, c, P(a*,b*,c*)_MI can be computed in terms of the second-order joint probabilities and P(a,b,c)_MI by applying the relationships written above with x = P(a,b,c)_MI.
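The following sketch makes Definition 8 operational for a triplet: the eight cell probabilities are written as functions of the unknown x = P(a,b,c) through the relationships above, and x is located by a simple grid search over the feasible interval. The function names and the numerical method are illustrative choices, since the paper leaves the maximization technique open (see also Section 5).

import math

def third_order_cells(x, p):
    # p holds P(a), P(b), P(c), P(a,b), P(a,c), P(b,c); x is the conjecture
    # for P(a,b,c).  The eight cells follow the relationships of Theorem 4.
    a, b, c, ab, ac, bc = p['a'], p['b'], p['c'], p['ab'], p['ac'], p['bc']
    return [x, ab - x, ac - x, bc - x,
            a - ab - ac + x, b - ab - bc + x, c - ac - bc + x,
            1.0 - a - b - c + ab + ac + bc - x]

def joint_entropy(x, p):
    cells = third_order_cells(x, p)
    if min(cells) < 0.0:
        return float('-inf')            # x outside the feasible range
    return -sum(q * math.log(q) for q in cells if q > 0.0)

def max_independence_estimate(p, steps=20000):
    # Grid search for the x maximizing E(x); E is concave in x, so a
    # golden-section or Newton search would converge much faster.
    hi = min(p['ab'], p['ac'], p['bc'])
    return max((i * hi / steps for i in range(steps + 1)),
               key=lambda x: joint_entropy(x, p))

For the coffee, tea and doughnuts data of Subsection 3.3 this search should return a value of P(c,t,d)_MI close to the 0.078 used in the text.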
The meaning of Definition 8 is rather important for the model presented in this
paper. If D_2(A,B) or D_2(A,C) or D_2(B,C) are ≫ 0 or ≪ 0, then A, B, C are not independent, but they could own only the dependence inherited from the second-order dependencies, or their dependence might be stronger. In the former case, P(a*,b*,c*) is equal to P(a*,b*,c*)_MI and there is no real third-order dependence. In the latter, there is evidence of a third-order dependence whose value and sign depend on the differences between P(a*,b*,c*) and P(a*,b*,c*)_MI, as shown in the following analysis.
Notice that in the case of pairs of items the Definition 8 of maximum independence coincides with the better known definitions of independence cited in previous Subsection 3.1. Indeed, in this case, as is shown in Figure 1, the joint entropy of A and B is E(x) = - Σ P(a*,b*)·log P(a*,b*) with x = P(a,b), and it is easy to prove that the function E has a maximum for x = P(a)·P(b). By applying the same algorithm, it is easy to prove the analogous results P(a,¬b)_MI = P(a)·P(¬b), P(¬a,b)_MI = P(¬a)·P(b) and P(¬a,¬b)_MI = P(¬a)·P(¬b).
Unfortunately, in the case of triplets and k-plets the determination of the maximum independence estimates is not so simple. However, as will be seen later, it is not necessary to know all the estimates P(i_1*, i_2*, ..., i_k*)_MI: one of them is sufficient to determine all the other ones. Besides, the numerical evaluation of this estimate can be performed very quickly by applying the method which will be described in Section 5.
The definition of maximum independence estimate is applied in the following
Theorem, which can be viewed as a specification of Theorem 4 on the unicity of
value for third-order probabilities and as the natural extension of the Theorem 2
proved in previous Subsection 3.1.
Theorem 5. If the first- and second-order joint probabilities and the third-order
maximum independence estimate are known, a single number \Delta defined as the difference
P(a,b,c) - P(a,b,c)MI is sufficient to specify all the third-order joint
probabilities.
Proof. Theorem 5 is a direct consequence of Theorem 4 on the unicity of the value. Indeed, from the knowledge of the first- and second-order joint probabilities we can obtain P(a,b,c)_MI, and from this and \Delta we obtain P(a,b,c) = P(a,b,c)_MI + \Delta; according to Theorem 4, the knowledge of P(a,b,c) is sufficient to determine all the third-order joint probabilities.
In virtue of Theorem 5, we can state the following Definitions, which are an extension of Definitions 5 and 6.
Definition 9. (Dependence value of a triplet) The dependence value of the triplet {A,B,C} will be defined as the difference \Delta(A,B,C) = P(a,b,c) - P(a,b,c)_MI.
Definition 10. (Dependence state of a triplet) If the absolute value of the dependence value of {A,B,C} exceeds a given threshold th, A, B and C are defined as "connected by a third-order dependence". If \Delta > th, the dependence is defined as "positive"; otherwise, it is defined as "negative". The notations D_3(A,B,C) ≫ 0, D_3(A,B,C) ≪ 0 and D_3(A,B,C) ≈ 0 will be used to indicate the existence or not of a third-order dependence and its sign.
Notice that in the model proposed by Silverstein, Brin and Motwani, the existence
of one or more second-order dependencies implies the existence of the third-order
dependence, whereas in our model D 2 (A,B), D 2 (A,C), D 2 (B,C) and D 3 (A,B,C) are
independent, in the sense that any combination of their values is possible. For
example, even if all the three second-order dependencies are positive, D 3 (A,B,C)
might be zero or negative. In Subsection 3.3 an example about the purchase of a
triplet of items is discussed and the differences with respect to the other models
are discussed.
The following Definition 11 on the dependence function of three variables and
Theorem 6 extend the statements of Definition 7 on the dependence function for
pairs and Theorem 3 on the parity function to third-order dependencies.
Definition 11. (Dependence function of three variables) The boolean function of variables A, B and C whose minterms correspond to the values ⟨v_A, v_B, v_C⟩ for which P(A=v_A ∧ B=v_B ∧ C=v_C) > P(A=v_A ∧ B=v_B ∧ C=v_C)_MI will be called the dependence function of variables A, B and C.
Theorem 6 (Parity of three-variable dependence functions). If D_3(A,B,C) ≫ 0, the dependence function of variables A, B and C is A·B·C + A·¬B·¬C + ¬A·B·¬C + ¬A·¬B·C, that is, the parity function with parity even (Figure 4). If D_3(A,B,C) ≪ 0, the dependence function is ¬A·¬B·¬C + A·B·¬C + A·¬B·C + ¬A·B·C, that is, the parity function with parity odd and the complementary function of the preceding one (Figure 5).
Proof. By definition, P(a,b,c) = P(a,b,c)_MI + \Delta. From the relationships of Theorem 4 it follows that complementing one variable changes the sign of the deviation: for instance, P(a,b,¬c) = P(a,b) - P(a,b,c) = P(a,b,¬c)_MI - \Delta, and P(a,¬b,¬c) = P(a,¬b,¬c)_MI + \Delta. From analogous computations the values presented in Figure 6 follow, and from these it is immediate to derive the two maps of Figures 4 and 5, according to whether D_3(A,B,C) ≫ 0 or D_3(A,B,C) ≪ 0.
Fig. 4. The dependence function when D_3(A,B,C) ≫ 0.
Fig. 5. The dependence function when D_3(A,B,C) ≪ 0.
Fig. 6. The values P(a*,b*,c*) = P(a*,b*,c*)_MI ± \Delta in the cells of the Karnaugh map of the three variables A, B and C.
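The sign pattern behind the proof of Theorem 6 can be checked mechanically: each complemented variable flips the sign of the deviation from the maximum independence estimate, so for \Delta > 0 the cells with an even number of complemented variables carry +\Delta and the others carry -\Delta. The snippet below simply enumerates that pattern and is purely illustrative.

from itertools import product

def deviation_signs(delta):
    # Deviation P(a*,b*,c*) - P(a*,b*,c*)_MI in every cell of the map of three
    # variables: one sign flip per complemented (False) variable.
    return {cell: delta * ((-1) ** cell.count(False))
            for cell in product([True, False], repeat=3)}

With delta = 0.002, as in the coffee, tea and doughnuts example, the cells (True, True, True), (True, False, False), (False, True, False) and (False, False, True) get +0.002 and the remaining cells get -0.002, which is the content of Figure 6.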
3.2.1 Justification of the maximum independence definition. The idea of maximum
independence introduced in this paper is not intuitively obvious and needs
some further justification.
First consider the simple case of two variables A and B. In this case, as shown
above, the definition of maximum independence coincides with the well known
definition of absolute independence, according to which A and B are independent if, and only if, P(A=v_A, B=v_B) = P(A=v_A)·P(B=v_B) for any combination of values of A and B.
It is well known that the joint entropy of A and B satisfies E(A,B) = E(A) + E(B|A) = E(B) + E(A|B), where E(B|A) and E(A|B) are the equivocation of B with respect to A and the equivocation of A with respect to B, respectively. Therefore, the maximum value of E(A,B) is reached when E(B|A) (or E(A|B)) is maximum. When A and B are independent, the amount of information needed to know B, if A is known, or to know A, if B is known, is maximum. Notice that in this case E(A|B) = E(A) and E(B|A) = E(B). These equalities will not hold in the case of three variables.
Now consider the case of three variables A, B and C. In general, if the probabilities of A, B and C and the second-order joint probabilities P(A,B), P(B,C) and P(A,C) have been assigned, there is no assignment of the probability P(A,B,C) for which A, B and C are independent, that is, for which P(A=v_A, B=v_B, C=v_C) = P(A=v_A)·P(B=v_B)·P(C=v_C) for any combination of values of v_A, v_B and v_C.
However, it makes sense to search for the value of P(A,B,C) for which the joint entropy E(A,B,C) is maximum and to define that condition as the one of the maximum level of independence compatible with the dependencies imposed by the second-order joint probabilities. Indeed, E(A,B,C) = E(A,B) + E(C|A,B) = E(A,C) + E(B|A,C) = E(B,C) + E(A|B,C).
Therefore, since E(A,B), E(A,C) and E(B,C) depend only on the values of the second-order probabilities, E(A,B,C) reaches its maximum for that assignment of P(A,B,C) for which E(C|A,B), E(B|A,C) and E(A|B,C) also reach their maximum values. In other terms, the maximum independency level corresponds to the condition in which the maximum amount of information is needed to know the value of a variable, the other two being known. However, in general, since A, B and C are not independent, these conditional entropies remain smaller than E(C), E(B) and E(A), and this is different from the case of pairs of variables, for which the concepts of maximum independence and absolute independence coincide.
3.3 The lattice of dependencies
Since the knowledge of the dependence value of an itemset of cardinality k, together
with the values of the joint probabilities of all its subsets of cardinality k-1,
is sufficient to know the probabilities of all the combinations of its values, the lattice
of the itemsets can be adopted to describe the whole system of dependencies of a
given database. Of course, in such a lattice every node should be labelled with its
associated dependence value. Besides, the nodes at the top of the lattice, representing the itemsets of cardinality 1, will be labelled with the values of the differences between the probability estimates P(a) = O(a)/n, P(b) = O(b)/n, and so on, and the corresponding starting estimates (typically, and in the absence of other estimates, equal to 0.5).
By way of example, Figure 7 represents the dependence lattice relative to the
sample reported by Silverstein, Brin and Motwani in their paper.
The following are the data of purchases of coffee (c), tea (t) and doughnuts (d) and their combinations proposed by those authors: P(c) = 0.93, P(t) = 0.21, P(d) = 0.51, P(c,t) = 0.18, P(c,d) = 0.48, P(t,d) = 0.09, P(c,t,d) = 0.08.
Fig. 7. The lattice relative to the purchases of coffee (c), tea (t) and doughnuts (d): nodes {c}, {t}, {d}, {c,t}, {c,d}, {t,d} and {c,t,d}, each labelled with its dependence value.
The dependence values of the nodes of the lattice have been calculated as follows.
\Delta(c) = P(c) - P(c)_MI = O(c)/n - 0.5 = 0.93 - 0.5 = +0.43
\Delta(t) = P(t) - P(t)_MI = O(t)/n - 0.5 = 0.21 - 0.5 = -0.29
\Delta(d) = P(d) - P(d)_MI = O(d)/n - 0.5 = 0.51 - 0.5 = +0.01
\Delta(c,t) = P(c,t) - P(c,t)_MI = O(c,t)/n - P(c)·P(t) = 0.18 - 0.19 = -0.01
\Delta(c,d) = P(c,d) - P(c,d)_MI = O(c,d)/n - P(c)·P(d) = 0.48 - 0.47 = +0.01
\Delta(t,d) = P(t,d) - P(t,d)_MI = O(t,d)/n - P(t)·P(d) = 0.09 - 0.10 = -0.01
\Delta(c,t,d) = P(c,t,d) - P(c,t,d)_MI = O(c,t,d)/n - 0.078 = 0.08 - 0.078 = +0.002
P(c,t,d)_MI has been computed by maximizing the entropy E(x) with x = P(c,t,d), as suggested in Definition 8 on the maximum independence estimate.
Notice that from the value of \Delta(c,t,d) and from Definition 10 on the state of dependencies it follows, for example, that the dependence of the itemset {c,t,d} is positive, whereas, by adopting the model proposed by Silverstein, Brin and Motwani, the same dependence, evaluated through the ratio P(c,t,d)/(P(c)·P(t)·P(d)), would be negative. This is due to the fact that in the model by Silverstein, Brin and Motwani the dependencies which the subset {c,t,d} has inherited from the subsets {c,t}, {c,d} and {t,d} are not distinguished from its specific inherent dependence. The complete dependence table showing the sign of the dependence function for all the values of ⟨c,t,d⟩ is shown in Figure 8.
The dependence lattice can also be viewed as a useful tool to display the results
of a data mining investigation on a given database. Of course, it will be convenient
Fig. 8. The sign of the dependence function for the example of the purchases of coffee (variable C), tea (variable T) and doughnuts (variable D).
to display only the sub-lattice of the nodes having sufficient support and positive
or negative dependencies - anyway different from zero in a significant way. Often,
the dependence value is not necessary, it being sufficient to introduce the indication
of the dependence state in the lattice produced.
3.4 Dependence Rules for k-plets of Items of Arbitrary Cardinality
The case of triplets discussed in previous Subsection 3.2 is absolutely general. How-
ever, for the sake of completeness, the Definitions and the Theorems presented in
Subsection 3.2 will be extended in this Subsection to the more general case of k-
plets of arbitrary cardinality. For the sake of brevity, the proofs of the Theorems
will be omitted, with the exception of Theorem 7 which needs a specific proof.
Consider the case of a k-plet of boolean variables I_1, I_2, ..., I_k and assume we know all the joint probabilities up to the order (k-1): P(i_1), ..., P(i_1,i_2), ..., P(i_1,i_2,...,i_{k-1}), and so on.
We want to determine the k-th-order joint probabilities like P(i_1,i_2,...,i_k), P(i_1,...,¬i_k), and so on. The following Theorem shows that the knowledge of a single k-th-order joint probability is sufficient to determine all the k-th-order probabilities.
Theorem 7 (Unicity of the value). All the k-th-order joint probabilities can
be calculated as functions of the joint probabilities of the orders less than k and a
single k-th-order joint probability.
Proof. Assume, for example, we know P(i_1, i_2, ..., i_k). First, we determine the joint probabilities related to elementary conditions in which a single literal is complemented: P(i_1, ..., i_{k-1}, ¬i_k) = P(i_1, ..., i_{k-1}) - P(i_1, ..., i_k), and analogously for every other position of the complemented literal.
Then, we compute all the joint probabilities referring to elementary conditions in which two literals appear complemented, for example P(i_1, ..., ¬i_{k-1}, ¬i_k) = P(i_1, ..., i_{k-2}, ¬i_k) - P(i_1, ..., i_{k-1}, ¬i_k), and so on.
In general, in order to determine all the joint probabilities related to elementary conditions containing m complemented literals, we apply the relationship P(X, ¬i_j) = P(X) - P(X, i_j), where X denotes the remaining literals, so that at most (m-1) complemented literals appear on the right-hand side.
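The recursion in the proof of Theorem 7 translates directly into code: a probability with complemented literals is reduced, one complemented literal at a time, to probabilities of purely positive itemsets. The function below is a sketch; prob_of_itemset is an assumed callback returning P(all items of the given set present), e.g. an occurrence count divided by n.

def joint_prob(present, absent, prob_of_itemset):
    # P(items in `present` present and items in `absent` absent), obtained by
    # applying P(X, not i) = P(X) - P(X, i) recursively, as in the proof of
    # Theorem 7.
    if not absent:
        return prob_of_itemset(frozenset(present))
    first, rest = absent[0], list(absent)[1:]
    return (joint_prob(present, rest, prob_of_itemset)
            - joint_prob(list(present) + [first], rest, prob_of_itemset))

For example, joint_prob(['a'], ['b', 'c'], P) expands to P(a) - P(a,b) - P(a,c) + P(a,b,c).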
Definition 12. (Maximum independence estimate) If the joint probabilities up to order k-1 are known but no information is available on the joint probabilities of order k, then the conjecture on P(i_1, i_2, ..., i_k) maximizing the joint entropy of I_1, I_2, ..., I_k will be considered as the maximum independence estimate. For any combination of values ⟨i_1*, ..., i_k*⟩, the maximum independence estimate will be indicated with the symbol P(i_1*, ..., i_k*)_MI.
Definition 13. (Dependence value) The difference \Delta = P(i_1, i_2, ..., i_k) - P(i_1, i_2, ..., i_k)_MI will be defined as the dependence value of the itemset {i_1, i_2, ..., i_k}.
Theorem 8 (Unicity of the value). If the joint probabilities up to the order (k-1) are known, the knowledge of the dependence value \Delta = P(i_1, ..., i_k) - P(i_1, ..., i_k)_MI is sufficient to describe all the k-th order joint probabilities.
Definition 14. (Dependence state) If the absolute value of the dependence value \Delta exceeds a given threshold th, then I_1, I_2, ..., I_k are defined as connected by a dependence of order k. If \Delta > th, the dependence is defined as positive; otherwise, it is defined as negative. The notations D_k(I_1,...,I_k) ≫ 0, D_k(I_1,...,I_k) ≪ 0 and D_k(I_1,...,I_k) ≈ 0 will be used to indicate the existence or not of a dependence of order k and its sign.
Definition 15. (Dependence function) The boolean function of variables I_1, I_2, ..., I_k whose minterms correspond to the values ⟨v_{I_1}, ..., v_{I_k}⟩ for which P(I_1=v_{I_1}, ..., I_k=v_{I_k}) > P(I_1=v_{I_1}, ..., I_k=v_{I_k})_MI will be called the dependence function of variables I_1, I_2, ..., I_k.
Theorem 9 (Parity of dependence functions). If D_k(I_1,...,I_k) ≫ 0, the dependence function of variables I_1, ..., I_k is the parity function with even parity. If D_k(I_1,...,I_k) ≪ 0, the dependence function is the complementary function, that is, the parity function with odd parity. In both cases, and for all the values of I_1, ..., I_k, the difference P(i_1*, ..., i_k*) - P(i_1*, ..., i_k*)_MI has an absolute value equal to \Delta.
4. ENTROPY AND DEPENDENCIES
A less intuitive but for some aspects more effective approach to determine dependencies
can be based on the concept of entropy. In this Section only a summary of
a possible entropy based theory of dependencies is presented, the task being left to
the reader of developing such a theory following the scheme of Section 3.
First consider the case of pairs of items. Assume P(a), P(¬a), P(b), P(¬b) are known. The entropy of A, E(A) = -P(a)·log P(a) - P(¬a)·log P(¬a), is the measure of the average information content of the events a ≡ (A=TRUE) and ¬a ≡ (A=FALSE). An analogous meaning can be attributed to E(B).
Consider now the mutual information I(A;B) = E(B) - E(B|A), where E(B|A) = - Σ_{a*,b*} P(a*, b*)·log P(b* | a*). I(A;B) is a measure of the average information content carried by B on the value of A, and vice versa, and therefore it can be assumed as an indication of the independence of A and B. Unfortunately, the mutual information I(A;B) is always nonnegative; so it is necessary to verify whether P(a,b) > P(a)·P(b) or not, in order to determine the sign of the dependence. Therefore, we propose the following definition:
Definition 16. (Entropy based second-order dependence) If I(A;B) exceeds a given threshold, we shall state that D_2(A,B) ≫ 0 or D_2(A,B) ≪ 0 according to whether P(a,b) > P(a)·P(b) or not. If I(A;B) does not exceed that threshold, we shall state that D_2(A,B) ≈ 0.
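A small sketch of Definition 16: the mutual information of the 2×2 table (built as in Theorem 2) is compared with a threshold, and the sign of the dependence is taken from the comparison of P(a,b) with P(a)·P(b). The return convention (+1, -1, 0) is an arbitrary encoding of the three states.

import math

def mutual_information(p_a, p_b, p_ab):
    # I(A;B) computed over the four cells of the 2x2 joint distribution.
    cells = {(1, 1): p_ab, (1, 0): p_a - p_ab,
             (0, 1): p_b - p_ab, (0, 0): 1.0 - p_a - p_b + p_ab}
    pa = {1: p_a, 0: 1.0 - p_a}
    pb = {1: p_b, 0: 1.0 - p_b}
    return sum(q * math.log(q / (pa[va] * pb[vb]))
               for (va, vb), q in cells.items() if q > 0.0)

def entropy_based_state(p_a, p_b, p_ab, threshold):
    # Definition 16: threshold on I(A;B); the sign comes from P(a,b) vs P(a)P(b).
    if mutual_information(p_a, p_b, p_ab) <= threshold:
        return 0
    return 1 if p_ab > p_a * p_b else -1

For the coffee/tea data, entropy_based_state(0.9, 0.25, 0.2, 0.001) returns -1, consistent with D_2(C,T) ≪ 0.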
The extension of such a definition to triplets is not immediate, since the ternary mutual information I(A;B;C) defined in information theory does not own the meaning we need now. Therefore we suggest the following one:
Definition 17. (Entropy based third-order dependence) If both E(A|B) - E(A|B,C) and E(A|C) - E(A|B,C) exceed a given threshold, we state that D_3(A,B,C) ≫ 0 when P(a,b,c) is larger than all the following estimates: P(a)·P(b,c), P(b)·P(a,c), P(c)·P(a,b); or D_3(A,B,C) ≪ 0 when P(a,b,c) is less than all the three preceding estimates; otherwise, D_3(A,B,C) ≈ 0.
Definition 17 might seem asymmetric with respect to variables A, B and C. In order to understand the reasons for which the relationships written in Definition 17 are in fact symmetric, remember that, for example, if E(A|B) - E(A|B,C) > th then also E(C|B) - E(C|A,B) > th. Besides, E(A|B,C) ≤ E(A|B) and E(A|B,C) ≤ E(A|C). When, for example, E(A|B,C) reaches its maximum value and, therefore, the maximum independence condition holds, it follows that also E(B|A,C) and E(C|A,B) take their maximum values, equal to E(B|A) or E(B|C) and to E(C|A) or E(C|B), respectively.
An analogous definition can be introduced to evaluate dependencies in k-plets
with arbitrary k.
Definition 18. (Entropy based general dependence) Let E^k_MIN be the minimum of all the conditional entropies E(A | S), where the sets S are the subsets of (k-2) variables of the set {B, C, ..., Z} of the remaining (k-1) variables.
If E^k_MIN - E(A | B, C, ..., Z) exceeds a given threshold and P(a, b, c, ..., z) is larger than the maximum of the estimates P(a)·P(b, c, ..., z), P(b)·P(a, c, ..., z), ..., we state that D_k(A, B, C, ..., Z) ≫ 0.
If the same difference E^k_MIN - E(A | B, C, ..., Z) exceeds the given threshold and P(a, b, c, ..., z) is smaller than the minimum of the estimates, we state that D_k(A, B, C, ..., Z) ≪ 0.
If none of the two above specified conditions holds, we state that D_k(A, B, C, ..., Z) ≈ 0.
Notice that the computation of entropies can be simplified by applying the following Theorem.
Theorem 10. If the entropies of order k-1 are known, a single entropy needs to be determined in order to calculate all the entropies of order k.
Proof. Assume we know E(I_1 | I_2, I_3, ..., I_k). First we determine E(I_2 | I_1, I_3, ..., I_k) observing that E(I_2 | I_1, I_3, ..., I_k) = E(I_1 | I_2, I_3, ..., I_k) + E(I_2, I_3, ..., I_k) - E(I_1, I_3, ..., I_k), where only the entropy on the left-hand side is unknown. A similar method can be applied to determine all the other conditional entropies of the type E(I_j | I_1, ..., I_{j-1}, I_{j+1}, ..., I_k) (with 2 ≤ j ≤ k).
Finally, any other entropy can be easily calculated in terms of the already calculated ones. For example, E(I_1, I_2, ..., I_k) = E(I_1 | I_2, ..., I_k) + E(I_2, ..., I_k), and E(I_1, I_2 | I_3, ..., I_k) = E(I_1, I_2, ..., I_k) - E(I_3, ..., I_k).
5. THE DETERMINATION OF THE DEPENDENCE VALUES
The analysis developed in this paper refers essentially to the concept of confidence
and does not concern the principles of support. Almost all the algorithms so far
proposed for data mining are based on a first step aimed at determining the k-plets
having a sufficient support, namely, a sufficient statistical relevance. Such solutions
are compatible with the following algorithm for determining all the relevant
dependencies up to a certain order.
(1) Determination of the k-plets having a sufficient support.
Most algorithms for determining the k-plets having a sufficient support proceed
in the order of increasing cardinalities. In other terms, they first determine the
single items, then the pairs of items, the triplets, and so on. Such algorithms
are well suited to the following procedure. Other algorithms should be modified
in order to examine a k-plet P after the (k-1)-plets contained in P.
The program which has been specifically developed to verify the ideas described
in this paper is based on an algorithm for the determination of the k-plets (also
called itemsets) having sufficient support [Meo 1999] which has been chosen in
virtue of its speed, but which produces the list of itemsets organized in a family
of trees. However, this data structure, as any other, can be transformed into a
lattice suitable to the above described computations in a relatively short time.
Notice that it is not necessary that the complete lattice of all the itemsets
having sufficient support is represented in the main memory at the same time.
What is really needed is that at the starting point every node of the structure,
that is every analyzed itemset, is represented by two sets of data:
a) the values of the joint probabilities describing that itemset;
b) the pointers to its parents.
For the sake of simplicity, the program here described is characterized, as concerns
preceding point a), by the choice of describing an itemset with a single
datum, as is possible in virtue of Theorem 7 on the unicity of the value. The
chosen datum is the number of occurrences n_{ab...z} of that itemset, which is proportional to its probability P(a,b,...,z) (see Figure 9, where ptr_x denotes the pointer to itemset x).
Fig. 9. The data structure of the itemsets: each node stores the occurrence count of its itemset (n_a, n_ab, n_abc, ...) together with the pointers (ptr_a, ptr_ab, ...) to its parent itemsets.
This choice makes it possible to store millions of itemsets in the main memory
at the same time and to perform all the following computations without storing
any partial results in the mass memory.
(2) Determination of all the joint probabilities of an itemset.
The computation of the joint probabilities P(i_1*, ..., i_k*) for all the combinations of values of i_1, ..., i_k can be performed recursively, applying the relationships presented in the proof of Theorem 7 on the unicity of the value. Of course, the recursion proceeds towards the parents and the grandparents. For example, in the case of Figure 9, P(a,b,¬c) = P(a,b) - P(a,b,c) = n_ab/n - n_abc/n, where only the probabilities directly connected to the numbers of occurrences introduced in Figure 9 appear.
(3) Determination of maximum independence estimates.
The determination of the value x for which the joint entropy
takes its maximum value can be performed numerically, at the desired level of
accuracy, with conventional interpolation techniques.
(4) Computation of the dependence value and states.
A direct application of Definitions 13 and 14 leads to the final results; a small sketch of this step for second-order itemsets is given right after this list.
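As a small illustration of steps (2)-(4) for itemsets of cardinality 2, where the maximum independence estimate has the closed form P(a)·P(b), the sketch below derives the dependence value and state from the occurrence counts produced by step (1). The names and the threshold handling are illustrative, not the program's actual interface.

def pair_dependence_values(counts, n, th):
    # counts maps frozensets of items to their occurrence numbers (output of
    # step (1), including the single items); the result maps every 2-itemset
    # to its dependence value and state.
    result = {}
    for itemset, occurrences in counts.items():
        if len(itemset) != 2:
            continue
        a, b = sorted(itemset)
        delta = occurrences / n - (counts[frozenset([a])] / n) * (counts[frozenset([b])] / n)
        state = 'positive' if delta > th else ('negative' if delta < -th else 'none')
        result[itemset] = (delta, state)
    return result

Fed with the counts behind the example of Section 3.3 (93, 21 and 51 occurrences of c, t and d, and 18, 48 and 9 of the pairs, over n = 100), it reproduces the signs of the second-level dependence values of Figure 7.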
5.1 Experimental Evaluation
The proposed approach has been verified with an implementation in C++ using
the Standard Template Library. The program has been run on a PC Pentium II,
with a 233 Mhz clock, 128 MB RAM and running Red Hat Linux as the operating
system. We have worked on a class of databases that has been taken as benchmark
by most of data mining algorithms on association rules. It is the class of synthetic
databases that project Quest of IBM has generated for its experiments (see [Agrawal
and Srikant 1994] for detailed explanations).
We have run many experiments on several databases with different values of the main parameters and of the minimum support, but the results obtained are all similar to the ones reported here. In particular, the experiments have been run with a minimum support of 0.2% and with a precision in the computation of the dependence values equal to 10^{-6}.
In the generation of the databases we have adopted the same parameter settings
proposed for synthetic databases: D, the number of transactions in the database
has been fixed to 100 thousands; N, the total number of items, fixed to 1000. T, the
average transaction length, has been fixed to 10, since its value does not influence
the program behaviour. On the contrary, I, the average length of the frequent
itemsets, has been varied, since its value really determines the depth of the lattice
to be generated. Each database contains itemsets with sufficient support having a
different average length (3,4,: : :,8). The extreme values of the interval [2-10] have
been discarded for the reasons that follow.
The low value has been discarded because it does not make much sense to maximize
the entropy related to itemsets having only two items: the direct approach,
that compares the probability of such an itemset with the product of the probabilities
of the two items, has been adopted in this case. The high values have been
avoided in consideration of the fact that the longer the itemsets are, the more likely they are to fall under the minimum support threshold. In that case too few itemsets turn out to be over the threshold, and the comparison of the different experiments is no longer fair.
Thus, even if the "nominal" average itemset length is increased, the actual average length of the itemsets with sufficient support turns out to be significantly lower. Figure 10 reports the total number of itemsets with sufficient support and their nominal and actual average lengths in the experiments.
In Figure 11 two execution times are shown. T_1 is the CPU execution time needed for the identification of all the itemsets with sufficient support; T_2 is the time for the computation of the dependence values of the itemsets previously identified. Both
Fig. 10. The number of itemsets and their average lengths in the experiments (columns: total number of itemsets, nominal average itemset length, actual average itemset length).
the times have been normalized with respect to the total number of itemsets, since this number changes considerably in the different experiments.
Fig. 11. Experimental results: CPU time per itemset [s] versus the average itemset length (2.81, 3.28, 3.67, 4.05, 4.82, 4.96).
Notice that time T_1 decreases with the actual average itemset length. This is a particularity of the algorithm adopted for the first step (called Seq), which builds, during its execution, temporary data structures that do not depend on the itemset length. Furthermore, Seq has been proved to be suitable for very long databases and for searches characterized by very low values of resolution.
On the contrary, time T 2 increases with the average itemset length, since the
depth of the resulting lattice increases. Thus, this fact is not surprising. On the
other side, you can observe that the increments are moderate with the exception of
the experiment having the average itemset length equal to 4.82 (corresponding to a
nominal average itemset length equal to 6). In that experiment, as Table 10 reports,
the total number of itemsets that exceed the minimumsupport threshold, compared
to the itemset length, grows suddenly: under these conditions the generated lattice is very densely populated. The large size of the output therefore explains the result. Finally, these experiments demonstrate that this new approach to the discovery of knowledge about itemset dependencies is feasible and well suited to the high-resolution searches that are typical of data mining.
6. CONCLUSIONS
In this paper it has been shown that a single real number - the dependence value - contains all the information on the dependencies relative to a given itemset. Moreover, by virtue of the theorem stating that dependence functions are always parity functions, the determination of the combinations of values for which dependencies are positive or negative is immediate. Furthermore, the feasibility of this new theory has been demonstrated in practical cases by a set of experiments on different databases.
Some themes are worth developing further. The first concerns the maximum independence estimates: is it possible to find a closed formula giving the maximum independence probability of a given itemset as a function of the lower level probabilities? The second theme is the definition of confidence levels: which percentage of the probability P(a, b, ...) must be exceeded by the dependence value in order to state that the dependence is strong? This question has not been adequately addressed even in the well-known support-confidence framework, but the model introduced here is probably more suitable for a sound theoretical analysis.
A third area of investigation concerns the algorithms for determining the dependence values. The method proposed in this paper assumes that the itemsets having sufficient support have been determined with one of the known methods, and then computes the dependence values on them. However, it is likely that an integrated method combining the two steps would be more rapid and effective. Theoretical analysis based on probability and information theory and the development of new algorithms should be combined and integrated in this area of research.
--R
Online generation of association rules.
Database mining: A performance perspective.
Mining association rules between sets of items in large databases.
Fast discovery of association rules.
Fast algorithms for mining association rules in large databases.
An efficient algorithm for mining association rules in large databases.
Knowledge discovery in databases: An attribute-oriented approach
Sampling large databases for association rules.
From file mining to database mining.
A statistical perspective on kdd.
Practitioner problems in need of database research: Research directions in knowledge discovery.
A new approach for the discovery of frequent itemsets.
An effective hash based algorithm for mining association rules.
Mining quantitative association rules in large relational tables.
Beyond market baskets: generalizing association rules to dependence rules.
Mining generalized association rules.
--TR
Practitioner problems in need of database research
Mining association rules between sets of items in large databases
An effective hash-based algorithm for mining association rules
Mining quantitative association rules in large relational tables
Dynamic itemset counting and implication rules for market basket data
Fast discovery of association rules
Beyond Market Baskets
Database Mining
Pincer Search
Online Generation of Association Rules
Set-Oriented Mining for Association Rules in Relational Databases
Knowledge Discovery in Databases
Fast Algorithms for Mining Association Rules in Large Databases
Discovery of Multiple-Level Association Rules from Large Databases
An Efficient Algorithm for Mining Association Rules in Large Databases
Mining Generalized Association Rules
Sampling Large Databases for Association Rules
A New Approach for the Discovery of Frequent Itemsets
--CTR
Alexandr Savinov, Mining dependence rules by finding largest itemset support quota, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Elena Baralis , Paolo Garza, Associative text categorization exploiting negated words, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Peter Fule , John F. Roddick, Experiences in building a tool for navigating association rule result sets, Proceedings of the second workshop on Australasian information security, Data Mining and Web Intelligence, and Software Internationalisation, p.103-108, January 01, 2004, Dunedin, New Zealand
Elena Baralis , Silvia Chiusano, Essential classification rule sets, ACM Transactions on Database Systems (TODS), v.29 n.4, p.635-674, December 2004 | entropy;dependence rules;variables independence;association rules |
364001 | Discontinuous enrichment in finite elements with a partition of unity method. | We present an approximate analytical method to evaluate efficiently and accurately the call blocking probabilities in wavelength routing networks with multiple classes of calls. The model is fairly general and allows each source-destination pair to service calls of different classes, with each call occupying one wavelength per link. Our approximate analytical approach involves two steps. The arrival process of calls on some routes is first modified slightly to obtain an approximate multiclass network model. Next, all classes of calls on a particular route are aggregated to give an equivalent single-class model. Thus, path decomposition algorithms for single-class wavelength routing networks may be readilt extended to the multiclass case. This article is a first step towards understanding the issues arising in wavelength routing networks that serve multiple classes of customers. | Introduction
The modeling of evolving discontinuities with the finite element method is
cumbersome due to the need to update the mesh topology to match the
geometry of the discontinuity. In this paper, we present a technique to model
discontinuities in the finite element framework in a general fashion. The
essential feature is the incorporation of enrichment functions which contain a
presently Assistant Professor of Civil and Environmental Engineering, Duke University
y Research Associate in Mechanical Engineering, Northwestern University
z Walter P. Murphy Professor of Mechanical Engineering, Northwestern University
discontinuous field. In the application of the technique to fracture mechanics,
functions spanning the appropriate near-tip crack field can also be included
to improve accuracy. The enrichment of the finite element approximation in
this manner provides for both the modeling of discontinuities and accurate
moment intensity factors with minimal computational resources.
The concept of incorporating crack fields in a finite element context is
not new, see for example [1]. In addition, there are several well established
techniques for modeling cracks and crack growth such as boundary element
methods, finite elements with continuous remeshing [2], and meshless methods
[3]. Recently, the trend has focused on the development of finite element
methods which model discontinuities independently of element boundaries.
These include the incorporation of a discontinuous mode in an assumed strain
framework [4], and enrichment with near-tip fields for crack growth with
minimal remeshing [5]. The latter method has recently been extended by
enriching with a discontinuous function behind the crack tip [6], [7], such
that no remeshing is necessary.
In this paper, the method of discontinuous enrichment is cast in a general
framework, and we illustrate how both two-dimensional and plate formulations
can be enriched to model cracks and crack growth. The enrichment
of the approximation with discontinuous near-tip fields requires a mapping
technique, and so an alternative near-tip function is developed. The present
method offers several advantages over competing techniques for modeling
crack growth. In contrast to traditional finite element methods, this technique
incorporates the discontinuity of the crack independently of the mesh,
such that the crack can be arbitrarily located within an element. The present
technique has a distinct advantage over boundary element methods as it is
readily applicable to non-linear problems, anisotropic materials, and arbitrary
geometries. The method does not require any remeshing for crack
growth, and as it is an extension of the finite element method, it can exploit
the large body of finite element technology and software. Specific examples
of augmenting well established two-dimensional and plate elements with both
discontinuous and asymptotic near-tip functions are presented.
The present technique exploits the partition of unity property of finite elements
first cited by [8], which allows global enrichment functions to be locally
incorporated into a finite element approximation. A standard approximation
is 'enriched' in a region of interest by the global functions in conjunction with
additional degrees of freedom and the local nodal shape functions. The application
of this idea to capture a specific frequency band in dynamics can
be found in [9]. The utility of the method has found application in solving
the scalar Laplacian problem for domains with re-entrant corners [10]. In
the context of fracture mechanics, the appropriate enrichment functions are
the near-tip asymptotic fields and a discontinuous function to represent the
jump in displacement across the crack line. In contrast to the work of [4], the
enrichment is not through an assumed-strain method, so the displacement
field is continuous along either side of the crack.
This paper is organized as follows. Following this introduction, we review
the construction of an enriched approximation and we develop a discontinuous
near-tip function which does not require a mapping. For specific applications
we consider two-dimensional linear elastic fracture mechanics and the
fracture of Mindlin-Reissner plates in Section 3. Numerical results to verify
the accuracy of the formulation are given in Section 4, with a summary and
some concluding remarks provided in the last section.
2 Construction of a Finite Element
Approximation With Discontinuities
In this section, we present the construction of a finite element approximation
with discontinuous enrichment. Emphasis is placed on modeling cracks, in
which a standard approximation is enriched with both the asymptotic near-tip
functions and a discontinuous 'jump' function. The incorporation of
discontinuous near-tip functions requires a mapping for kinked cracks, and
so an alternative near-tip function is presented. The manner in which nodes
are selected for enrichment and the modifications to the numerical integration
of the weak form are also given.
2.1 General Form
To introduce the concept of discontinuous enrichment, we begin by considering the domain \Omega bounded by \Gamma with an internal boundary \Gamma_c, as shown in Fig. 1a. We are interested in the construction of a finite element approximation to the field u(x), x \in \Omega, which can be discontinuous along \Gamma_c.
Consider the uniform mesh of N nodes for the domain shown in Fig. 1b
which does not model the discontinuity. The discrete approximation u^h to the function u takes the form

u^h(x) = \sum_{I=1}^{N} N_I(\xi(x)) u_I    (1)

where N_I is the shape function for node I in terms of the parent coordinates \xi(x), and u_I is the vector of nodal degrees of freedom. The nodal shape function N_I is non-zero over the support of node I, defined to be the union of the elements connected to the node.
We now pose the question of how to best incorporate the discontinuity in
the field along \Gamma c . The traditional approach is to change the mesh to conform
to the line of discontinuity as shown in Fig. 1c, in which the element edges
align with \Gamma c . While this strategy certainly creates a discontinuity in the
approximation, it is cumbersome if the line \Gamma c evolves in time, or if several
different configurations for \Gamma c are to be considered.
In this paper we propose to model the discontinuity along \Gamma c with extrinsic
enrichment [11], in which the standard approximation (1) is modified as
u^h(x) = \sum_{I=1}^{N} N_I(\xi(x)) \left( u_I + \sum_{l=1}^{n_E(I)} a_{Il} G_l(x) \right)    (2)

where G_l(x) are enrichment functions, and a_{Il} are additional nodal degrees of freedom for node I. In the above, the total number of enriched degrees of freedom for node I is denoted by n_E(I).
discontinuous along the boundary \Gamma c , then the finite element mesh does not
need to model the discontinuity. For example, the uniform mesh in Fig. 1d
is capable of modeling a jump in u when the circled nodes are enriched with
functions which are discontinuous across \Gamma c .
The above form of a finite element approximation merits some discussion.
We note that the enrichment functions G l are written in terms of the global
coordinates x, but that they are multiplied by the nodal shape functions N I .
In this fashion the additional enrichment takes on a local character. This
concept of multiplying global functions by the finite element partition of
unity was first suggested in [8]. The change in the form of the approximation
from (1) to (2) is only made locally in the vicinity of a feature of interest,
such as a discontinuity.
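A minimal sketch may help make the locality of the enrichment concrete. The following one-dimensional example is illustrative only: the hat shape functions, the sign-type enrichment function G, and all coefficient values are assumptions rather than anything taken from the paper. It evaluates an approximation of the form (2) in which the extra degrees of freedom a_I are nonzero only at the nodes whose supports contain the discontinuity.

```python
# Minimal 1D sketch of the enriched approximation (2): standard hat shape
# functions N_I carry both the usual dof u_I and extra dofs a_I that multiply a
# global enrichment function G(x). All values below are illustrative only.
import numpy as np

def hat(x, x_I, h):
    """Piecewise-linear shape function of node x_I with nodal spacing h."""
    return np.maximum(0.0, 1.0 - np.abs(x - x_I) / h)

def u_h(x, nodes, h, u, a, G):
    """u^h(x) = sum_I N_I(x) * (u_I + a_I * G(x))  -- cf. equation (2)."""
    total = np.zeros_like(x, dtype=float)
    for x_I, u_I, a_I in zip(nodes, u, a):
        total += hat(x, x_I, h) * (u_I + a_I * G(x))
    return total

nodes = np.linspace(0.0, 1.0, 5)           # 5 equally spaced nodes
h = nodes[1] - nodes[0]
u = np.array([0.0, 0.1, 0.2, 0.3, 0.4])    # standard dofs (illustrative)
a = np.array([0.0, 0.0, 0.05, 0.05, 0.0])  # enriched dofs, nonzero only locally
G = lambda x: np.sign(x - 0.55)            # a discontinuous enrichment function
x = np.linspace(0.0, 1.0, 11)
print(u_h(x, nodes, h, u, a, G))
```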
We now turn to the precise form of the enrichment functions used to model
discontinuous fields, with the goal of modeling cracks and crack growth.
Three distinct regions are identified for the crack geometry, namely the crack
interior and the two near-tip regions as shown in Fig.2. In the set I of all
nodes in the mesh, we distinguish three different sets which correspond to
each of these regions. The set J is taken to be the set of nodes enriched for
the crack interior, and the sets K 1 and K 2 are those nodes enriched for the
first and second crack tips, respectively. The precise manner in which these
sets are determined from the interaction of the crack and the mesh geometry
is given in Section 2.3.
The enriched approximation takes the form
u^h(x) = \sum_{I \in I} N_I(x) u_I + \sum_{J \in J} b_J N_J(x) H(x) + \sum_{K \in K_1} N_K(x) \sum_{l=1}^{4} c^1_{Kl} F^1_l(x) + \sum_{K \in K_2} N_K(x) \sum_{l=1}^{4} c^2_{Kl} F^2_l(x)    (3)

where b_J and c^1_{Kl}, c^2_{Kl} are nodal degrees of freedom corresponding to the enrichment functions H(x), F^1_l(x), and F^2_l(x), respectively. The function H(x) is discontinuous across the crack line, and the sets F^1_l(x) and F^2_l(x) consist of those functions which span the near-tip asymptotic fields.
For two-dimensional elasticity, these are given by
\{F_l(r, \theta)\}_{l=1}^{4} = \left\{ \sqrt{r}\,\sin(\theta/2),\ \sqrt{r}\,\cos(\theta/2),\ \sqrt{r}\,\sin(\theta/2)\sin\theta,\ \sqrt{r}\,\cos(\theta/2)\sin\theta \right\}    (4)

where (r, \theta) are the local polar coordinates for the crack tip [12]. Note that the first function in (4), \sqrt{r}\,\sin(\theta/2), is discontinuous across the crack faces whereas the last three functions are continuous. The form of the near-tip
functions for plates is similar and is developed in Section 3.
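For reference, the four functions in (4) can be evaluated as follows; the snippet is a small sketch in crack-tip polar coordinates and is not part of the original formulation.

```python
# The four near-tip functions of equation (4), written in crack-tip polar
# coordinates (r, theta). Only sqrt(r)*sin(theta/2) is discontinuous across the
# crack faces (theta = +/- pi).
import numpy as np

def branch_functions(r, theta):
    sr = np.sqrt(r)
    return np.array([
        sr * np.sin(theta / 2.0),                  # discontinuous across the crack
        sr * np.cos(theta / 2.0),
        sr * np.sin(theta / 2.0) * np.sin(theta),
        sr * np.cos(theta / 2.0) * np.sin(theta),
    ])

# Jump across the crack faces: theta = pi vs. theta = -pi at the same radius.
print(branch_functions(0.01, np.pi) - branch_functions(0.01, -np.pi))
```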
The jump function H(x) is defined as follows. The crack is considered to
be a curve parametrized by the curvilinear coordinate s, as in Fig. 3. The
origin of the curve is taken to coincide with one of the crack tips. Given a
point x in the domain, we denote by x* the closest point on the crack to x. At x*, we construct the tangential and normal vectors to the curve, e_s and e_n, with the orientation of e_n taken such that e_s \times e_n = e_z. The function H(x) is then given by the sign of the scalar product (x - x*) \cdot e_n. In the case of a kinked crack, the cone of normals at x* needs to be considered (see [6]). Roughly speaking, the function H(x) takes the value +1 'above' the crack and -1 'below' the crack.
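A sketch of this closest-point construction for a crack stored as a polyline is given below. It is an illustration only: the kinked-crack case is handled crudely by taking the nearest segment rather than the full cone of normals described in [6].

```python
# Sketch of the jump function H(x) for a crack given as a polyline: find the
# closest point x* on the crack, build the tangent e_s and normal e_n there, and
# return the sign of (x - x*) . e_n.
import numpy as np

def jump_H(x, crack):
    """x: point (2,), crack: array (n_vertices, 2) ordered along the crack."""
    x = np.asarray(x, dtype=float)
    best = (np.inf, None, None)
    for p, q in zip(crack[:-1], crack[1:]):
        seg = q - p
        t = np.clip(np.dot(x - p, seg) / np.dot(seg, seg), 0.0, 1.0)
        xstar = p + t * seg
        d = np.linalg.norm(x - xstar)
        if d < best[0]:
            e_s = seg / np.linalg.norm(seg)
            e_n = np.array([-e_s[1], e_s[0]])      # so that e_s x e_n = +e_z
            best = (d, xstar, e_n)
    _, xstar, e_n = best
    return 1.0 if np.dot(x - xstar, e_n) >= 0.0 else -1.0

crack = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.2]])   # a kinked crack
print(jump_H([0.25, 0.1], crack), jump_H([0.25, -0.1], crack))
```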
2.2 An Alternative Near-Tip Function
The jump function H(x) is in general not capable of representing the discontinuity
in the displacement field along the entire crack geometry. For
example, if the crack tip is not aligned with an element edge, then a near-tip
function must also be used (see [6]). For cracks which are not straight, a
mapping is required to align the near-tip discontinuities with the crack edges.
Due to the local form of the enrichment, the mapping procedure is only necessary
in those elements with nodes enriched with the near-tip functions. In
this section, we review the mapping procedure and present an alternative
near-tip function.
The crack is modeled as a series of straight line segments connecting ver-
tices, with new crack segments added as the crack grows. The discontinuities
in the near-tip fields are aligned with each segment by using a procedure developed
in [12] and [5]. In this procedure, the discontinuity in the near-tip
functions are aligned with the crack by a mapping technique that rotates
each section of the discontinuity onto the crack model.
A key step in the technique is the modification of the angle \theta in F_l(r, \theta). Given a point x, an angle \bar{\theta}(x) is defined in terms of the angle \theta_R of the segment (see Fig. 4) and the sampling point angle \alpha(x). The coordinates of the sampling point are mapped to coordinates (\bar{x}, \bar{y}) in the crack tip frame as shown in Fig. 5, through expressions of the form r cos(\bar{\theta}) and r sin(\bar{\theta}), where l is the distance between the points indicated in Fig. 5 and r is as shown in the figure. The variables r and \theta in the enrichment functions are then computed in terms of the local (\bar{x}, \bar{y}) coordinates. This procedure is repeated similarly for each segment of the crack, and the sequence of mappings leaves the length of the crack invariant.
In [5], the entire crack was modeled with the near-tip fields and the above
mapping procedure. The use of the discontinuous function H(x) eliminates
the need for the mapping on the crack interior, so that the above procedure
is only necessary at each crack tip. In the following, we propose the use of a
smooth 'ramp' function in conjunction with the function H(x) to model the
near-tip region.
Consider the following function R(x), defined in terms of the crack tip coordinates \bar{x} and scaled by a length l_c, where l_c is taken to be the characteristic length of the element containing the crack tip. The function and its derivative vanish at the crack tip.
When this smooth ramp function is multiplied by the function H(x), i.e.,

\tilde{R}(x) = R(x) H(x)

a near-tip function results which is discontinuous across the crack edges and vanishes in front of the crack tip. This function in turn does not require any mapping to align the discontinuity with the crack edges. The function \tilde{R} is shown in Fig. 6 in the vicinity of a crack tip and two consecutive segments. It is clear from the figure that the resulting near-tip function is continuous in the domain \Omega and discontinuous across the crack line.
The above near-tip function is useful on several levels. In the first in-
stance, in conjunction with the function H(x) for the crack interior it is
perhaps the simplest means to model the entire crack discontinuity. In addi-
tion, for non-linear problems the exact near-tip functions may not be known.
The concept of multiplying a smooth function which vanishes at the crack
tip by the jump function H(x) can also be extended to three-dimensional
problems. In linear elastic fracture mechanics, however, the incorporation
of the asymptotic near-tip fields is still useful to obtain greater accuracy at
the crack tip. To some extent, this advantage can be maintained by simply
replacing the first function in (4) with the function ~
R:
\{F_l(r, \theta)\}_{l=1}^{4} = \left\{ \tilde{R}(x),\ \sqrt{r}\,\cos(\theta/2),\ \sqrt{r}\,\sin(\theta/2)\sin\theta,\ \sqrt{r}\,\cos(\theta/2)\sin\theta \right\}    (9)

The last three functions need not be mapped into the crack faces, as they are all continuous in the domain \Omega.
2.3 Node Selection for Enrichment
In the preceding development, three distinct regions were identified for en-
richment, corresponding to the nodal sets J , K 1 and K 2 . In this section we
define these sets precisely, and present the methodology by which nodes are
identified for inclusion in each set.
We begin with some preliminary notation. The support of node I is denoted by \omega_I, with closure \bar{\omega}_I. Essentially, a node's support is the open set of element domains connected to the node, and the closure is the closed set which includes the outer boundary. The distinction as it applies to nodal selection will be discussed shortly. We also denote the locations of the two crack tips, and by C the geometry of the crack.
With these definitions, the sets J , K 1 and K 2 are defined as follows.
The sets K_1 and K_2 consist of those nodes whose support closure contains crack tip 1 or 2, respectively. The set J is the set of nodes whose support is intersected by the crack and which do not belong to K_1 or K_2.
Note that the set J consists of nodes whose support, as opposed to support
closure, is intersected by the crack. This distinction implies that if the crack
only intersects the boundary of a node's support, the node will not be enriched
with the function H(x). This prevents any nodes from being enriched
with a constant function (either -1 or +1) over their entire support, which
is important in order to avoid creating a linear dependency in the approxi-
mation. We note that for the alternative near-tip function proposed in the
previous section, the support closure must be changed to the open set for
similar reasons.
In practice the above sets are determined as follows. All elements intersected
by the crack are first determined. From this set of elements, we
distinguish three disjoint sets of 'tip elements' (for either tip 1 or 2) and
'interior elements'. The set of tip elements are given by those which contain
either crack tip. The nodes of the 'tip elements' correspond to either set K 1
or K 2 . The nodes of the 'interior elements' in turn correspond to the set J .
Fig. 7 illustrates the nodes that are selected as tip nodes and interior nodes
for the cases of a uniform mesh and an unstructured mesh.
An additional step is taken to remove those nodes from the set J whose
support closure, but not support, is intersected by the crack. For this pur-
pose, subpolygons which align with both the crack and element boundaries
are generated as shown in Fig. 8b. These subpolygons are generated easily
enough by triangulating the polygons formed from the intersection of the
crack and element boundaries.
The generation of the subtriangles allows the computation of the amount of a node's support 'above' and 'below' the crack, which can then be compared against a tolerance. In two-dimensional analysis, we denote the area of a nodal support by A_\omega, calculated as the sum of the areas of the elements connected to the node. With the aid of the subtriangles, we also calculate the nodal support area above the crack, A^{ab}_\omega, and below the crack, A^{be}_\omega. We then calculate the ratios

A^{ab}_\omega / A_\omega    (13a)

A^{be}_\omega / A_\omega    (13b)

If either of these ratios is below a tolerance (we have used 0.01, or 1%), the node is removed from the set J.
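The node-selection logic can be summarized in a short sketch. The data structures below (dictionaries of cut elements and of precomputed support areas) are hypothetical stand-ins for whatever mesh queries an actual implementation would provide; only the 1% tolerance test of (13a)-(13b) follows the text directly.

```python
# Sketch of the node-selection step: nodes of elements containing a crack tip go
# to the tip set; nodes whose support is crossed by the crack interior go to J,
# but only if the support areas above and below the crack both exceed 1% of the
# total support area.
def classify_nodes(elements_cut, tip_elements, support_area, tol=0.01):
    """elements_cut: {elem_id: [node ids]} for elements crossed by the crack.
    tip_elements:  set of elem_ids that contain a crack tip.
    support_area:  {node_id: (A_total, A_above, A_below)} from the subtriangles.
    """
    K_tip, J = set(), set()
    for e, nodes in elements_cut.items():
        (K_tip if e in tip_elements else J).update(nodes)
    J -= K_tip                                    # tip enrichment takes priority
    kept = set()
    for n in J:
        A, A_ab, A_be = support_area[n]
        if A_ab / A >= tol and A_be / A >= tol:   # ratios (13a), (13b)
            kept.add(n)
    return K_tip, kept

elements_cut = {10: [1, 2, 3, 4], 11: [3, 4, 5, 6]}
tip_elements = {11}
support_area = {1: (4.0, 2.0, 2.0), 2: (4.0, 3.98, 0.02)}   # node 2 fails the test
print(classify_nodes(elements_cut, tip_elements, support_area))
```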
2.4 Numerical Integration of the Weak Form
For elements cut by the crack and enriched with the jump function H(x),
we make a modification to the element quadrature routines for the assembly
of the weak form. As the crack is allowed to be arbitrarily oriented in
an element, standard Gauss quadrature may not adequately integrate the
discontinuous field. For those nodes in the set J , it is important that the
quadrature scheme accurately integrate the contributions to the weak form
on both sides of the discontinuity. If the integration of the discontinuous
enrichment is indistinguishable from that of a constant function, spurious
singular modes can appear in the system of equations. In this section, we
present the modifications made to the numerical integration scheme for elements
cut by a crack.
The discrete weak form is normally constructed with a loop over all elements, as the domain is approximated by \Omega \approx \bigcup_{e=1}^{m} \Omega_e, where m is the number of elements and \Omega_e is the element subdomain. For elements cut by a crack, we define the element subdomain to be a union of a set of subpolygons whose boundaries align with the crack geometry:

\Omega_e = \bigcup_{s=1}^{m_s} \Omega_s

where m_s denotes the number of subpolygons for the element. The subtriangles
shown in Fig. 8 already generated for the selection of the interior nodes
also work well for integration. It is emphasized that the subpolygons are
only necessary for integration purposes; no additional degrees of freedom are
associated with their construction. In the integration of the weak form, the
element loop is replaced by a loop over the subpolygons for those elements
cut by the crack.
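The following sketch illustrates the modified quadrature on a single element cut by a horizontal crack. The subtriangulation and the one-point centroid rule are illustrative choices; the point is only that each subpolygon is integrated on its own side of the discontinuity.

```python
# Sketch of quadrature on an element cut by the crack: the element is split into
# subtriangles whose edges follow the crack, and a simple centroid rule is
# applied on each piece so that both sides of the discontinuity are integrated.
import numpy as np

def tri_area(p0, p1, p2):
    return 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p1[1] - p0[1]) * (p2[0] - p0[0]))

def integrate_on_subtriangles(f, triangles):
    """One-point (centroid) rule per subtriangle: sum_s area_s * f(centroid_s)."""
    total = 0.0
    for tri in triangles:
        p0, p1, p2 = (np.asarray(p, dtype=float) for p in tri)
        total += tri_area(p0, p1, p2) * f((p0 + p1 + p2) / 3.0)
    return total

# Unit quad cut by a horizontal crack at y = 0.4, split into four subtriangles.
tris = [[(0, 0), (1, 0), (1, 0.4)], [(0, 0), (1, 0.4), (0, 0.4)],
        [(0, 0.4), (1, 0.4), (1, 1)], [(0, 0.4), (1, 1), (0, 1)]]
H = lambda p: 1.0 if p[1] > 0.4 else -1.0        # discontinuous integrand
print(integrate_on_subtriangles(H, tris))        # 0.6 * 1 + 0.4 * (-1) = 0.2
```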
3 Application to Fracture Mechanics
In this section, we review the pertinent equations for linear elastic fracture
mechanics. In this paper, emphasis is placed on plate fracture, although
some two-dimensional plane-strain studies are discussed. A key difference in
the plate formulation is the enrichment of the displacement components with
different sets of near-tip functions. After reviewing the governing equations
for Mindlin-Reissner plates, we examine the form of the asymptotic crack tip
fields. A domain form of the J-integral for plates is derived for the calculation
of the energy release rate and the moment intensity factors. Finally, the
enriched finite element approximation is presented.
3.1 Mindlin-Reissner Plate Formulation
Two main formulations exist to model a plate: the classical theory or Kirchhoff
plate theory and the Mindlin-Reissner plate theory. Allowing three
boundary conditions instead of two for the Kirchhoff theory, the Mindlin
theory gives a more realistic shear and moment distribution around a crack
tip (see [13] and [14]). A summary of the Mindlin theory follows.
3.1.1 Governing Equations
There are several different ways to introduce the Mindlin theory. As we
are also interested in examining problems in two-dimensional elasticity, the
theory is presented here as a degeneration of the three-dimensional elasticity problem using the principle of virtual work with the appropriate kinematic assumptions.
Consider a plate of thickness t whose mid-plane lies in the x_1-x_2 plane. The conventions used throughout this paper are shown in Fig. 9.
The main assumptions of the Mindlin theory state that the in-plane displacements u_1 and u_2 vary linearly through the thickness with the section rotations \psi_1 and \psi_2. In addition, the normal stress \sigma_{33} is assumed to vanish in the domain. For the sake of simplicity, we make the additional assumptions that the surface of the plate and any crack faces are traction-free.
In the (e_1, e_2, e_3) frame, where e_3 is the unit normal vector to the plate, the deformation components at a point are given by

u_1 = x_3 \psi_1(x_1, x_2), \quad u_2 = x_3 \psi_2(x_1, x_2), \quad u_3 = w(x_1, x_2)

where w is the transverse displacement and \psi_1 and \psi_2 are the rotations about the x_2 and x_1 axes, respectively. The above can be expressed in a more compact form as

u = x_3 \psi + w e_3    (16)

The strain is given by

\epsilon = \frac{1}{2} (\nabla u + \nabla u^T)

with the bending contribution

\epsilon_b = \frac{x_3}{2} (\nabla \psi + \nabla \psi^T)

and a shear contribution

\epsilon_s = \frac{1}{2} (\nabla w + \psi)

We note that the x_3-related components are zero for both \epsilon_b and \epsilon_s.
The virtual internal work is defined by

\delta W_{int} = \int_\Omega \sigma : \epsilon(\delta u) \, d\Omega    (21)

where \sigma is the symmetric stress tensor, and \delta u is an arbitrary virtual displacement from the current position. After a few manipulations, we obtain the relation (22), where the superscript indicates a reduction of the operator to the in-plane components and \sigma_s is the shear stress vector \sigma_s = (\sigma_{13}, \sigma_{23})^T.
Making the substitution (22) into (21) and integrating through the thickness gives the work expression

\delta W_{int} = \int_A \left[ M : \tfrac{1}{2}(\nabla \delta\psi + \nabla \delta\psi^T) + Q \cdot (\nabla \delta w + \delta\psi) \right] dA    (23)

where the moment M and shear Q are defined by

M = \int_{-t/2}^{t/2} x_3 \, \sigma \, dx_3, \qquad Q = \int_{-t/2}^{t/2} \sigma_s \, dx_3    (24)
The virtual external work is composed of the action of the bending and
twisting moments gathered in a couple vector C, and of the shear traction
T . We assume there is no external pressure acting on the plate. The virtual
external work is then given by

\delta W_{ext} = \int_{\Gamma_t} C \cdot \delta\psi \, d\Gamma + \int_{\Gamma_t} T \, \delta w \, d\Gamma

Equating the internal and external virtual work, and applying the divergence theorem, yields the equilibrium equations

\nabla \cdot M - Q = 0, \qquad \nabla \cdot Q = 0 \quad in \ \Omega    (26)

and the traction boundary conditions on \Gamma_t

M \cdot n = C, \qquad Q \cdot n = T    (27)

where n is the unit outward normal to the boundary.
The constitutive relationships are obtained by energetic equivalence between the plate and the three-dimensional model. Assuming the plate is made of an isotropic homogeneous elastic material with Young's modulus E and Poisson's ratio \nu, the moments (M_{11}, M_{22}, M_{12}) are related to the bending strains through the plate bending rigidity E t^3 / (12(1 - \nu^2)), and the shear forces Q are related to the shear strains through the shear stiffness k E t / (2(1 + \nu)), where k = 5/6 is a correction factor which accounts for the parabolic variation of the shear stresses through the plate thickness. These relations are rewritten in a more compact form using the fourth-order bending stiffness tensor D_b and the second-order shear stiffness tensor D_s:

M = D_b \epsilon_b, \qquad Q = D_s \epsilon_s
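For concreteness, a sketch of the isotropic stiffnesses in Voigt (matrix) form is given below. It assumes the standard Mindlin-Reissner expressions with k = 5/6 quoted above; the exact tensor layout and strain ordering used by the authors are not reproduced.

```python
# A sketch of the isotropic bending and shear stiffnesses in Voigt form, under
# the standard Mindlin-Reissner assumptions stated above (k = 5/6).
import numpy as np

def plate_stiffness(E, nu, t, k=5.0 / 6.0):
    Db = (E * t**3) / (12.0 * (1.0 - nu**2)) * np.array([
        [1.0, nu,  0.0],
        [nu,  1.0, 0.0],
        [0.0, 0.0, (1.0 - nu) / 2.0],
    ])                                   # relates (M11, M22, M12) to bending strains
    Ds = (k * E * t) / (2.0 * (1.0 + nu)) * np.eye(2)   # relates Q to shear strains
    return Db, Ds

Db, Ds = plate_stiffness(E=200e9, nu=0.3, t=0.01)
print(Db, Ds, sep="\n")
```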
3.1.2 Weak Form
Let the boundary \Gamma be divided into a part \Gamma_u on which displacement boundary conditions are imposed and a part \Gamma_t on which loads are applied, with the restrictions \Gamma_u \cup \Gamma_t = \Gamma and \Gamma_u \cap \Gamma_t = \emptyset. The kinematic constraints are given by a prescribed transverse displacement \bar{w} and prescribed rotations \bar{\psi}, while the loads come from the prescribed couples C and prescribed shear tractions T. As in Section 2, we also designate \Gamma_c as an internal boundary across which the displacement field is allowed to be discontinuous.
Let V_g be the space of kinematically admissible transverse displacements and rotations:

V_g = \{ (w, \psi) \in V : w = \bar{w}, \ \psi = \bar{\psi} \ on \ \Gamma_u \}

where V is a space of sufficiently smooth functions on \Omega. The details on this matter when the domain contains an internal boundary or re-entrant corner may be found in [15] and [16]. We note that the space V allows for discontinuous functions across the crack line.
The space of test functions is defined similarly as

V_0 = \{ (\delta w, \delta\psi) \in V : \delta w = 0, \ \delta\psi = 0 \ on \ \Gamma_u \}

The weak form is to find (w, \psi) \in V_g such that, for all (\delta w, \delta\psi) \in V_0,

\int_\Omega (D_b \epsilon_b(\psi)) : \epsilon_b(\delta\psi) \, d\Omega + \int_\Omega (D_s \epsilon_s(w, \psi)) \cdot \epsilon_s(\delta w, \delta\psi) \, d\Omega = \int_{\Gamma_t} C \cdot \delta\psi \, d\Gamma + \int_{\Gamma_t} T \, \delta w \, d\Gamma
It can be shown that the above is equivalent to the equilibrium equations (26)
and traction boundary conditions (27). When the space V is discontinuous
along \Gamma c , the traction-free conditions on the crack faces are also satisfied.
In contrast to boundary element techniques, this enables the method to be
easily extended to non-linear problems.
In the finite element method, the space V is approximated with a finite dimensional space V^h \subset V. The space V^h is typically made discontinuous across \Gamma_c by explicitly meshing the surface, as in Fig. 1c. In the present method, the approximating space is constructed with discontinuous enrichment.
3.2 Plate Fracture Mechanics
Consider the problem of a through crack in a plate as shown in Fig. 10,
where for convenience we adopt a local polar coordinate system centered at
the crack tip. In contrast to the stress intensity factors obtained in classical
linear elasticity, in plate theory the quantities of interest are moment and
shear force intensity factors. The moment intensity factors are denoted by
K I and K II , while the shear force intensity factor is denoted by K III . These
are defined as
The relationship between these factors and the energy release rate G is similar
to the three-dimensional theory:
\Theta
I
II
5Et
III (37)
The form of the asymptotic near-tip displacement fields differs significantly
from the three-dimensional theory. In particular, the transverse displacement
w is only singular when a K III mode is present. The asymptotic
displacement fields in Mindlin-Reissner plate theory are given in [17], and
they are provided here for the sake of completeness.
5h
\Gamma3
(38a)
\Gammasin(
\Gamma2cos(
(38c)
For the purposes of defining the near-tip enrichment functions in the plate theory, we consider only the terms proportional to \sqrt{r} for the rotations \psi_1 and \psi_2. For the transverse displacement, we consider terms proportional to both \sqrt{r} and r^{3/2}. With these restrictions, the near-tip fields are contained in the span of the sets
where
fg I (r; ')g j
ae p
oe
(40a)
ff I (r; ')g j
ae p
oe
(40b)
The discrete approximation for the plate which incorporates the above near-tip
functions is presented in Section 3.4.
3.3 Domain Form of the J-integral
Several different domain and path-independent integrals have been developed
for the extraction of mixed mode moment and shear force intensity factors in
plates [17]. These integrals typically consist of a contour integral enclosing
the crack-tip singularity. With finite elements, the numerical evaluation of
these integrals usually involves some kind of smoothing technique, as the
required field quantities are discontinuous at element interfaces.
In this section, we illustrate the use of a weighting function q to recast
these line integrals into their equivalent domain form. The development
presented here closely follows that given for two-dimensional elasticity in
[18]. The domain forms of crack contour integrals are particularly well suited
for use with finite elements, as the same quadrature points used for the
integration of the weak form can be used to calculate the domain integral.
The construction of additional quadrature points or the use of a smoothing
procedure is not required.
Consider the open contour \Gamma surrounding a through crack as shown in
Fig. 11. In the following, we use indicial notation, where the Greek indices (\alpha, \beta) range over the values 1 and 2 and a comma denotes a derivative with respect to the following argument. The contour integral proposed by [17] in
the absence of an externally applied pressure is given by
I
where W is the strain energy density of the plate. This is defined as
We now introduce a weight function q_1 which is defined over the domain of interest. Consider the simply connected curve shown in Fig. 11. The function q_1 is defined to be sufficiently smooth in the area A enclosed by C, and is given on the surfaces by q_1 = 1 on \Gamma and q_1 = 0 on C.
We then use this function to rewrite (41) as
I
Z
where we have used \Gamman i on \Gamma, and
on the crack faces. The last integral above vanishes for traction free crack
faces. Applying the divergence theorem to the closed integral, we then obtain
Z
A
Z
which is the equivalent domain form of the J 1 integral proposed by [17].
The measure number J 1 is domain independent and its magnitude is
equivalent to the energy release rate (37). Therefore, under pure mode I
loading, the moment intensity factor K I is given by
r
In more general mixed-mode conditions, the values K I ,K II and K III cannot
be separated so easily. While the comparison of J 1 to analytical values is
adequate to assess the effectiveness and qualities of the enrichment strategy,
crack growth laws are typically expressed in terms of the mixed-mode intensity
factors. In classical linear elasticity, the interaction integral approach
[19] has proven effective for extracting mixed-mode stress intensity factors. The application of this method to plate fracture is currently under development.
3.4 Enrichment of the MITC4 plate element
When discretizing the plate equations (26), some care must be taken to avoid
shear locking. As the plate becomes very thin (i.e., t \to 0), the following relationship must be satisfied to keep the strain energy in the plate bounded:

\epsilon_s(w, \psi) \to 0 \quad as \quad t \to 0    (47)

In other words, the shear strain \epsilon_s must vanish as t \to 0. Standard displacement-based elements, such as the four-node isoparametric element, have difficulty satisfying this constraint. The consequence is a structure which
exhibits an overly stiff response, often referred to as shear locking.
To discretize the plate displacements (16), we begin with the MITC4 element. To avoid shear locking, the MITC formulation modifies the approximation for the section rotations \psi in the expression for the shear stiffness (see [21]). In the following, we express this modification using the notation \tilde{N}_I, where it is understood that only those expressions relating to the shear components are modified.
The enriched discretization takes the form
I w I
c w
Kl G l (r; ')
(48a)
~
I / I
~
c /
Kl F l (r; ')
(48b)
where N I are the standard bilinear shape functions. In the above, we have
collapsed the sums over each crack tip into one for compactness.
The sets of near-tip functions G l and F l are derived from (40) in the
following fashion. We take G l to be only those functions in g l which are
proportional to r 3=2 . The set F l is taken to be equivalent to f l . In addition to
having four additional degrees of freedom for each displacement component,
this choice for G l and F l satisfies the following relation
such that a linear combination of the near-tip enrichment functions can satisfy
(47). We note that this relationship does not ensure that the enriched
formulation will be completely free of shear locking. However, the numerical
examples presented in the next section indicate that the above formulation
performs well for a wide range of plate thicknesses.
4 Numerical Results

In this section we present several different numerical calculations. We first examine some problems in two-dimensional elasticity, including a robustness test and the simulation of crack growth. Then a benchmark and an additional study are presented for Mindlin-Reissner plates.
4.1 Two-dimensional problems
We begin with a simple example of an edge crack to demonstrate the robustness
of the discretization scheme, and then present results for more complicated
geometries. In all of the following examples, the material is taken to be
isotropic with Young's modulus
and plane strain conditions are assumed. The calculation of the stress intensity
factors is performed with the domain form of the interaction integral,
and the maximum hoop stress law is used to govern crack growth ( see [19]
and [5]).
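As an aside, the kink angle predicted by the maximum hoop stress law admits a closed form in terms of K_I and K_II. The helper below is a sketch of that textbook expression and is not taken from the authors' implementation; sign conventions for the mode II contribution may differ.

```python
# A sketch of the maximum hoop (circumferential) stress criterion used to pick
# the crack-growth direction from the mixed-mode stress intensity factors.
import math

def kink_angle(KI, KII):
    """Angle (radians, in the crack-tip frame) maximizing the hoop stress."""
    if KII == 0.0:
        return 0.0
    return 2.0 * math.atan((KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

print(math.degrees(kink_angle(1.0, 0.0)))    # pure mode I: grow straight ahead
print(math.degrees(kink_angle(1.0, 0.5)))    # mixed mode: kink away from KII
```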
4.1.1 Robustness tests
Consider the geometry shown in Fig. 12: a plate of width w and height L with an edge crack of length a, subjected to a far-field stress \sigma_0. We analyze the influence of the location of the crack with respect to the mesh on the K_I stress intensity factor when the position of the crack is perturbed by \delta x in the X direction and \delta y in the Y direction. The geometry is discretized with a uniform mesh of 24x48 4-noded quadrilateral elements.
In this study, several different discretizations are obtained depending on
the position of the crack with respect to the mesh. Two cases are shown in
Fig. 13. In this investigation, we wish to examine the performance of the modified tip function \tilde{R}(x), and the accuracy of the formulation when it is used in conjunction with the other near-tip functions as in (9).
The exact solution for this problem is given by [22]

K_I = C \, \sigma_0 \sqrt{\pi a}    (50)

where C is a finite-geometry correction factor:

C = 1.12 - 0.231 (a/w) + 10.55 (a/w)^2 - 21.72 (a/w)^3 + 30.39 (a/w)^4    (51)
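A small helper evaluating (50)-(51) as reconstructed above can be used to normalize the computed values; the polynomial correction factor is the standard single-edge-crack expression.

```python
# Reference solution for the single edge crack under remote tension, used to
# normalize the computed stress intensity factors.
import math

def edge_crack_KI(sigma0, a, w):
    r = a / w
    C = 1.12 - 0.231 * r + 10.55 * r**2 - 21.72 * r**3 + 30.39 * r**4
    return C * sigma0 * math.sqrt(math.pi * a)

print(edge_crack_KI(sigma0=1.0, a=0.3, w=1.0))
```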
The numerical results normalized by the exact solution when only the function \tilde{R}(x) is used to model the near-tip region are given in Table 1.
Depending on the location of the crack tip, the total number of degrees of
freedom varies from 2483 to 2503. The results vary by approximately 4%
over all crack tip locations tested. When the near-tip functions are added,
the accuracy improves as shown in Table 2. These results are consistent with those reported in [6] in that the best results are obtained when the crack is aligned with mesh boundaries. We note that the results are not as accurate as when the exact asymptotic function \sqrt{r}\,\sin(\theta/2) is used, in which case the error is less than 2% (see [6]).
4.1.2 Crack Growth from a Fillet
This example shows the growth of a crack from a fillet in a structural mem-
ber, and serves to illustrate how the present method can be used as an aid to
design against failure. The configuration to be studied is shown in Fig. 14,
with the actual domain modeled as indicated. The setup is taken from experimental
work found in [23]. In this example, we investigate the effect of
the thickness of the lower I-beam on crack growth. Only the limiting cases
for the bottom I-beam of a rigid constraint (very thick beam) and flexible
constraint (very thin beam) are considered. In addition, the welding residual
stresses between the member and the I-beam are neglected.
The structure is loaded with a traction of and the initial crack length is taken to be a = 5 mm. The geometry is discretized with 8243
three-node triangular elements. To model a rigid constraint, the displacement
in the vertical direction is fixed along the entire bottom of the domain. A
flexible constraint is idealized by fixing the vertical displacement at both
ends of the bottom of the domain. For both sets of boundary conditions, an
additional degree of freedom is fixed to prevent a rigid body rotation.
For each load case, we simulate crack growth with a step size of \Delta a = 5 mm for a total of 14 steps. Fig. 15 shows the mesh in the vicinity of the
fillet and compares the crack paths for the cases of a thick I-beam (upper
crack) and a thin I-beam (lower crack). It is emphasized that the same mesh
is used throughout the simulation, and that no remeshing is required. As
new crack segments are added, additional enriched degrees of freedom are
generated for each new segment. The results shown are consistent with both
the experimental [23] and previous numerical results [24].
4.2 Plate examples
In this section, we present some examples using the enriched MITC4 plate
formulation developed in Section 3.4. We first examine the accuracy of the
method as a function of plate thicknesses for a benchmark problem, and
then present a more general example. Throughout this section, the material
properties are assumed to be isotropic with Young's modulus of
GPa, and Poisson's ratio
As a benchmark problem we consider a through crack in an infinite plate
subjected to a far-field moment M o . The crack is oriented at an angle fi
with respect to the x 1 axis as shown in Fig. 16. Recently, very accurate
calculations were carried out by [25] for various plate thicknesses for the case
when . In this case, the loading is purely mode I, and the domain form
of the J-integral for plates (45) is used in conjunction with (46) to determine
the moment intensity factor K I . In the finite element model, only one-half
of a square plate is modeled, with symmetry conditions along the x 2 axis.
To approximate the infinite plate, the plate width w is taken to be 10 times
the half crack length a. The crack length for all of the results presented in
this section is taken to be
Fig. 17 shows the normalized K I for four discretizations, two standard and
two enriched. The lower curve corresponds to a non-enriched formulation,
and the values for K I are within 5% of the exact for the entire range of
plate thicknesses t. These values are improved when the mesh is refined for
a total of 2463 degrees of freedom as shown. We observe that the enriched
solution with only 755 degrees of freedom is as accurate as the solution with
2463 degrees of freedom without enrichment. The last curve for the enriched
case with 3087 degrees of freedom exhibits less than 1% error. The enriched
solutions show good correlation with the analytical solution for the full range
of plate thicknesses tested.
As a last example, moment intensity factors are calculated for a finite
plate as a function of crack length for various plate thicknesses. The geometry
of the plate is taken to be the same as the previous example, and the
results are compared to those given in [26]. In this study, the mesh does
not model the crack discontinuity; the jump in the rotations and transverse
displacement is created entirely with enrichment. Table 3 gives the results
for four different plate width to thickness ratios for the case when the plate
is modeled with 1424 MITC4 elements. These results show excellent correlation
for the cases when w=8, in which the maximum
error is 1.2%. For the remaining cases the maximum difference between the
numerical solutions and those given in [26] is 9.4%. We note, however, that
the reference [26] is not as current as [25]. In the latter, the moment intensity
factors are shown to be significantly greater than the classical results as
the thickness t ! 0. The results shown in Table 3 are consistent with these
findings.
Summary
A method of constructing finite element approximations with enrichment
functions was presented which allows for the simulation of evolving discontinuities
in a straightforward fashion. The specific examples of cracks and
crack growth in two-dimensional elasticity and Mindlin-Reissner plate theory
were examined. By incorporating the appropriate asymptotic near-tip
fields, accurate moment and stress intensity factors were obtained for coarse
meshes. A new near-tip function was also developed to remove the need for a
mapping in the case of kinked cracks. The methodology for the construction
of the discrete approximation from the interaction of the crack geometry and
the mesh was provided, and numerical tests served to illustrate the algo-
rithm's robustness. Additional numerical studies for Mindlin-Reissner plates
demonstrated the extent to which stress intensity factors can be calculated
accurately for a wide range of plate thicknesses.
The present method has considerable potential to extend the finite element method for the modeling of evolving interfaces and free surfaces. A key feature
of the enrichment in conjunction with numerical integration is the capability
of modeling geometrical features which are independent of the mesh
topology. As was shown in this paper, several different crack configurations
can be considered for a single mesh of a component, simply by changing
the enrichment scheme according to the crack geometry. Future work will
focus on the application of the method to three-dimensional and dynamic
fracture, as well as other areas of mechanics in which moving interfaces are
of importance.
Acknowledgements
The support of the Office of Naval Research and Army Research Office, to
Northwestern University, is gratefully acknowledged. The authors are grateful
for the support provided by the DOE Computational Science Graduate
Fellowship program, to John Dolbow.
--R
A hybrid-element approach to crack problems in plane elasticity
Modeling mixed-mode dynamic crack propagation using finite elements: Theory and applications
Modelling strong discontinuities in solid mechanics via strain softening constitutive equations.
Elastic crack growth in finite elements with minimal remeshing.
A finite element method for crack growth without remeshing.
An extended finite element method with discontinuous enrichment for applied mechanics.
Multiple scale finite element methods.
Meshless methods: An overview and recent developments.
Enriched methods for singular fields
On the bending of an elastic plate containing a crack.
Mechanics of fracture 3: Plates and shells with cracks.
Elliptic Problems in Nonsmooth Domains.
Computation of stress intensity factors for plate bending via a path-independent integral
Crack tip and associated domain integrals from momentum and energy balance.
Modeling fracture in Mindlin- Reissner plates with the extended finite element method
Displacement and stress convergence of our MITC plate bending elements.
Fracture Mechanics.
Morphological aspects of fatigue crack propagation.
The Element-free Galerkin Method for Fatigue and Quasi-Static Fracture
Bending of a thin Reissner plate with a through crack.
Internal and edge cracks in a plate of finite width under bending.
--TR
--CTR
Kenjiro Performance assessment of generalized elements in the finite cover method, Finite Elements in Analysis and Design, v.41 n.2, p.111-132, November 2004
L. B. Tran , H. S. Udaykumar, A particle-level set-based sharp interface cartesian grid method for impact, penetration, and void collapse, Journal of Computational Physics, v.193 n.2, p.469-510, 20 January 2004
H. S. Udaykumar , L. Tran , D. M. Belk , K. J. Vanden, An Eulerian method for computation of multimaterial impact with ENO shock-capturing and sharp interfaces, Journal of Computational Physics, v.186 n.1, p.136-177, 20 March | partition-of-unity;fracture;discontinuous enrichment |
364268 | Recognizing Action Units for Facial Expression Analysis. | AbstractMost automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. | Introduction
Recently facial expression analysis has attracted attention in the computer vision literature [3, 5, 6, 9,
11, 13, 17, 19]. Most automatic expression analysis systems attempt to recognize a small set of prototypic
expressions (i.e. joy, surprise, anger, sadness, fear, and disgust) [11, 17]. In everyday life, however,
such prototypic expressions occur relatively infrequently. Instead, emotion is communicated by changes
in one or two discrete facial features, such as tightening the lips in anger or obliquely lowering the lip
corners in sadness [2]. Change in isolated features, especially in the area of the brows or eyelids, is
typical of paralinguistic displays; for instance, raising the brows signals greeting. To capture the subtlety
of human emotion and paralinguistic communication, automated recognition of fine-grained changes in
facial expression is needed.
Ekman and Friesen [4] developed the Facial Action Coding System (FACS) for describing facial
expressions. The FACS is a human-observer-based system designed to describe subtle changes in
facial features. FACS consists of 44 action units, including those for head and eye positions. AUs
are anatomically related to contraction of specific facial muscles. They can occur either singly or in
combinations. AU combinations may be additive, in which case combination does not change the
appearance of the constituents, or nonadditive, in which case the appearance of the constituents changes
(analogous to co-articulation effects in speech). For action units that vary in intensity, a 5-point ordinal
scale is used to measure the degree of muscle contraction. Although the number of atomic action units
is small, more than 7,000 combinations of action units have been observed [12]. FACS provides the
necessary detail with which to describe facial expression.
Automatic recognition of action units is a difficult problem. AUs have no quantitative definitions and
as noted can appear in complex combinations. Several researchers have tried to recognize AUs [1, 3, 9].
The system of Lien et al. [9] used dense-flow, feature point tracking and edge extraction to recognize 6
upper face AUs or AU combinations (AU1+2, AU1+4, AU4, AU5, AU6, and AU7) and 9 lower face AUs
and AU combinations (AU12, AU25, AU26, AU27, AU12+25, AU20+25, AU15+17, AU17+23+24,
AU9+17). Bartlett et al. [1] recognized 6 individual upper face AUs (AU1, AU2, AU4, AU5, AU6, and
but none occurred in combinations. The performance of their feature-based classifier on novel
was 57%; on new images of faces used for training, the rate was 85.3%. By combining holistic
spatial analysis and optical flow with local features in a hybrid system, Bartlett et al. increased accuracy
to 90.9% correct. Donato et al. [3] compared several techniques for recognizing action units including
optical flow, principal component analysis, independent component analysis, local feature analysis, and
Gabor wavelet representation. Best performances were obtained by Gabor wavelet representation and
independent component analysis which achieved a 95% average recognition rate for 6 upper face AUs
and 6 lower face AUs.
In this report, we developed a feature-based AU recognition system. This system explicitly analyzes
appearance changes in localized facial features. Since each AU is associated with a specific set of facial
muscles, we believe that accurate geometrical modeling of facial features will lead to better recognition
results. Furthermore, the knowledge of exact facial feature positions could benefit the area-based [17],
holistic analysis [1], or optical flow based [9] classifiers. Figure 1 depicts the overview of the analysis
system. First, the head orientation and face position are detected. Then, subtle changes in the facial
components are measured. Motivated by FACS action units, these changes are represented as a collection
of mid-level feature parameters. Finally, action units are classified by feeding these parameters to a neural
network.
Because the appearance of facial features is dependent upon head orientation, we develop a multi-state
model-based system for tracking facial features. Different head orientations and corresponding variation
in the appearance of face components are defined as separate states. For each state, a corresponding
description and one or more feature extraction methods are developed.
We represent the facial features in two separate parameter groups, one for the upper face and one for the lower face, because facial actions in the upper and lower face are relatively independent [4]. Fifteen parameters are used to describe eye shape, motion, and state, brow and cheek motion, and upper face furrows. Nine parameters are used to describe lip shape, lip motion, lip state, and lower face furrows.
After the facial features are correctly extracted and suitably represented, we employ a neural network to recognize the upper face AUs (Neutral, AU1, AU2, AU4, AU5, AU6, and AU7) and the lower face AUs (Neutral, AU9, AU10, AU12, AU15, AU17, AU20, AU25, AU26, AU27, and AU23+24), respectively. Seven basic upper face AUs and eleven basic lower face AUs are identified regardless of whether they occur singly or in combinations. For the upper face AU recognition, compared to Bartlett's results [1] on the same database, our system achieves an average recognition rate of 95% with fewer parameters and in the more difficult case in which AUs may occur either individually or in additive and nonadditive combinations. For the lower face AU recognition, a previous attempt at a similar task [9] recognized 6 lower face AUs and combinations (AU12, AU12+25, AU20+25, AU9+17, AU17+23+24, and AU15+17) with an 88% average recognition rate, using a separate hidden Markov Model for each action unit or action unit combination. Compared to these previous results, our system achieves an average recognition rate of 96.71%, and the difficult cases in which AUs occur either individually or in additive and nonadditive combinations are handled as well.
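The classifier stage can be pictured as two small feed-forward networks, one per face region, each emitting an independent score per AU so that combinations can be detected. The sketch below is schematic only: the layer sizes, random weights, and thresholds are placeholders and do not reproduce the architecture or training actually used.

```python
# Schematic stand-in for the neural-network classifiers: the upper-face network
# maps the 15 upper-face parameters to one score per AU; thresholding each score
# independently allows several AUs to be active at once.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # independent sigmoid per AU

n_in, n_hidden, n_out = 15, 6, 7                   # upper face: 15 params -> 7 AUs
W1, b1 = rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_out)), np.zeros(n_out)

upper_face_params = rng.normal(size=n_in)          # placeholder feature vector
scores = mlp_forward(upper_face_params, W1, b1, W2, b2)
predicted_AUs = scores > 0.5                       # threshold each AU independently
print(scores.round(2), predicted_AUs)
```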
Figure
1. Feature based action unit recognition system.
2. Multi-State Models for Face and Facial Components
2.1. Multi-state face model
Head orientation is a significant factor that affects the appearance of a face. Based on the head orien-
tation, seven head states are defined in Figure 2. To develop more robust facial expression recognition
system, head state will be considered. For the different head states, facial components, such as lips,
appear very differently, requiring specific facial component models. For example, the facial component
models for a front face include F rontLips, F rontEyes (left and right), F rontCheeks(left and
right), NasolabialFurrows, and Nosewrinkles. The right face includes only the component models
SideLips, Righteye, Rightbrow, and Rightcheek. In our current system, we assume the face images
are nearly front view with possible in-plane head rotations.
2.2. Multi-state face component models
Different face component models must be used for different states. For example, a lip model of the
front face doesn't work for a profile face. Here, we give the detailed facial component models for the
nearly front-view face. Both the permanent components such as lips, eyes, brows, cheeks and the transient
components such as furrows are considered. Based on the different appearances of different components,
different geometric models are used to model the component's location, shape, and appearance. Each
Figure 2. Multiple-state face model. (a) The head state can be left, left-front, front, right-front, right,
down, and up. (b) Different facial component models are used for different head states.
component employs a multi-state model corresponding to different component states. For example, a
three-state lip model is defined to describe the lip states: open, closed, and tightly closed. A two-state
eye model is used to model open and closed eyes. There is one state each for the brow and cheek. Present and
absent are used to model the states of the transient facial features. The multi-state component models for
different components are described in Table 1.
Table 1. Multi-state facial component models of a front face

Lip: Opened and Closed (parabolic lip template with lip center (xc, yc), heights h1 and h2, width w, and orientation); Tightly closed (dark mouth line between lip corner1 and lip corner2).
Eye: Open (iris circle plus two eyelid arcs, with eye center (xc, yc), lid heights, and eye corners); Closed (line between corner1 and corner2).
Brow: Present (triangular template).
Cheek: Present (triangular template).
Furrow: Present or Absent (e.g., nasolabial furrows located relative to the eye's inner corner line).
3. Facial Feature Extraction
Contraction of the facial muscles produces changes in both the direction and magnitude of the motion
on the skin surface and in the appearance of permanent and transient facial features. Examples of
permanent features are the lips, eyes, and any furrows that have become permanent with age. Transient
features include any facial lines and furrows that are not present at rest. We assume that the first frame is
in a neutral expression. After initializing the templates of the permanent features in the first frame, both
permanent and transient features can be tracked and detected in the whole image sequence regardless of
the states of facial components. The tracking results show that our method is robust for tracking facial
features even when there is large out of plane head rotation.
3.1. Permanent features
Lip features: A three-state lip model is used for tracking and modeling lip features. As shown in
Table 1, we classify the mouth states into open, closed, and tightly closed. Different lip templates are
used to obtain the lip contours. Currently, we use the same template for open and closed mouth. Two
parabolic arcs are used to model the position, orientation, and shape of the lips. The template of open
and closed lips has six parameters: lip center (xc, yc), lip shape (h1, h2, and w), and lip orientation (θ).
For a tightly closed mouth, the dark mouth line connecting the lip corners is detected from the image to
model the position, orientation, and shape of the tightly closed lips.
After the lip template is manually located for the neutral expression in the first frame, the lip color is
modeled as a Gaussian mixture. The shape and location of the lip template are automatically tracked
through the image sequence by feature point tracking. Then, the lip shape and color information
are used to determine the lip state and state transitions. The detailed lip tracking method can be found
in [15].
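To make the six-parameter lip template concrete, the sketch below samples the two parabolic arcs of an open or closed mouth. The parameter names follow the text; the sampling routine and function name are our own illustration, not the paper's implementation.

```python
import numpy as np

def lip_contour(xc, yc, h1, h2, w, theta, n=50):
    """Sample the open/closed lip template: two parabolic arcs meeting at the
    lip corners, parameterized by center (xc, yc), top/bottom heights h1, h2,
    width w, and in-plane orientation theta (radians)."""
    t = np.linspace(-w / 2.0, w / 2.0, n)        # position along the lip axis
    bulge = 1.0 - (2.0 * t / w) ** 2             # 1 at the center, 0 at the corners
    upper = np.stack([t,  h1 * bulge], axis=1)   # top arc in lip coordinates
    lower = np.stack([t, -h2 * bulge], axis=1)   # bottom arc in lip coordinates
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    to_image = lambda pts: pts @ rot.T + np.array([xc, yc])
    return to_image(upper), to_image(lower)
```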
Eye features: Most eye trackers developed so far are for open eyes and simply track the eye locations.
However, for recognizing facial action units, we need to recognize the state of eyes, whether they are
open or closed, and the parameters of an eye model, the location and radius of the iris, and the corners
and height of the open eye. As shown in Table 1, the eye model consists of "open" and "closed".
The iris provides important information about the eye state. If the eye is open, part of the iris normally
will be visible. Otherwise, the eye is closed. For the different states, specific eye templates and different
algorithms are used to obtain eye features.
For an open eye, we assume the outer contour of the eye is symmetrical about the perpendicular
bisector of the line connecting the two eye corners. The template, illustrated in Table 1, is composed of a
circle with three parameters and two parabolic arcs with six parameters.
This is the same eye template as Yuille's except for the two points located at the center of the whites [18].
For a closed eye, the template is reduced to four parameters: the positions of the two eye corners.
The default eye state is open. After the open eye template is located in the first frame, the eye's inner corner
is tracked accurately by feature point tracking. We found that the outer corners are harder to track and
less stable than the inner corners, so we assume the outer corners lie on the line that connects the inner
corners. The outer corners can then be obtained from the eye width, which is calculated from the first
frame.
Intensity and edge information are used to detect the iris because the iris provides important information
about the eye state. A half-circle iris mask is used to obtain correct iris edges. If the iris is detected, the
eye is open and the iris center is taken to be the iris mask center. In an image sequence, the eyelid contours
are tracked for open eyes by feature point tracking. For a closed eye, we do not need to track the eyelid
contours; a line connecting the inner and outer corners of the eye is used as the eye boundary. The detailed
eye feature tracking techniques can be found in [14].
Brow and cheek features: Features in the brow and cheek areas are also important for facial expression
analysis. For the brow and the cheek, one state is used for each, and a triangular template with six parameters
is used to model the position of the brow or cheek. Both brow and cheek
are tracked by feature point tracking. A modified version of the gradient tracking algorithm [10] is
used to track these points for the whole image sequence. Some permanent facial feature tracking results
for different expressions are shown in Figure 3. More facial feature tracking results can be found at
http://www.cs.cmu.edu/-face.
3.2. Transient features
Facial motion produces transient features. Wrinkles and furrows appear perpendicular to the motion
direction of the activated muscle. These transient features provide crucial information for the recognition
of action units. Contraction of the corrugator muscle, for instance, produces vertical furrows between
the brows, which is coded in FACS as AU 4, while contraction of the medial portion of the frontalis
muscle (AU 1) causes horizontal wrinkling in the center of the forehead.
Some of these lines and furrows may become permanent with age. Permanent crows-feet wrinkles
around the outside corners of the eyes, which is characteristic of AU 6 when transient, are common in
adults but not in infants. When lines and furrows become permanent facial features, contraction of the
corresponding muscles produces changes in their appearance, such as deepening or lengthening. The
presence or absence of the furrows in a face image can be determined by geometric feature analysis [9, 8],
or by eigen-analysis [7, 16]. Kwon and Lobo [8] detect furrows by snake to classify pictures of people
into different age groups. Lien [9] detected whole face horizontal, vertical and diagonal edges for face
expression recognition.
In our system, we currently detect nasolabial furrows, nose wrinkles, and crows feet wrinkles. We
define them in two states: present and absent. Compared to the neutral frame, the wrinkle state is present
if the wrinkles appear, deepen, or lengthen. Otherwise, it is absent. After obtaining the permanent facial
features, the areas with furrows related to different AUs can be decided by the permanent facial feature
locations. We define the nasolabial furrow area as the area between eye's inner corners line and lip
corners line. The nose wrinkle area is a square between two eye inner corners. The crows feet wrinkle
areas are beside the eye outer corners.
We use a Canny edge detector to detect edges in these areas. For nose wrinkles and crows
feet wrinkles, we compare the number of edge pixels E in the current frame with the number of edge pixels E0
of the first frame within the wrinkle areas. If E/E0 is larger than a threshold T, the furrows are present.
Otherwise, the furrows are absent. For the nasolabial furrows, we detect the continued diagonal edges.
The nasolabial furrow detection results are shown in Figure 4.
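A minimal sketch of the edge-pixel-ratio test described above, using OpenCV's Canny detector; the region coordinates, threshold value, and function names are illustrative assumptions rather than the paper's actual implementation.

```python
import cv2
import numpy as np

def wrinkle_state(current_gray, neutral_gray, region, T=1.5,
                  canny_lo=50, canny_hi=150):
    """Return 'present' if edges in the wrinkle region grew by more than a
    factor T relative to the neutral (first) frame, else 'absent'.
    region is (x, y, w, h) in image coordinates."""
    x, y, w, h = region
    cur = cv2.Canny(current_gray[y:y + h, x:x + w], canny_lo, canny_hi)
    neu = cv2.Canny(neutral_gray[y:y + h, x:x + w], canny_lo, canny_hi)
    e_cur = np.count_nonzero(cur)
    e_neu = max(np.count_nonzero(neu), 1)   # avoid division by zero
    return "present" if e_cur / e_neu > T else "absent"
```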
4. Facial Feature Representation
Each action unit of FACS is anatomically related to contraction of a specific facial muscle. For
instance, AU 12 (oblique raising of the lip corners) results from contraction of the zygomaticus major
muscle, AU 20 (lip stretch) from contraction of the risorius muscle, and AU 15 (oblique lowering of the
lip corners) from contraction of the depressor anguli muscle. Such muscle contractions produce motion
in the overlying skin and deform shape or location of the facial components. In order to recognize the
Figure 3. Permanent feature tracking results for different expressions. (a) Narrowed eyes and opened,
smiling mouth. (b) Large open eyes, blinking, and widely opened mouth. (c) Tightly closed eyes and eye
blinking. (d) Tightly closed mouth and blinking.
Figure 4. Nasolabial furrow detection results. For the same subject, the nasolabial furrow angle (between
the nasolabial furrow and the line connecting the eye inner corners) is different for different expressions.
subtle changes of facial expression, we represent the upper face features and the lower face features as two
separate groups of parameters, because facial actions in the upper face have little influence on
facial motion in the lower face, and vice versa [4].
To define these parameters, we first define the basic coordinate system. Because the eyes' inner
corners are the most stable features in the face and are relatively insensitive to deformation by facial
expressions, we define the x-axis as the line connecting the two inner corners of the eyes and the y-axis as
perpendicular to the x-axis. In order to remove the effects of different face image sizes in different
image sequences, all the parameters except those describing wrinkle states are calculated as ratio scores by
comparison to the neutral frame.
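The following sketch illustrates the face coordinate frame and ratio-score normalization just described: the x-axis is aligned with the eye inner corners and each measurement is expressed relative to its value in the neutral first frame. The helper names and exact conventions are our assumptions, not the paper's code.

```python
import numpy as np

def face_frame(inner_left, inner_right):
    """Build the face coordinate frame from the two eye inner corners:
    origin at their midpoint, x-axis along the inner-corner line."""
    inner_left = np.asarray(inner_left, float)
    inner_right = np.asarray(inner_right, float)
    origin = (inner_left + inner_right) / 2.0
    x_axis = inner_right - inner_left
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.array([-x_axis[1], x_axis[0]])      # perpendicular to the x-axis
    return origin, x_axis, y_axis

def to_face_coords(point, frame):
    """Express an image point in the face coordinate frame."""
    origin, x_axis, y_axis = frame
    d = np.asarray(point, float) - origin
    return np.array([d @ x_axis, d @ y_axis])

def ratio_score(value, neutral_value):
    """Ratio score relative to the neutral (first) frame, e.g. eye height:
    a positive score means the measurement increased."""
    return (value - neutral_value) / neutral_value
```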
4.1. Upper Face Feature Representation
We represent the upper face features as 15 parameters. Of these, 12 parameters describe the motion
and shape of eyes, brows, and cheeks. 2 parameters describe the state of crows feet wrinkles, and 1
parameter describes the distance between brows. Figure 5 shows the coordinate system and the parameter
meanings. The definitions of upper face parameters are listed in Table 2.
Table 2. Upper face feature representation for AU recognition

Permanent features (left and right), each a ratio score relative to the neutral frame:
Inner brow motion (r_binner): if r_binner > 0, the inner brow moves up.
Outer brow motion (r_bouter): if r_bouter > 0, the outer brow moves up.
Eye height (r_eheight): if r_eheight > 0, the eye height increases.
Eye top lid motion (r_top): if r_top > 0, the eye top lid moves up.
Eye bottom lid motion (r_btm): if r_btm > 0, the eye bottom lid moves up.
Cheek motion (r_cheek): if r_cheek > 0, the cheek moves up.

Other features:
Distance between brows (D_brow).
Left crows feet wrinkles (W_left): if present, the left crows feet wrinkle is present.
Right crows feet wrinkles (W_right): if present, the right crows feet wrinkle is present.
Figure 5. Upper face features. hl and hr are the heights of the left eye and right eye; D is the distance
between the brows; cl and cr are the motions of the left cheek and right cheek; bli and bri are the motions
of the inner parts of the left brow and right brow; blo and bro are the motions of the outer parts of the
left brow and right brow; fl and fr are the left and right crows feet wrinkle areas.
4.2. Lower Face Feature Representation
We define nine parameters to represent the lower face features from the tracked facial features. Of
these, 6 parameters describe the permanent features of lip shape, lip state and lip motion, and 3 parameters
describe the transient features of the nasolabial furrows and nose wrinkles.
We notice that if the nasolabial furrow is present, the angle between the nasolabial furrow and the x-axis
differs across action units. For example, the nasolabial furrow angle of AU9 or AU10 is larger than that
of AU12. So we use the angle to represent the furrow's orientation if it is present. Although
the nose wrinkles are located in the upper face, we classify their parameter with the lower face
features because it is related to the lower face AUs.
The definitions of the lower face parameters are listed in Table 3. These feature data are affine aligned
by computing them relative to the line connecting the two inner corners of the eyes and are normalized for
individual differences in facial conformation by converting them to ratio scores. The parameter meanings
are shown in Figure 6.
Figure 6. Lower face features. h1 and h2 are the top and bottom lip heights; w is the lip width; D_left
is the distance between the left lip corner and the eye inner corners line; D_right is the distance between the
right lip corner and the eye inner corners line; n1 is the nose wrinkle area.
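Since the nasolabial furrow orientation is reported as the angle between the detected furrow and the eye-inner-corner line, a small sketch of that angle computation is given below; the furrow is assumed to be summarized by two endpoint coordinates, which is our simplification.

```python
import numpy as np

def nasolabial_angle(furrow_p1, furrow_p2, inner_left, inner_right):
    """Angle (degrees) between the nasolabial furrow segment and the line
    connecting the two eye inner corners (the x-axis of the face frame)."""
    furrow = np.asarray(furrow_p2, float) - np.asarray(furrow_p1, float)
    eye_line = np.asarray(inner_right, float) - np.asarray(inner_left, float)
    cosang = abs(furrow @ eye_line) / (np.linalg.norm(furrow) * np.linalg.norm(eye_line))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```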
5. Facial Action Unit Definitions
Ekman and Friesen [4] developed the Facial Action Coding System (FACS) for describing facial
expressions by action units (AUs) or AU combinations. AUs are anatomically related to
Table 3. Representation of lower face features for AU recognition
(the subscript 0 denotes the value measured in the neutral first frame)

Permanent features:
Lip height (r_height): ratio score of the lip height; if r_height > 0, the lip height increases.
Lip width (r_width): ratio score of the lip width; if r_width > 0, the lip width increases.
Left lip corner motion: r_left = -(D_left - D_left0)/D_left0; if r_left > 0, the left lip corner moves up.
Right lip corner motion: r_right = -(D_right - D_right0)/D_right0; if r_right > 0, the right lip corner moves up.
Top lip motion: r_top = -(D_top - D_top0)/D_top0; if r_top > 0, the top lip moves up.
Bottom lip motion: r_btm = -(D_btm - D_btm0)/D_btm0; if r_btm > 0, the bottom lip moves up.

Transient features:
Left nasolabial furrow angle (Ang_left): if the left nasolabial furrow is present, it is represented by its angle Ang_left.
Right nasolabial furrow angle (Ang_right): if the right nasolabial furrow is present, it is represented by its angle Ang_right.
State of nose wrinkles (S_nosew): if S_nosew = present, the nose wrinkles are present.
contraction of a specific set of facial muscles. Of these, 12 are for the upper face, and the others are for the lower face.
Action units can occur either singly or in combinations. The action unit combinations may be
additive, such as AU1+5, in which case the combination does not change the appearance of the constituents,
or nonadditive, in which case the appearance of the constituents does change, as in AU1+4. Although
the number of atomic action units is small, more than 7,000 combinations of action units have been
observed [12]. FACS provides the necessary detail with which to describe facial expression.
Table 4. Basic upper face action units or AU combinations

AU 1: Inner portion of the brows is raised.
AU 2: Outer portion of the brows is raised.
AU 4: Brows lowered and drawn together.
AU 5: Upper eyelids are raised.
AU 6: Cheeks are raised.
AU 7: Lower eyelids are raised.
AU 1+4: Medial portion of the brows is raised and pulled together.
AU 4+5: Brows lowered and drawn together and upper eyelids are raised.
AU 1+2: Inner and outer portions of the brows are raised.
AU 1+2+4: Brows are pulled together and upward.
AU 1+2+5+6+7: Brow, eyelids, and cheek are raised.
AU 0 (neutral): Eyes, brow, and cheek are relaxed.
5.1. Upper Face Action Units
Table
4 shows the definitions of 7 individual upper face AUs and 5 non-additive combinations involving
these action units. As an example of a non-additive effect, AU4 appears differently depending on whether
it occurs alone or in combination with AU1, as in AU1+4. When AU4 occurs alone, the brows are drawn
together and lowered. In AU1+4, the brows are drawn together but are raised by the action of AU1. As
another example, it is difficult to notice any difference between the static images of AU2 and AU1+2
because the action of AU2 pulls the inner brow up, which results in a very similar appearance to AU1+2.
In contrast, the action of AU1 alone has little effect on the outer brow.
5.2. Lower face action units
Table
5 shows the definitions of 11 lower face AUs or AU combinations.
6. Image Database
6.1. Image Database for Upper Face AU Recognition
We use the database of Bartlett et al. [1] for upper face AUs recognition. This image database was
obtained from 24 Caucasian subjects, consisting of 12 males and 12 females. Each image sequence
consists of 6-8 frames, beginning with a neutral or very low magnitude facial action and ending
with high magnitude facial actions. For each sequence, action units were coded by a certified FACS
coder.
For this investigation, 236 image sequences from 24 subjects were processed. Of these, 99 image
sequences contain only individual upper face AUs, and 137 image sequences contain upper-face AU
combinations. Training and testing are performed on the initial and final two frames in each image
sequence. For some of the image sequences, lighting normalizations were performed.
To test our algorithm on the individual AUs, we randomly generate training and testing sets from the
image sequences, as shown in Table 6. In TrainS3 and TestS3, we ensure that the subjects do not
appear in both the training and testing sets.
To test our algorithm on both individual AUs and AU combinations, we generate a training set
(TrainC1) and a testing set (TestC1) as shown in Table 6.
Table 5. Basic lower face action units or AU combinations

AU 9: The infraorbital triangle and center of the upper lip are pulled upwards. Nose wrinkling is present.
AU 10: The infraorbital triangle is pushed upwards. Upper lip is raised. Nose wrinkle is absent.
AU 20: The lips and the lower portion of the nasolabial furrow are pulled back laterally. The mouth is elongated.
AU 15: The corners of the lips are pulled down.
AU 17: The chin boss is pushed upwards.
AU 12: Lip corners are pulled obliquely.
AU 25: Lips are relaxed and parted.
AU 26: Lips are relaxed and parted; mandible is lowered.
AU 27: Mouth stretched, open, and the mandible pulled downwards.
AU 23+24: Lips tightened, narrowed, and pressed together.
Neutral: Lips relaxed and closed.
Table 6. Data distribution of each data set for upper face AU recognition (single AU data sets
TrainS1/TestS1, TrainS2/TestS2, and TrainS3/TestS3, and AU combination data sets TrainC1/TestC1;
per-AU counts of AU0, AU1, AU2, AU4, AU5, AU6, and AU7).
6.2. Image Database for Lower Face AU Recognition
We use the data of the Pitt-CMU AU-Coded Face Expression Image Database for lower face AU recognition.
The database currently includes 1917 image sequences from 182 adult subjects of varying ethnicity,
performing multiple tokens of 29 primary FACS action units. Subjects sat directly in front of the
camera and performed a series of facial expressions that included single action units (e.g., AU 12, or
smile) and combinations of action units (e.g., AU 6+12+25). Each expression sequence began from a
neutral face. For each sequence, action units were coded by a certified FACS coder.
In total, 463 image sequences from 122 adults (65% female, 35% male, 85% European-American, 15%
African-American or Asian, ages to 35 years) are processed for lower face action unit recognition.
Some of the image sequences contain more complex action unit combinations such as AU9+17, AU10+17,
AU12+25, AU15+17+23, AU9+17+23+24, and AU17+20+26. For each image sequence, we use the
neutral frame and two peak frames. 400 image sequences are used as training data and 63 different image
sequences are used as test data. The training and testing data sets are shown in Table 7.
Table 7. Training and test data sets for lower face AU recognition (number of neutral, AU9, AU10, AU12,
AU15, AU17, AU20, AU25, AU26, AU27, and AU23+24 samples in the Train and Test sets).
7. Face Action Units Recognition
7.1. Upper Face Action Units Recognition
We used three-layer neural networks with one hidden layer. The inputs of the neural networks are the
parameters shown in Table 2. Three separate neural networks were evaluated. For comparison with
Bartlett's results, the first NN is for recognizing individual AUs only. The second NN is for recognizing
AU combinations when only modeling 7 individual upper face AUs. The third NN is for recognizing AU
combinations when separately modeling nonadditive AU combinations. The desired number of hidden
units to achieve a good recognition was also investigated.
7.1.1 Upper Face Individual AU Recognition
The NN outputs are the 7 individual upper face AUs. Each output unit gives an estimate of the probability
of the input image containing the associated action unit. From experiments, we have found that 6 hidden
units are sufficient.
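As a concrete illustration of this architecture, the sketch below builds a three-layer network with 15 upper-face feature inputs, 6 hidden units, and 7 AU outputs; the layer sizes come from the text, while the framework (PyTorch), activation choices, and any training details are our assumptions.

```python
import torch
import torch.nn as nn

class UpperFaceAUNet(nn.Module):
    """Three-layer network: 15 upper-face feature parameters in,
    7 upper-face AU scores out (Neutral, AU1, AU2, AU4, AU5, AU6, AU7)."""
    def __init__(self, n_inputs=15, n_hidden=6, n_outputs=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.Sigmoid(),
            nn.Linear(n_hidden, n_outputs),
            nn.Sigmoid(),            # each output ~ probability of that AU
        )

    def forward(self, x):
        return self.net(x)

# Example: score one (hypothetical) 15-dimensional feature vector.
model = UpperFaceAUNet()
scores = model(torch.zeros(1, 15))   # tensor of 7 AU probabilities
```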
In order to recognize individual action units, we used training and testing data that include individual
AUs only. Table 8 shows the results of our NN on the TrainS1/TestS1 training and testing sets. A 92.3%
recognition rate was obtained. When we increased the training data by using TrainS2 and tested by using
TestS2, a 92% recognition rate was obtained.
To assess the system's robustness to new faces, we tested our algorithm on the TrainS3/TestS3
training/testing sets. The recognition results are shown in Table 9. The average recognition rate is 92.9%
with zero false alarms. As for misidentifications between AUs, even when the probability of the output
unit for the labeled AU is very close to the highest probability, the result is treated as incorrect.
Table 8. AU recognition for single AUs on TrainS1 and TestS1. The rows correspond to NN outputs,
and the columns correspond to human labels. Average recognition rate: 92.3%.
For example, if we obtain probabilities AU1 = 0.59 and AU2 = 0.55 for an image labeled
AU2, it means that AU2 was misidentified as AU1. When we tested the NN trained on single-AU image
sequences on the data set containing AU combinations, we found the recognition rate decreased to 78.7%.
Table 9. AU recognition for single AUs when all test data come from new subjects who were not used
for training. Average recognition rate: 92.9%.
7.1.2 Upper Face AU Combination Recognition When Modeling 7 Individual AUs
This NN is similar to the one used in the previous section, except that more than one output unit can
fire. We also restrict the outputs to the 7 individual AUs. For additive and nonadditive AU
combinations, the same value is given to each corresponding individual AU in the training data set. For
example, for AU1+2+4, the outputs are AU1=1.0, AU2=1.0, and AU4=1.0. From experiments, we found
we need to increase the number of hidden units from 6 to 12.
Table 10 shows the results of our NN on the TrainC1/TestC1 training/testing set. A 95% average
recognition rate is achieved, with a false alarm rate of 6.4%. The higher false alarm rate comes from
the AU combinations. For example, if we obtain the recognition results AU1 = 0.59 and AU2 = 0.55
for an image labeled AU2, it is treated as AU1+AU2. This means AU2 is recognized but with AU1 as a false
alarm.
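A small sketch of how such multi-label outputs can be decoded into an AU set is given below; the 0.5 decision threshold, the label ordering, and the helper name are illustrative assumptions (the paper reports the raw output probabilities rather than a specific threshold).

```python
UPPER_FACE_AUS = ["AU0", "AU1", "AU2", "AU4", "AU5", "AU6", "AU7"]

def decode_aus(output_probs, labels=UPPER_FACE_AUS, threshold=0.5):
    """Turn the network's per-AU probabilities into a set of detected AUs.
    Every output above the threshold fires, so combinations such as
    AU1+AU2 arise naturally when several units are active at once."""
    detected = [au for au, p in zip(labels, output_probs) if p > threshold]
    return detected or ["AU0"]          # fall back to neutral if nothing fires

# Example from the text: AU1 = 0.59 and AU2 = 0.55 decode to AU1+AU2.
print(decode_aus([0.1, 0.59, 0.55, 0.2, 0.1, 0.05, 0.02]))
```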
Table 10. AU recognition for AU combinations when modeling 7 single AUs only (per-AU counts of
correct, false, missed, and confused classifications and recognition rates). Total: 94. False alarm rate: 6.4%.
7.1.3 Upper Face AU Combination Recognition When Modeling Nonadditive Combinations
For this NN, we separately model the nonadditive AU combinations. The 11 outputs consist of the 7 individual
upper face AUs and 4 non-additive AU combinations (AU1+2, AU1+4, AU4+5, and AU1+2+4). The
non-additive AU combinations and the corresponding individual AUs strongly depend on each other.
Table 11 shows the correlations between AU1, AU2, AU4, AU5, AU1+2, AU1+2+4, AU1+4, and AU4+5
used in the training set. We set the values based on the appearances of these AUs or combinations.
Table 12 shows the results of our NN on the TrainC1/TestC1 training/testing set. An average
recognition rate of 93.7% is achieved, with a slightly lower false alarm rate of 4.5%. In this case,
modeling the nonadditive combinations separately does not improve the recognition rate, due to the fact that
Table 11. The correlations of AU1, AU2, AU4, AU5, AU1+2, AU1+2+4, AU1+4, and AU4+5.
the AUs in these combinations strongly depend on each other.
Table 12. AU recognition for AU combinations when modeling the non-additive AU combinations as
separate AUs (per-AU counts of correct, false, missed, and confused classifications and recognition rates).
Totals: 111 samples, 104 correct, 5 false, 7 missed; average recognition rate 93.7%. False alarm rate: 4.5%.
7.2. Lower Face Action Units Recognition
We used a three-layer neural network with one hidden layer to recognize the lower face action units.
The inputs of the neural network are the lower face feature parameters shown in Table 3. Seven of the nine
parameters are used; the two nasolabial furrow angle parameters are excluded because they vary greatly
across different subjects. Generally, we use the angles to analyze the different expressions of the same subject.
Two separate neural networks are trained for lower face AU recognition. The first NN ignores the
nonadditive combinations and models only the 11 basic single action units shown in Table 5. We use
AU23+24 instead of AU23 and AU24 because they almost always occur together. The second NN separately
models some nonadditive combinations, such as AU9+17 and AU10+17, in addition to the basic single
action units.
The recognition results when modeling basic lower face AUs only are shown in Table 13, with a recognition
rate of 96.3%. The recognition results when modeling non-additive AU combinations are shown in Table 14,
with an average recognition rate of 96.71%. We found that separately modeling the nonadditive combinations
slightly increases lower face action unit recognition accuracy.
All the misidentifications come from AU10, AU17, and AU26. All the mistakes on AU26 are confusions
with AU25. This is reasonable because both AU25 and AU26 involve parted lips, but for AU26 the mandible
is lowered, and we did not use jaw motion information in the current system. All the mistakes on AU10 and
AU17 are caused by image sequences with the AU combination AU10+17. Two instances of AU10+17
are classified as AU10+12, and one instance of AU10+17 is classified as AU10 (missing AU17). The
combination AU10+17 modifies the appearance of the single AUs, and the neural network needs more
AU10+17 training data to learn this modification; there are only ten examples of AU10+17 in the 1220
training samples in our current system. More AU10+17 data are being collected for future training. Our
system is able to identify action units regardless of whether they occur singly or in combinations. Our
system is trained with a large number of subjects, including African-Americans and Asians in
addition to European-Americans, thus providing a sufficient test of how well the initial training analyses
generalize to new image sequences.
For evaluating the necessity of including the nonadditive combinations, we also train a neural network
using 11 basic lower face action units as the outputs. For the same test data set, the average recognition
rate is 96.3%.
8. Conclusion and Discussion
We developed a feature-based facial expression recognition system to recognize both individual AUs
and AU combinations. To localize the subtle changes in the appearance of facial features, we developed
a multi-state method of tracking facial features that uses convergent methods of feature analysis. It has
Table 13. Lower face action unit recognition results when modeling basic lower face AUs only
(per-AU counts of correct, false, missed, and confused classifications and recognition rates).
Table 14. Lower face action unit recognition results when modeling non-additive AU combinations
(per-AU counts of correct, false, missed, and confused classifications and recognition rates).
For AU26: 14 samples, 9 correct, 5 confused with AU25, recognition rate 64.29%.
high sensitivity and specificity for subtle differences in facial expressions. All the facial features are
represented in a group of feature parameters.
The network was able to learn the correlations between facial feature parameter patterns and specific
action units. Although often correlated, these effects of muscle contraction potentially provide unique
information about facial expression. Action units 9 and 10 in FACS, for instance, are closely related
expressions of disgust that are produced by different regions of the same muscle. The shape of the nasolabial
furrow and the state of the nose wrinkles distinguish between them. Changes in the appearance of facial
features also can affect the reliability of measurements of pixel motion in the face image. Closing of
the lips or blinking of the eyes produces occlusion, which can confound optical flow estimation. Unless
information about both motion and feature appearance are considered, accuracy of facial expression
analysis and, in particular, sensitivity to subtle differences in expression may be impaired. A recognition
rate of 95% was achieved for seven basic upper face AUs. Eleven basic lower face action units are
recognized and 96.71% of action units were correctly classified.
Unlike previous methods [9] which build a separate model for each AU and AU combination, we build
a single model that recognizes AUs whether they occur singly or in combinations. This is an important
capability since the number of possible AU combinations is too large (over 7000) for each combination
to be modeled separately.
Using the same database, Bartlett et al. [1] recognized only 6 single upper face action units but no
combinations. The performance of their feature-based classifier on novel faces was 57%; on new images
of a face used for training, the rate was 85.3%. After they combined holistic spatial analysis, feature
measures and optical flow, they obtained their best performance of 90.9% correct. Compared to their
system, our feature-based classifier obtained a higher performance rate of about 92.5% on both novel faces
and new images of a face used for training for individual AU recognition. Moreover, our system works
well for a more difficult case in which AUs occur either individually or in additive and nonadditive
AU combinations. 95% of upper face AUs or AU combinations are correctly classified regardless of
whether these action units occur singly or in combination. Those disagreements that did occur were from
nonadditive AU combinations such as AU1+2, AU1+4, AU1+2+4, AU4+5, and AU6+7. As a result,
more analysis of the nonadditive AU combinations should be done in the future.
From the experimental results, we have the following observations:
1. The recognition performance from facial feature measurements is comparable to holistic analysis
and Gabor wavelet representation for AU recognition.
2. 5 to 7 hidden units are sufficient to code 7 individual upper face AUs. 10 to 16 hidden units are
needed when AUs may occur either singly or in complex combinations.
3. For upper face AU recognition, separately modeling nonadditive AU combinations affords no
increase in recognition accuracy. In contrast, for lower face AU recognition, separately modeling
nonadditive AU combinations affords a slight increase in recognition accuracy.
4. After using sufficient data to train the NN, recognition accuracy is stable for recognizing AUs of
new faces.
In summary, the face image analysis system demonstrated concurrent validity with manual FACS
coding. The multi-state model based, convergent-measures approach proved able to capture the subtle
changes of facial features. In the test set, which included subjects of mixed ethnicity, the average recognition
accuracy was 96.71% for 11 basic action units in the lower face and 95% for 7 basic action units in the upper
face, regardless of whether these action units occurred singly or in combinations. This is comparable to
the level of inter-observer agreement achieved in manual FACS coding and represents an advance over
existing computer-vision systems that can recognize only a small set of prototypic expressions that
vary in many facial regions.
Acknowledgements
The authors would like to thank Paul Ekman, Human Interaction Laboratory, University of California,
San Francisco, for providing the database. The authors also thank Zara Ambadar, Bethany Peters, and
Michelle Lemenager for processing the images. This work is supported by NIMH grant R01 MH51435.
--R
Measuring facial expressions by computer image analysis.
Facial expression in Hollywood's portrayal of emotion.
Classifying facial actions.
The Facial Action Coding System: A Technique For The Measurement of Facial Movement.
Facial feature point extraction method based on combination of shape extraction and pattern matching.
Application of the K-L procedure for the characterization of human faces.
Age classification from facial images.
An iterative image registration technique with an application to stereo vision.
Recognition of facial expression from optical flow.
Handbook of methods in nonverbal behavior research.
Analysis of facial images using physical and anatomical models.
Robust lip tracking by combining shape
Face recognition using eigenfaces.
Recognizing human facial expression from long image sequences using optical flow.
Feature extraction from faces using deformable templates.
--TR
--CTR
Chao-Fa Chuang , Frank Y. Shih, Rapid and Brief Communication: Recognizing facial action units using independent component analysis and support vector machine, Pattern Recognition, v.39 n.9, p.1795-1798, September, 2006
Jia-Jun Wong , Siu-Yeung Cho, Facial emotion recognition by adaptive processing of tree structures, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Robust feature detection for facial expression recognition, Journal on Image and Video Processing, v.2007 n.2, p.5-5, August 2007
Jiatao Song , Zheru Chi , Jilin Liu, A robust eye detection method using combined binary edge and intensity information, Pattern Recognition, v.39 n.6, p.1110-1125, June, 2006
Seong G. Kong , Jingu Heo , Besma R. Abidi , Joonki Paik , Mongi A. Abidi, Recent advances in visual and infrared face recognition: a review, Computer Vision and Image Understanding, v.97 n.1, p.103-135, January 2005
Matthew Turk, Computer vision in the interface, Communications of the ACM, v.47 n.1, January 2004
Alice J. O'Toole , Joshua Harms , Sarah L. Snow , Dawn R. Hurst , Matthew R. Pappas , Janet H. Ayyad , Herve Abdi, A Video Database of Moving Faces and People, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.5, p.812-816, May 2005
Hatice Gunes , Massimo Piccardi , Tony Jan, Face and body gesture recognition for a vision-based multimodal analyzer, Proceedings of the Pan-Sydney area workshop on Visual information processing, p.19-28, June 01, 2004
Mu-Chun Su , Yi-Jwu Hsieh , De-Yuan Huang, A simple approach to facial expression recognition, Proceedings of the 2007 annual Conference on International Conference on Computer Engineering and Applications, p.456-461, January 17-19, 2007, Gold Coast, Queensland, Australia
Dong Liang , Jie Yang , Zhonglong Zheng , Yuchou Chang, A facial expression recognition system based on supervised locally linear embedding, Pattern Recognition Letters, v.26 n.15, p.2374-2389, November 2005
Congyong Su , Li Huang, Spatio-temporal graphical-model-based multiple facial feature tracking, EURASIP Journal on Applied Signal Processing, v.2005 n.1, p.2091-2100, 1 January 2005
Yongmian Zhang , Qiang Ji, Active and Dynamic Information Fusion for Facial Expression Understanding from Image Sequences, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.5, p.699-714, May 2005
Shyi-Chyi Cheng , Ming-Yao Chen , Hong-Yi Chang , Tzu-Chuan Chou, Semantic-based facial expression recognition using analytical hierarchy process, Expert Systems with Applications: An International Journal, v.33 n.1, p.86-95, July, 2007
Philipp Michel , Rana El Kaliouby, Real time facial expression recognition in video using support vector machines, Proceedings of the 5th international conference on Multimodal interfaces, November 05-07, 2003, Vancouver, British Columbia, Canada
Benjamn Hernndez , Gustavo Olague , Riad Hammoud , Leonardo Trujillo , Eva Romero, Visual learning of texture descriptors for facial expression recognition in thermal imagery, Computer Vision and Image Understanding, v.106 n.2-3, p.258-269, May, 2007
Iain Matthews , Jing Xiao , Simon Baker, 2D vs. 3D Deformable Face Models: Representational Power, Construction, and Real-Time Fitting, International Journal of Computer Vision, v.75 n.1, p.93-113, October 2007
Yan Tong , Yang Wang , Zhiwei Zhu , Qiang Ji, Robust facial feature tracking under varying face pose and facial expression, Pattern Recognition, v.40 n.11, p.3195-3208, November, 2007
Maria Shugrina , Margrit Betke , John Collomosse, Empathic painting: interactive stylization through observed emotional state, Proceedings of the 4th international symposium on Non-photorealistic animation and rendering, June 05-07, 2006, Annecy, France
Haisong Gu , Yongmian Zhang , Qiang Ji, Task oriented facial behavior recognition with selective sensing, Computer Vision and Image Understanding, v.100 n.3, p.385-415, December 2005
Yanxi Liu , Karen L. Schmidt , Jeffrey F. Cohn , Sinjini Mitra, Facial asymmetry quantification for expression invariant human identification, Computer Vision and Image Understanding, v.91 n.1-2, p.138-159, July
Tao Xiang , Shaogang Gong, Model Selection for Unsupervised Learning of Visual Context, International Journal of Computer Vision, v.69 n.2, p.181-201, August 2006
Chuang , Christoph Bregler, Mood swings: expressive speech animation, ACM Transactions on Graphics (TOG), v.24 n.2, p.331-347, April 2005
Zhang , Zicheng Liu , Dennis Adler , Michael F. Cohen , Erik Hanson , Ying Shan, Robust and Rapid Generation of Animated Faces from Video Images: A Model-Based Modeling Approach, International Journal of Computer Vision, v.58 n.2, p.93-119, July 2004
R. W. Picard , S. Papert , W. Bender , B. Blumberg , C. Breazeal , D. Cavallo , T. Machover , M. Resnick , D. Roy , C. Strohecker, Affective Learning A Manifesto, BT Technology Journal, v.22 n.4, p.253-269, October 2004
Aleix M. Martnez, Recognizing Imprecisely Localized, Partially Occluded, and Expression Variant Faces from a Single Sample per Class, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.6, p.748-763, June 2002
Scott Brave , Clifford Nass, Emotion in human-computer interaction, The human-computer interaction handbook: fundamentals, evolving technologies and emerging applications, Lawrence Erlbaum Associates, Inc., Mahwah, NJ, 2002 | neural network;facial expression analysis;action units;multistate face and facial component models;computer vision;facial action coding system;AU combinations |
365027 | Scale effects in steering law tasks. | Interaction tasks on a computer screen can technically be scaled to a much larger or much smaller input control area by adjusting the input device's control gain or the control-display (C-D) ratio. However, human performance as a function of movement scale is not a well-settled topic. This study introduces a new task paradigm to study the scale effect in the framework of the steering law. The results confirmed a U-shaped performance-scale function and rejected the straight-line and no-effect hypotheses in the literature. We found a significant scale effect in path steering performance, although its impact was less than that of the steering law's index of difficulty. We analyzed the scale effects in terms of two plausible causes: a shift of movement joints and motor precision limitations. The theoretical implications of the scale effects for the validity of the steering law, and the practical implications for input device size and zooming functions, are discussed in the paper. | INTRODUCTION
This research addresses the following questions: Can we successfully
accomplish the two steering tasks in Figure 1 in the
same amount of time? Can a large input device be substituted
with a small one without significantly impacting user
performance? Does size matter to input control quality? Can
a small-sized input area be compensated by higher control
gain (i.e. control-display ratio)? What are the scale effects
in movement control, if any? How sensitive are the scale
effects?
There are many practical reasons to ask these questions. One
concerns the miniaturization of the computing devices. We
are indeed stepping into the long-awaited era of inexpensive,
powerful and portable computers. In the rush towards minia-
turization, input devices are expected to adapt to the system
physical constraints: trackballs now come in a much smaller
Figure 1: The two circular tunnels are equivalent in steering law difficulty but they differ in movement
scale. Does it take the same amount of time to steer through the two tunnels?
diameter than before and touchpads are designed with a fairly
small contact surface, for instance. It is not clear whether
these reduced-size input devices still maintain the same level
of performance as their predecessors.
If we push the question of scale to the extreme, the answer
is obvious: of course size matters. Humans cannot do well at movement scales that are either greater
than their arm's reach or smaller than their absolute motor precision tolerance. Within these extremes,
however, the question is much
more difficult to answer.
RELATED WORK AND LITERATURE
One might imagine that the scale effects in input control
should be a well documented topic in the human-machine
system literature. In reality, however, the results were scattered
and controversial. The scale effects were often studied
and reported under two related concepts: control gain and
control-display (CD) ratio. When the display (output) size
is fixed, these two concepts correspond to control movement
scale. Major handbooks [5, 6, 15] tend to suggest that human
performance is an inverted U-shaped function of control gain
or CD ratio: it reaches the highest point in a medium range
of the control gain and deteriorates in both directions away
from this range. Such a U function was usually found in studies
that involved control systems with higher order dynamics
(e.g. rate control system, or systems with inertia or lag).
Hess [11] is a common source regarding the U-shaped func-
tion. In his experiment, subjects performed tracking tasks by
manipulating a near-isometric joystick in rate control. A U-shaped
function was found between participants' subjective
rating and the system control gain 1 .
Gibbs [9] provided the most comprehensive set of data on
control gain. He studied control gain in both positional and
rate control systems and found that the target acquisition time
follows the function:
T = 0.02 G + 0.106 (L + 0.003) / G        (1)
where G is the control gain and L is the system lag. The
function produced a U-shaped curve when L was greater than zero. When L was zero (no system lag), the
performance-gain function produced a straight line: the greater the gain (which means the smaller the
movement scale), the worse the performance.
Buck [7] called into question the views on the significance
of CD gain. Based on results from a target alignment exper-
iment, he argued that target width on both the control device
and the display were important, but their ratio was not.
Arnaut and Greenstein [3] conducted a rather convoluted study
in which control input magnitude (movement scale), display
output magnitude, display target width, control target
width and Fitts' index of difficulty were varied in two experi-
ments. They found that a greater movement scale increased
gross movement time but decreased fine adjustment time.
Gross and fine movements were defined by the initial entry
point in the target. The total completion time, in the case
of a tablet, was a U-shaped function. In the case of a track-
ball, however, the greater movement scale increased the total
completion time monotonically. They concluded that a combination
of gain and Fitts' index of difficulty could be a more
useful predictor than either of them alone.
Jellinek and Card studied users' performance as a function
of the control gain in a computer mouse [12]. They found
a U-shaped performance-gain function, but argued against
its status as a basic human performance characteristic. They
believed the performance loss in case of a large control gain
was due to the loss of relative measurement resolution (i.e. a
quantization effect). If there were not a resolution limit, and
as long as the control gain was in a "moderate" range, a user's
performance should have stayed constant, so they argued.
It is necessary to clarify that the most basic construct, in our
view, should be the "control movement scale". Other vari-
ables, such as control gain and CD ratio, are derivative or sec-
ondary. We think the concept of C-D ratio (or gain) in itself
is partially responsible for the contradictions in the literature.
By definition, C-D ratio is a compound variable between the
display scale and the control movement scale. The same relative
C-D ratio could have very different implications on input
1 Note that the notion of control gain is related but not always interchangeable
with movement scale or C-D ratio. Control gain, a term originated
in feedback control theory, exists in both zero order (position control)
and higher order systems. Control display ratio and movement scale only
exist in position control systems. For example, since a force joystick was
used, there was no control movement scale per se in Hess' study [11].
control, depending on the display size. Control movement
scale, on the other hand, is absolute and can be compared to
human body measurements. Furthermore, between the display
and the control movement scale, the former is more relevant
to perception and the latter is more directly relevant to
control performance.
One implication of movement scale is the limb segments (or
motor joints) involved in executing a task. Although limb
segments rarely work in isolation, a large movement (e.g.
m) tends to be carried out primarily by the arm (shoulder
and elbow joints), a medium range by the hand (wrist joint),
a small range (e.g. 10 mm) by the fingers. Langolf et al. [13]
demonstrated that Fitts' law gave different slopes and intercepts
in finger, wrist, and forearm scales 2 . They came to
the conclusion that the smaller the scale, the greater the aiming
performance, which in terms of primary limb segments
means that:
fingers > wrist > forearm (2)
This performance order was confirmed by Balakrishnan et
al. [4], who found that a combined use of multiple fingers
resulted in higher performance than other limb segments.
However they noted that "the finger(s) do not necessarily perform
better than the other segments of the upper limb" when
a single finger was involved 3 .
In a six degrees of freedom docking task, Zhai et al. [16]
showed that relative performance of 6-DOF devices did depend
on the muscle groups used. More specifically, they
demonstrated that the user performance was superior with the
fingers involved (together with the wrist and the arm) in operating
the control device than without (wrist and arm only).
It is natural to ask the question of scale in light of the well-known
Fitts' law [8, 14]), which predicts that the time T to
select a target of width W that lies at distance A is:
A
where a and b are empirically determined constants. The
logarithmic transformation of the ratio between A and W is
called the index of difficulty of the task. Some researchers
argue that control scale should not matter in view of Fitts'
law [12]. If a reaching task is scaled by a factor of two, both
the distance A and the width W will be twice as large and
hence cancel each other in the index of difficulty measure.
On the other hand, the impact of scale could be reflected in a
and b, as shown in [13].
The validity of index of difficulty as the sole determinant of
aimed movement has been recently called into question by
Guiard [10]. He argued that the way Fitts' law was studied
and applied in the past was problematic; both difficulty
and scale should be viewed as the basic dimensions of aimed
movement.
Some objections have been raised about this study, suggesting a faulty
experimental design [4]. But the finding that the index of performance varies
with movement scale is widely accepted.
3 Note that in the Balakrishnan study, however, the finger movement was
controlled in the lateral direction, which does not occur frequently in natural
movement.
We recently established a movement law that models human
performance in a different class of tasks: trajectory-based
tunnel steering [1]. It is both theoretically and practically
necessary to study the scale effects in relation to the steering
law. Theoretically, it is important to investigate how the
steering law prediction is affected by movement scale. Prac-
tically, the steering law may serve as a platform based on a
new class of tasks for studying the control movement scale
effects, which may guide the design and selection of inter-action
devices and techniques. Some input devices, such as
tablet, are primarily designed for trajectory-based tasks.
To move a stylus tip or a cursor through a tunnel or path (see
Figure
1 for examples) without crossing the boundaries is a
steering task. One common steering task in HCI is traversing
multi-layered menus. In a recent study [1], we proposed and
validated a theoretical model for the successful completion
of steering tasks. This model, called the steering law, comes
in both an integral and a local form.
Integral form
The integral form of the steering law states that the difficulty
for steering through a generic tunnel C can be determined
by integrating the inverse of the path width along the tunnel
(see [1] for details). Formally, we define the index of difficulty
ID_C for steering through C by:
ID_C = ∫_C ds / W(s)        (4)
where the integration variable s stands for the curvilinear abscissa
along the path. As in Fitts' law, the steering task difficulty
ID_C predicts the time T needed to steer through tunnel C in a simple linear form:
T = a + b ID_C        (5)
where a and b are constants. Finally, by analogy to Fitts' law, we define the index of performance IP
in a steering task by IP = 1/b. This quantity is usually used for comparing
steering performance between experimental conditions.
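The integral form lends itself to a simple numerical approximation when the path is given as a sampled polyline with a width at each sample; the sketch below is such an approximation (trapezoidal summation of ds/W), with function and variable names of our own choosing rather than anything from the paper.

```python
import numpy as np

def steering_id(points, widths):
    """Approximate ID_C = integral of ds / W(s) along a sampled path.
    points: (n, 2) array of path samples; widths: length-n array of W(s)."""
    points = np.asarray(points, float)
    widths = np.asarray(widths, float)
    ds = np.linalg.norm(np.diff(points, axis=0), axis=1)     # segment lengths
    inv_w = 1.0 / widths
    mid_inv_w = (inv_w[:-1] + inv_w[1:]) / 2.0               # trapezoidal rule
    return float(np.sum(ds * mid_inv_w))

# Example: a straight 400-unit tunnel of constant width 20 gives ID = 20.
xs = np.linspace(0.0, 400.0, 101)
path = np.stack([xs, np.zeros_like(xs)], axis=1)
print(steering_id(path, np.full(101, 20.0)))
```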
The steering law also has a local formulation, which states that the instantaneous speed at any point in a
steering movement is proportional to the permitted variability at that point:
v(s) = W(s) / τ        (6)
where v(s) is the velocity of the limb at the point of curvilinear abscissa s, W(s) is the width of the path
at the same point, and τ is an empirically determined time constant.
Types of tunnels
Equation 4 allows the calculation of steering difficulty for a
wide range of tunnel shapes. In [1], three shapes were tested:
straight, narrowing and spiral tunnels. It was suggested [2]
that the properties of a great variety of tunnel shapes could be
captured by two common tunnel shapes: a linear tunnel and
a circular one (Figure 2). For both of the two steering tasks,
the steering law can be reduced to the following simple form:
where A is the tunnel length in the case of linear tunnels
and the perimeter of the center circle in the case of circular
tunnels (Figure 2). In both cases W stands for the path width.
a and b are experimentally determined constants. They were
found to be different for linear and circular tunnels, due to
the very different nature of steering in the two cases.
Figure 2: Two steering tasks. (a) Linear tunnel. (b) Circular tunnel.
Influence of scale
One will notice from Equation 7 that the argument against
scale effects based on Fitts' law can also be found here: in
both the linear and the circular path, the steering difficulty depends on the length/width ratio, such that
dividing both the length and the width of the steering path by a factor k gives the same index of difficulty.
In other words, although the speed in a tunnel of width W/k will be k times slower than in a tunnel of
width W, this decrease in speed should be fully compensated by the steering length being shortened by
the same ratio, such that
the movement time remains the same.
It is thus pertinent to ask whether the steering law still holds
over very different scales and, if not, how significant the scale
impact is.
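A tiny numerical check of this scale-invariance argument (our own illustration, not code from the paper): integrating ds/W around a circular tunnel at two different scales yields the same index of difficulty.

```python
import numpy as np

def circular_id(radius, width, n=2000):
    """Numerically integrate ds / W along a circle of the given radius,
    which should come out to (2 * pi * radius) / width."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    pts = np.stack([radius * np.cos(t), radius * np.sin(t)], axis=1)
    ds = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return float(np.sum(ds / width))

# The same circular tunnel at full scale and scaled down by a factor of 4:
print(circular_id(100.0, 20.0))   # ~31.4
print(circular_id(25.0, 5.0))     # ~31.4, identical difficulty at a smaller scale
```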
The experimental task was steering through linear and circular
tunnels at five different scales. The scales were chosen
to cover a broad range of movement amplitudes so as to
guarantee that different combinations of motor joints were
tested. The input device used in the experiment was a graphics
tablet, which, in comparison to other input devices, provided
the most direct interaction, hence allowing us to focus
on more fundamental human performance characteris-
tics. Depending on the movement scale the movement of a
tablet stylus may be controlled by the fingers, the wrist, or the
arm joints. When operating the stylus, multiple fingers work
in conjunction, which should be much better than a single
finger working in isolation [4].
Ten volunteers participated in the experiment. All were right-handed
and had no or little experience using graphics tablets.
Apparatus
The experiment was conducted on a PC running Linux, with
a 24-inch GW900 Sony monitor (1920 × 1200 pixel resolution), and equipped with a Wacom UD-1218E
tablet (455 × 303 mm active area, 1276 × 1277 dpi resolution). The computer
system was sufficiently fast that the input or feedback
lag was not perceptible. The size of the active view of the
monitor was set exactly equal to the size of the active area of
the tablet, which gives an approximate 107 × 100 dpi screen
resolution. Different portions of the tablet area were mapped
onto the screen depending on the movement scale currently
being tested (mappings are detailed in the design section).
All experiments were done in full-screen mode, with the background
color set to black.
Procedure
Subjects performed two types of steering tasks: linear tunnel
and circular tunnel steering (Figure 2). At the beginning of
each trial, the path to be steered was presented on the screen,
in green color. After placing the stylus on the tablet (to the
left of the start segment) and applying pressure to the stylus
tip, the subject began to draw a blue line on a screen, showing
the stylus trajectory. When the cursor crossed the start
segment, left to right, the line turned red, as a signal that the
task had begun and the time was being recorded. When the
cursor crossed the end segment, also left to right, all drawings
turned yellow, signaling the end of the trial. Crossing
the borders of the path resulted in the cancellation of the trial
and an error being recorded. Releasing pressure on the stylus
after crossing the start segment and before crossing the
second, but without crossing the tunnel border, resulted in
an invalid trial, but no error was recorded 4 . Subjects were
asked to minimize errors. Finally, linear tunnels were all oriented
horizontally and were to be steered left to right; as for
circular steering, it had to be done clockwise.
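The trial logic just described can be summarized as a small state machine; the sketch below is our own schematic rendering of those rules (the event names and segment-crossing tests are simplifications), not the experiment software.

```python
def trial_step(state, crossed_start, crossed_end, outside_tunnel, pen_down):
    """Advance one steering trial given events for the current sample.
    States: 'ready' (before the start segment), 'steering', 'done',
    'error' (border crossed), 'invalid' (pen lifted mid-trial)."""
    if state == "ready" and crossed_start and pen_down:
        return "steering"              # timing starts here
    if state == "steering":
        if outside_tunnel:
            return "error"             # crossing a border cancels the trial
        if not pen_down:
            return "invalid"           # pressure released: trial not counted
        if crossed_end:
            return "done"              # timing stops here
    return state
```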
Design
A fully-crossed, within-subject factorial design with repeated
measures was used. Independent variables were movement
scale detailed below), test phase
first and second block), task type linear and circular
tunnels), tunnel length
width on the screen pixels). The tunnel
lengths and widths define 6 different IDs, ranging from 4 to
33. The order of testing of the five scales (S conditions) was
balanced between five groups of subjects according to a Latin
square pattern. Within each S condition, subjects performed
a practice session, consisting of 1 trial in each of the 6 ID
conditions, in both linear and circular steering. The practice
session was followed by two identical sets of the 12 T-A-W conditions presented in random order, during which data
was actually collected. Subjects performed 3 trials in each
S-P -T -A-W condition.
The five scales were chosen considering the maximum movement amplitude for each arm segment and in order to cover
the maximum number of motor "strategies". They were:
very large scale (S = 1): the whole active area of the tablet (455 × 303 mm) was used, which corresponded
to standard A3 format. This scale involves movement amplitudes typically around 20 cm, which require
mainly forearm movements.
large scale (S = 2): the active tablet area was 227 × 151 mm,
around 20 cm, which require mainly forearm movements
large scale (S =2): the active tablet area was 227151 mm,
which was one half of the tablet in both dimensions. This
was equivalent to a A5-sized tablet. In this scale, movement
amplitudes are typically around 10 cm, which require mainly
wrist movements but involve to a certain extent the use of the
forearm.
medium scale (S = 4): with an active area of 114 × 76 mm (1/4 of the tablet in both dimensions),
movement amplitudes in this scale condition
are around 5 cm, which require mainly finger and wrist
movements and prevent the use of the forearm. This scale was somewhat equivalent to an A6-format tablet.
small scale (S = 8): the tablet active area size was 57 × 38 mm (1/8 of the tablet in both dimensions).
Typical movement amplitudes in that condition are about 2 cm, which require finger movements
and to some extent wrist movements. This was the size of a
touchpad used in some notebook computers.
very small scale (S=16): the active area of
the tablet (1/16 of the tablet in each dimension) implied very small movement
amplitudes, around 1 cm, which require finger movements
exclusively, with the wrist and forearm joints stabilized on
the tablet surface. Note that this smallest scale tested was
still orders of magnitude above the tablet resolution, hence
avoiding the possible quantization effect of the hardware that may have
affected previous studies.
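The nominal active areas above follow from dividing both tablet dimensions by the scale factor; the sketch below reproduces them (values are rounded and may differ by a millimeter from the quoted figures, and the heights for the two smallest scales are inferred the same way rather than quoted from the text).

# Nominal active-area dimensions for each movement scale (approximate).
# The full tablet is 455 x 303 mm; each scale S divides both dimensions by S.
FULL_W_MM, FULL_H_MM = 455.0, 303.0

for s in (1, 2, 4, 8, 16):
    w, h = FULL_W_MM / s, FULL_H_MM / s
    print(f"S={s:2d}: active area ~ {w:.0f} x {h:.0f} mm")
# Typical movement amplitudes roughly halve with each doubling of S,
# from about 20 cm at S=1 down to about 1 cm at S=16, as described above.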
Figure 3 illustrates the relative size of active areas of the
tablet for the different movement scales. The outermost box,
labeled S=1, corresponds to the whole tablet active area.
Figure 3: Relative active tablet sizes at different scales
Table 1 shows the movement amplitudes and path widths in
input space for each scale condition: for instance, the tunnel
to be steered on the graphics tablet when S=4, W=60 and
A=250 has a width of 5 mm and a length of 14.8 mm.
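The input-space values in Table 1 can be approximated from the screen-space tunnel dimensions, the screen resolution, and the scale factor. The sketch below uses the roughly 107 dpi horizontal resolution mentioned in the apparatus description as an assumption; with it, the 14.8 mm length quoted for the S=4, A=250 example is reproduced (the width conversion depends on the vertical mapping, which is not fully specified in this excerpt).

# Convert a tunnel dimension from screen pixels to tablet millimeters.
MM_PER_INCH = 25.4
DPI_X = 107.0  # assumed horizontal screen resolution, from the apparatus description

def screen_px_to_tablet_mm(pixels, scale, dpi=DPI_X):
    screen_mm = pixels / dpi * MM_PER_INCH   # physical extent on the monitor
    return screen_mm / scale                 # tablet motion is 1/scale of the screen extent

# Example from the text: S=4, A=250 pixels -> about 14.8 mm on the tablet.
print(round(screen_px_to_tablet_mm(250, 4), 1))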
Finally, in light of the movement scale vs. C-D ratio and display
scale discussion, the visual stimuli were kept at the same
size across all five movement scales, so that no visual perception
effect could influence the results. The experimental software
was identical for all scales; only the tablet scale settings
were changed.
Table 1: Movement amplitudes and path widths in input
space for each scale condition (in millimeters).
The results of the experiment include steering time, steering
speed and error rates.
Steering time
As expected, movement amplitude and tunnel width significantly
influenced steering time; there was also a
strong interaction between movement amplitude and tunnel
width, which is consistent with
the fact that steering time depends on the ratio of amplitude
to width. As in [2], steering type (linear vs. circular)
also proved to be a significant factor influencing steering time.
As for the studied variable, movement
scale, it had a significant influence on steering performance.
While the significant impact
of test phase shows a strong learning
effect, the non-significant interaction between test phase
and movement scale suggests that the
influence of scale is likely not to vary much with practice.
Paired t-tests between scale levels classified the scales into
three groups. The first group includes scales 2 and 4, the second
includes scales 1 and 8, and the last one is only composed
of scale 16. The differences were insignificant between
the two scales of the first group (p > .08) and between the two
scales of the second group (p > .31). The scales of the
first group significantly outperformed the scales of the second
group (p < .0001 for all compared pairs), while the
last group was outperformed by both the first group (p < .0001)
and the second one (p < .0001). The ranking between movement
scales in terms of time performance is therefore:
{scale 2, scale 4} > {scale 1, scale 8} > {scale 16}.
This grouping of scales and the ranking between groups held
in both linear and circular steering. Figure 4 summarizes the
average steering time depending on the movement scale and
steering task.
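The scale-level comparisons reported above amount to paired t-tests on per-subject mean steering times; the sketch below illustrates the procedure on placeholder data (the array contents are hypothetical, not the study's measurements).

# Paired t-tests between movement-scale levels on per-subject mean steering times.
import numpy as np
from scipy import stats

scales = [1, 2, 4, 8, 16]
# Placeholder per-subject mean steering times (ms); one row per subject.
times = np.array([
    [1450, 1320, 1350, 1480, 1700],
    [1500, 1290, 1310, 1440, 1650],
    [1380, 1250, 1280, 1400, 1620],
])

for i in range(len(scales)):
    for j in range(i + 1, len(scales)):
        t, p = stats.ttest_rel(times[:, i], times[:, j])
        print(f"scale {scales[i]} vs scale {scales[j]}: t={t:.2f}, p={p:.3f}")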
There was also a strong interaction between movement scale
and tunnel type which suggests that
the circular tasks were more sensitive to changes in movement
scale than the linear ones (see Figure 4). We also found
a significant interaction between scale and amplitude (F = 3.2,
p < .01), as well as between scale and width (p < .0001).
The movement scale effect was greater when
the movement amplitude was greater or the tunnel width was
Figure 4: Steering time as a function of scale
smaller (Figure 5). This was especially true for the scale-16
conditions: very long amplitudes or very narrow tunnels in
this case were very difficult to steer, such that subjects often
could complete the trial only after a couple of attempts (see the error
rates analysis below). However, a significant interaction
between test phase and movement amplitude (p < .001)
suggests that subjects tend to deal with long amplitudes
much better with practice.
As for the fit to the steering law model, the integral form
of the steering law [1] proved to hold at all studied scales
with very good regression fitness (see Figure 6); regression
models of steering time (in ms) were fitted separately for
linear and for circular steering. The slope of the linear
regressions was significantly influenced by tunnel type and by
movement scale (p < .05). The intercept was not significantly
affected by tunnel type or by movement scale; intercepts
were rather small compared to the total times involved,
which is consistent with previous results [1, 2].
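The model referred to here is the steering law T = a + b*ID, with ID = A/W (path length over width, for constant-width straight or circular tunnels) and index of performance IP = 1/b. The sketch below illustrates these relations using the slopes implied by the IP values reported in the discussion (1/62 bits/ms for linear and 1/179 bits/ms for circular steering at scale 2); the intercepts are described only as small, so a = 0 is an assumption.

# Steering law: T = a + b * ID, with ID = A / W and IP = 1 / b.
def steering_id(path_length, width):
    return path_length / width

def predicted_time_ms(ID, a_ms, b_ms_per_bit):
    return a_ms + b_ms_per_bit * ID

# The A=250 px, W=60 px example from the design section gives one of the easiest IDs.
print("example ID for A=250 px, W=60 px:", round(steering_id(250, 60), 1))

b_linear, b_circular = 62.0, 179.0   # ms/bit, i.e. IP = 1/62 and 1/179 bits/ms (scale 2)
for ID in (4, 15, 33):               # the study's IDs range from 4 to 33
    print(f"ID={ID}: linear ~{predicted_time_ms(ID, 0.0, b_linear):.0f} ms, "
          f"circular ~{predicted_time_ms(ID, 0.0, b_circular):.0f} ms")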
Similar to the results found for steering times, the relationship
between the steering law index of performance and the
movement scale was an (inverted) U-shaped function (Figure 7).
Figure 5: Steering time against scale depending on movement amplitude and tunnel width ((a) interaction between movement amplitude and movement scale; (b) interaction between tunnel width and movement scale)
The best performance occurred in the medium scales 2 and 4
(with scale 2 slightly higher than scale 4 in linear steering),
lower performance in scales 1 and 8, and the lowest in scale
16. All scales resulted in almost equivalent performance for
very easy tasks, but significant differences appeared for very
difficult tasks; this was characterized by a significant statistical
interaction between scale and the index of difficulty of the tasks.
In conclusion, the time performance was the highest when
the movement scale was between scale 2 and scale 4. In
terms of tablet size, this corresponds to A4/A5-sized tablets.
In terms of motor joints involved, it was when the wrist and
the fingers were the primary movement carriers. A3-sized
tablets seemed to be too large and to require too much movement
effort, while tablets smaller than A6 format were likely
to amplify noise beyond reasonable rates.
Figure 6: Steering time against ID, for (a) linear steering and (b) circular steering
Errors
Besides the expected main effects of movement amplitude
and tunnel width on error rates, movement scale had a strong
influence on error occurrence, and circular steering resulted
in more errors than linear steering (p < .0001). A significant
interaction between scale and tunnel type shows that the number
of errors increases much faster for circular steering than
for linear steering when the movement scale decreases (see
Figure 8). Finally, interactions between movement scale and
tunnel width, and between movement scale and movement amplitude,
indicate that errors are more likely in long or narrow
tunnels when the tablet is very small: in the scale-16 conditions,
natural tremor and biomechanical noise were greatly
amplified, such that subjects systematically made a few failed
trials in a sequence, even though they did their best.
Figure 7: Index of performance against scale
Figure 8: Average number of errors in each scale
To conclude, the smaller the scale, the higher the error rates.
Considering that the optimal scales were 2 and 4 for time
performance, it appeared that the scale-2 condition had the
best overall performance while considering both time performance
and error rate.
DISCUSSION AND CONCLUSION
Movement scale, control gain, control-display ratio, and motor
joint performance differences are a set of related concepts
in the input control literature without consistent conclusions.
In terms of movement scale or control gain effects,
some researchers found a U-shaped function [11, 3]; others a
straight linear function [9]; and yet others did not believe gain
or scale should matter much [12, 7]. Traditionally these issues
have been studied in the framework of target acquisition
tasks. We have conducted a systematic study on these issues
in a new paradigm - the steering law. Furthermore, we focused
on the most fundamental concept of them all - control
movement scale.
Our results supported the U-shaped performance-scale function:
scale does matter. The U-shaped function is plausible
at the two extrema: too large a scale is beyond the arm's
reach and too small a scale is beyond motor control precision.
But even within the "moderate" range we tested, the U shape
was still clearly demonstrated. The cause of the U-shaped
function in this range is likely to be twofold: shifts in the motor
joints involved and the limits of human motor precision.
The best performance appeared in the middle range (scale 2
and 4), when the movements were carried out by all parts of
the upper limb (arm, hand, and finger), although the arm's
role might be lesser than the hand and finger. In this range,
the steering IP was (in bits/ms) 1/62 and 1/74 for linear
steering, and 1/179 and 1/174 for circular steering. On the
larger scale side, when the movements were primarily carried
out by the arm, steering performance dropped to
1/81 for linear and 1/200 for circular steering.
On the smaller scale side (scale 8 and 16), when the movements
were carried out mostly by fingers, the performance
also dropped (to 1/85 and 1/227 for scale 16). However,
we cannot conclude that the fingers are inferior,
because the other factor, the motor control precision limitation,
became increasingly limiting as the movement scale decreased.
This was clearly demonstrated by the number of
errors (failed attempts to complete the entire steering path)
shown in Figure 8: participants increasingly "accidentally"
moved out of the tunnel. Note that because we maintained
the same visual display size for all scale conditions, the error
had to be on the motor precision side.
The more theoretical implication of the results pertains to the
validity of the steering law [1]. Similar to Fitts' law, the steering
law states that the difficulty of movement lies in relative
accuracy. The two steering tasks in Figure 1 are exactly the
same in steering law terms. This study shows that while the
fitness of the steering law held very well in all levels tested,
the movement scale does have an impact on the steering law's
index of performance. A steering law model with strictly the
same index of performance is only valid if the scale does not
vary so widely that the motor joint combination shifts fundamentally
or that control precision becomes the primary limiting
factor.
It is interesting to realize that the impact of scale is much
less significant than the steering law's index of difficulty. For
example, the range of scale we tested varied by a factor of
16, but the largest steering time difference was only 17% -
an impact equivalent to only a 17% change in the steering ID
(either a 17% longer or a 15% narrower tunnel).
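The equivalence stated here follows from the steering law with a negligible intercept: if T is roughly proportional to ID = A/W, a 17% change in steering time corresponds to a 17% change in ID, i.e. a 17% longer tunnel or a tunnel narrower by a factor 1/1.17 (about 15%). A two-line check:

# With T proportional to ID = A/W, a 17% time difference maps onto ID as follows.
factor = 1.17
print("equivalent longer tunnel:", round((factor - 1) * 100), "%")        # 17% longer A
print("equivalent narrower tunnel:", round((1 - 1 / factor) * 100), "%")  # about 15% narrower W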
There are also many practical implications in our results. For
example, the size of a computer input device (tablet or mouse
and its pad) should be such that the fingers, the wrist, and to a
lesser extent the forearm are all involved in the operation. Another
practical implication lies in the design and use of zooming
interfaces. In fact, users often unconsciously make an effort
to stay at the bottom of the U-shaped scale function by zooming
up when their motor precision limits their performance,
and zooming down when too much of the movement has to
be carried out by large arm movements. How we can deliberately
apply the findings in the present study to assist zooming
is an interesting future research issue.
Based on the results of this study, we can begin to answer
some of the questions we raised at the beginning of this paper
on a more scientific ground. First, we found that device
size and movement scale indeed affect input control quality:
people do not accomplish the two steering tasks in Figure 1
in the same amount of time. Furthermore, small scales tend
to be limited by motor precision and large scales by arm
dexterity, but scale in the medium range does not significantly
influence performance. Consequently, substituting
a large input device with a small one will not significantly
impact user performance if there is no fundamental change
in the muscle groups involved; but it will if the substituted
input device is so small that human motor control precision
becomes a limiting factor. The scale effects are also not very
pronounced in comparison to the steering law index of difficulty
effects (e.g., changing the tunnel width while keeping the same
length). Finally, the question with regard to control gain or
control-display ratio is ill-posed, because what matters is
the control movement scale: for the same movement scale,
the appropriate control gain depends on the display size.
In summary, we have 1) introduced a new task paradigm to
study scale effects; 2) contributed to the literature on scale effects,
confirming the U-shaped function and rejecting the straight-line
and no-effect hypotheses; 3) improved the understanding
of the steering law; and 4) provided guidelines for practical input
device design and selection issues.
ACKNOWLEDGMENTS
We would like to thank Frédéric Lepied of the XFree86 team
for his advice on the Wacom tablet configuration.
--R
Beyond Fitts' law: models for trajectory
Performance evaluation of input devices in trajectory-based tasks: an application of the steering law
Is display/control gain a useful metric for optimizing an interface?
Performance differences in the fingers
Engineering data com- pendium: Human perception and performance
Motor performance in relation to control-display gain and target width
The information capacity of the human motor system in controlling the amplitude of move- ment
Controller design: Interactions of controlling limbs
Difficulty and scale as the basic dimensions of aimed movement.
Nonadjectival rating scales in human response experiments.
Powermice and user per- formance
An investigation of Fitts' law using a wide range of movement amplitudes.
Fitts' law as a research and design tool in human-computer interaction
Human Factors Design Handbook.
The influence of muscle groups on performance of multiple degree-of- freedom input
--TR
Powermice and user performance
Is display/control gain a useful metric for optimizing an interface?
The influence of muscle groups on performance of multiple degree-of-freedom input
Beyond Fitts'' law
Performance differences in the fingers, wrist, and forearm in computer input control
Performance evaluation of input devices in trajectory-based tasks
--CTR
Mary Czerwinski, Humans in human-computer interaction, The human-computer interaction handbook: fundamentals, evolving technologies and emerging applications, Lawrence Erlbaum Associates, Inc., Mahwah, NJ, 2002
Robert Pastel, Measuring the difficulty of steering through corners, Proceedings of the SIGCHI conference on Human Factors in computing systems, April 22-27, 2006, Montréal, Québec, Canada
Sergey Kulikov , I. Scott MacKenzie , Wolfgang Stuerzlinger, Measuring the effective parameters of steering motions, CHI '05 extended abstracts on Human factors in computing systems, April 02-07, 2005, Portland, OR, USA
Yves Guiard , Michel Beaudouin-Lafon, Target acquisition in multiscale electronic worlds, International Journal of Human-Computer Studies, v.61 n.6, p.875-905, December 2004
Raghavendra S. Kattinakere , Tovi Grossman , Sriram Subramanian, Modeling steering within above-the-surface interaction layers, Proceedings of the SIGCHI conference on Human factors in computing systems, April 28-May 03, 2007, San Jose, California, USA
Yves Guiard , Renaud Blanch , Michel Beaudouin-Lafon, Object pointing: a complement to bitmap pointing in GUIs, Proceedings of the 2004 conference on Graphics interface, p.9-16, May 17-19, 2004, London, Ontario, Canada
Marcelo Mortensen Wanderley , Nicola Orio, Evaluation of Input Devices for Musical Expression: Borrowing Tools from HCI, Computer Music Journal, v.26 n.3, p.62-76, Fall 2002
Tue Haste Andersen, A simple movement time model for scrolling, CHI '05 extended abstracts on Human factors in computing systems, April 02-07, 2005, Portland, OR, USA
Ken Hinckley, Input technologies and techniques, The human-computer interaction handbook: fundamentals, evolving technologies and emerging applications, Lawrence Erlbaum Associates, Inc., Mahwah, NJ, 2002
Carl Gutwin , Amy Skopik, Fisheyes are good for large steering tasks, Proceedings of the SIGCHI conference on Human factors in computing systems, April 05-10, 2003, Ft. Lauderdale, Florida, USA
David Ahlström, Modeling and improving selection in cascading pull-down menus using Fitts' law, the steering law and force fields, Proceedings of the SIGCHI conference on Human factors in computing systems, April 02-07, 2005, Portland, Oregon, USA
Taher Amer , Andy Cockburn , Richard Green , Grant Odgers, Evaluating swiftpoint as a mobile device for direct manipulation input, Proceedings of the eight Australasian conference on User interface, p.63-70, January 30-February 02, 2007, Ballarat, Victoria, Australia
Shumin Zhai , Johnny Accot , Rogier Woltjer, Human action laws in electronic virtual worlds: an empirical study of path steering performance in VR, Presence: Teleoperators and Virtual Environments, v.13 n.2, p.113-127, April 2004
Xiang Cao , Shumin Zhai, Modeling human performance of pen stroke gestures, Proceedings of the SIGCHI conference on Human factors in computing systems, April 28-May 03, 2007, San Jose, California, USA | input device;movement scale;C-D ratio;joints;elbow;motor control;finger;control gain;steering law;wrist;device size |
365035 | Empirically validated web page design metrics. | A quantitative analysis of a large collection of expert-rated web sites reveals that page-level metrics can accurately predict if a site will be highly rated. The analysis also provides empirical evidence that important metrics, including page composition, page formatting, and overall page characteristics, differ among web site categories such as education, community, living, and finance. These results provide an empirical foundation for web site design guidelines and also suggest which metrics can be most important for evaluation via user studies. | INTRODUCTION
There is currently much debate about what constitutes good
web site design [19, 21]. Many detailed usability guidelines
have been developed for both general user interfaces
and for web page design [6, 16]. However, designers have
historically experienced difficulties following design guidelines
[2, 7, 15, 24]. Guidelines are often stated at such a
high level that it is unclear how to operationalize them. A
typical example can be found in Fleming's book [10] which
suggests ten principles of successful navigation design in-
cluding: be easily learned, remain consistent, provide feed-
back, provide clear visual messages, and support users' goals
and behaviors. Fleming also suggests differentiating design
among sites intended for community, learning, information,
shopping, identity, and entertainment. Although these goals
align well with common sense, they are not justified with
empirical evidence and are mute on actual implementation.
Other web-based guidelines are more straightforward to im-
plement. For example, Jakob Nielsen's alertbox column [18]
of May 1996 (updated in 1999) claims that the top ten mistakes
of web site design include using frames, long pages,
non-standard link colors, and overly long download times.
These are based on anecdotal observational evidence. Another
column (March 15, 1997) provides guidelines on how
to write for the web, asserting that since users scan web pages
rather than read them, web page design should aid scannabil-
ity by using headlines, using colored text for emphasis, and
using 50% less text (less than what is not stated) since it is
more difficult to read on the screen than on paper. Although
reasonable, guidelines like these are not usually supported
with empirical evidence.
Furthermore, there is no general agreement about which web
design guidelines are correct. A recent survey of 21 web
guidelines found little consistency among them [21]. We
suspect this might result from the fact that there is a lack
of empirical validation for such guidelines.
Surprisingly, no studies have derived web design guidelines
directly from web sites that have been assessed by human
judges. In this paper we report the results of empirical analyses
of the page-level elements on a large collection of expert-
reviewed web sites. These metrics concern page composition
(e.g., word count, link count, graphic count), page formatting
(e.g., emphasized text, text positioning, and text clusters),
and overall page characteristics (e.g., page size and down-load
speed). The results of this analysis allow us to predict
with 65% accuracy if a web page will be assigned a very high
or a very low rating by human judges. Even more interest-
ingly, if we constrain predictions to be among pages within
categories such as education, community, living, and finance,
the prediction accuracy increases to 80% on average.
The remainder of this paper describes related work, our method-
ology, including the judged web dataset, the metrics, and the
data collection process; the results of the study in detail, and
finally our conclusions.
RELATED WORK
Most quantitative methods for evaluating web sites focus on
statistical analysis of usage patterns in server logs [5, 8, 11,
12, 26, 27]. Traffic-based analysis (e.g., pages-per-visitor or
visitors-per-page) and time-based analysis (e.g., click paths
and page-view durations) provide data that the evaluator must
interpret in order to identify usability problems. This analysis
is largely inconclusive since web server logs provide incomplete
traces of user behavior, and because timing estimates
may be skewed by network latencies.
Other approaches assess static HTML according to a number
of pre-determined guidelines, such as whether all graphics
contain ALT attributes [4, 22]. Other techniques compare
quantitative web page measures - such as the number of links
or graphics - to thresholds [25, 27, 28]. However, concrete
thresholds for a wider class of quantitative web page measures
still remain to be established; our work is a first step
towards this end.
The Design Advisor [9] uses heuristics about the attentional
effects of various elements, such as motion, size, images, and
color, to determine and superimpose a scanning path on a
web page. The author developed heuristics based on empirical
results from eye tracking studies of multimedia presen-
tations. However, the heuristics have not been validated for
web pages.
Simulation has also been used for web site evaluation. For
example, WebCriteria's Site Profile [29] attempts to mimic
a user's information-seeking behavior within a model of an
implemented site. This tool uses an idealized user model that
follows an explicit, pre-specified navigation path through the
site and estimates several metrics, such as page load and optimal
navigation times. As another example, Chi, Pirolli, and
Pitkow [5] have developed a simulation approach for generating
navigation paths for a site based on content similarity
among pages, server log data, and linking structure. The
simulation models hypothetical users traversing the site from
specified start pages, making use of information "scent" (i.e.,
common keywords between the user's goal and content on
linked pages) to make navigation decisions. Neither of these
approaches account for the impact of various web page at-
tributes, such as the amount of text or layout of links.
Brajnik [3] surveyed 11 automated web site analysis meth-
ods, including the previously mentioned static analysis tools
and WebCriteria's Site Profile. The survey revealed that these
tools address only a sparse set of usability features, such as
download time, presence of alternative text for images, and
validation of HTML and links. Other usability aspects, such
as consistency and information organization are unaddressed
by existing tools.
Zhu and Gauch [30] gathered web site quality ratings criteria
from a set of expert sites, including Internet Scout, Lycos Top
5%, Argus Clearinghouse, and the Internet Public Library.
For each site they computed web page currency, availability,
authority, popularity, cohesiveness, and information-to-noise
ratio. This last metric is the only one related to the kind of
metrics discussed below, and is computed as the number of
bytes taken up by words divided by the total number of bytes
in the page; in essence a word percentage measure. The authors
assessed these metrics in terms of how well they aided
in various information retrieval tasks, finding that weighted
combinations of metrics improved search over text content
Figure 1: Histogram of the overall scores assigned to the sites considered for the 2000 Webby Awards. The x axis is the overall score and the y axis is the number of sites assigned this score.
alone. They did not relate these metrics to web site usability
or attempt to predict the judges ratings outside the context of
search.
The most closely related work is our earlier study [13] in
which we reported a preliminary analysis of a collection of
428 web pages. Each page corresponded to a site that had
either been highly rated by experts, or had no rating. The expertise
ratings were derived from a variety of sources, such
as PC Magazine Top 100, WiseCat's Top 100, and the final
nominees for the Webby Awards. For each web page,
we captured 12 quantitative measures having to do with page
composition, layout, amount of information, and size (e.g.,
number of words, links, and colors). We found that 6 metrics
- text cluster count, link count, page size, graphics count,
color count, and reading complexity - were significantly associated
with rated sites. Additionally, we found 2 strong pair-wise
correlations for rated sites, and 5 pairwise correlations
for unrated sites. Our predictions about how the pairwise correlations
were manifested in the layout of the rated and unrated
sites' pages were supported by inspection of randomly
selected pages. A linear discriminant classifier applied to the
page types (rated versus unrated) achieved a predictive accuracy
of 63%.
The work reported in this paper expands on that preliminary
analysis in several ways. First, rather than comparing highly
rated sites to unrated sites, we are comparing sites that have
been rated on a single scale, and according to several mea-
sures, by one set of judges. Second, the sites within this
dataset have been classified into topics (such as financial, ed-
ucational, community), thus allowing us to see if preferred
values for metrics vary according to type of category. Fi-
nally, informed by the results of our preliminary study, we
have improved our metrics and analyze a larger number of
web pages. This work further validates our preliminary analysis.
This study computes quantitative web page attributes (e.g.,
number of fonts, images, and words) from web pages evaluated
for the 2000 Webby Awards [20]. The Webby organizers
place web sites into 27 categories, including news, per-
sonal, finance, services, sports, fashion, and travel. A panel
of over 100 judges from The International Academy of Digital
Arts & Sciences use a rigorous evaluation process to select
winning sites. 1 Webby organizers describe the judge selection
criteria as follows: "Site Reviewers are Internet professionals
who work with and on the Internet. They have
clearly demonstrable familiarity with the category in which
they review and have been individually required to produce
evidence of such expertise. The site reviewers are given different
sites in their category for review and they are all prohibited
from reviewing any site with which they have any
personal or professional affiliation. The Academy regularly
inspects the work of each reviewer for fairness and accuracy."
Judges rate web sites based on six criteria: content, structure
& navigation, visual design, functionality, interactivity, and
overall experience. Figure 1 shows the distribution of the
overall criterion across all of the judged sites. We suspected
that the six criteria were highly correlated, suggesting that
there was one factor underlying them all. To test this hypoth-
esis, we used a principal components analysis to examine the
underlying factor structure. The first factor accounted for
91% of the variance in the six criteria. In the experiments reported
below, we used both the overall Webby score and the
extracted factor for doing discriminant classification.
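A minimal sketch of the factor extraction described above, assuming the six judge criteria are available as columns of a score matrix (the values below are placeholders); with a standard PCA, the variance ratio of the first component plays the role of the 91% figure reported.

# Principal components analysis over the six Webby criteria (placeholder scores).
import numpy as np
from sklearn.decomposition import PCA

# rows = sites; columns = content, structure & navigation, visual design,
# functionality, interactivity, overall experience (hypothetical 1-10 ratings)
scores = np.array([
    [8.1, 7.9, 8.3, 8.0, 7.8, 8.2],
    [5.2, 5.5, 5.0, 5.3, 5.1, 5.2],
    [6.7, 6.4, 6.9, 6.6, 6.5, 6.6],
    [4.1, 4.3, 4.0, 4.4, 4.2, 4.1],
])

pca = PCA(n_components=1)
webby_factor = pca.fit_transform(scores)        # one extracted factor score per site
print("variance explained by first factor:", pca.explained_variance_ratio_[0])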
For our study, we selected sites from six topical categories -
financial, educational, community, health, service, and living
- because these categories contained at least 100 information-centric
sites (i.e., sites in which the primary goal is to convey
information about some topic). We used the overall score
to define two groups of sites for analysis: good (top 33% of
sites), and not-good (remaining 67% of sites). Specifically,
we wanted to determine if there are significant differences
between the groups - both overall and within each category.
Furthermore, we wanted to construct models for predicting
group membership. These models would enable us to establish
concrete thresholds for each metric, evaluate them with
user studies, and eventually provide guidance for design im-
provement. We also used the composite rating to group sites
into two categories: top 33% of sites, and bottom 33% of
sites. The cutoffs for both sets, based on the overall criterion
(ranging from 1 to 10) are:
Webby Awards judging has three rounds. The data used in this study
are derived from the first round of judging; only the list of nominees for the
last round is available to the public. Throughout this paper, we assume a
score assigned to a site applies uniformly to all the pages within that site.
Community Education Finance
Top 6.97 6.58 6.47 6.6
Bottom 5.47 5.66 5.38 5.8
Health Living Services
Top 7.9 6.66 7.54
Bottom 6.4 5.66 5.9
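A sketch of how per-category cutoffs of this kind can be derived from the overall scores, assuming the scores are grouped by category; the values below are placeholders, and the study's own cutoffs are the ones tabulated above.

# Derive top-33% / bottom-33% cutoffs of the overall score within each category.
import numpy as np

scores_by_category = {                 # placeholder overall scores per category
    "community": [4.5, 5.2, 5.9, 6.3, 6.8, 7.1, 7.6],
    "finance":   [4.0, 4.9, 5.4, 6.0, 6.5, 7.0],
}

for category, scores in scores_by_category.items():
    top_cut = np.percentile(scores, 100 * 2 / 3)   # sites at or above this are "top 33%"
    bottom_cut = np.percentile(scores, 100 / 3)    # sites at or below this are "bottom 33%"
    print(f"{category}: top cutoff {top_cut:.2f}, bottom cutoff {bottom_cut:.2f}")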
The following section introduces the metrics and describes
the data collected for this analysis.
Web Page Metrics
From a list of 42 web page attributes associated with effective
design and usability [13], we developed an automated tool to
compute the 11 metrics that we focus on in this study (see
Table
1). (This subset was chosen primarily because it was
the easiest to compute; we are in the process of extending the
tool to compute a wider range of metrics.) The tool functions
similarly to the Netscape Navigator browser in processing
web pages and cascading stylesheets; it has limited support
for inline frames, but does not support framesets, applets,
scripts or other embedded objects. We analyzed the accuracy
of the computed metrics using a set of 5 pages with widely
different features, such as use of stylesheets, style tags, and
forms. Overall, the metrics are about 85% accurate, with the text
cluster and text positioning counts ranging from 38% to 74%
accuracy.
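For illustration, a few of the simpler metrics in Table 1 below can be approximated from static HTML with an off-the-shelf parser. This is only a sketch, not the study's tool: it ignores stylesheets, frames, scripts and inline styling, and BeautifulSoup is simply a convenient stand-in for whatever parser is used.

# Approximate a few page-level metrics (word count, links, graphics, page size) from raw HTML.
from bs4 import BeautifulSoup  # stand-in parser; any HTML parser would do

def simple_page_metrics(html):
    soup = BeautifulSoup(html, "html.parser")
    words = soup.get_text(separator=" ").split()
    return {
        "word_count": len(words),
        "link_count": len(soup.find_all("a")),
        "graphics_count": len(soup.find_all("img")),
        "page_size_bytes": len(html.encode("utf-8")),
    }

html = ("<html><body><h1>Title</h1><p>Some body text with a "
        "<a href='#'>link</a> and an image <img src='x.gif'></p></body></html>")
print(simple_page_metrics(html))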
Data Collection
We used the metrics tool to collect data for 1,898 pages from
the six Webby Awards categories. These pages are from 163
sites and from 3 different levels in the site - the home page,
pages directly accessible from the home page (level 1), and
pages accessible from level 1 but not directly accessible from
the home page (level 2). We attempted to capture 15 level 1
pages and 45 level 2 pages from each site. Because not every
website has many pages at each level, our collection consists
of an average of 11 pages per site.
We employed several statistical techniques, including linear
regression, linear discriminant analysis, and t-tests
for equality of means, to examine differences between the
good and not-good groups. The following sections discuss
the findings in detail.
Distinguishing Good Pages
We used Linear Discriminant analysis to discriminate good
from not-good pages, and top from bottom pages. This technique
is suitable for cases where the predicted variable is
dichotomous in nature. We built two predictive models for
identifying good webpages using linear discriminant analysis
. Model 1: A simple, conservative model that distinguishes
"good" (top 33%) from "not good" (bottom 67%) websites,
using the overall Webby criterion as the predictor.
Metric Description
Word Count Total words on a page
Body Text % Percentage of words that are body vs. display text (i.e., headers)
Emphasized Body Text % Portion of body text that is emphasized (e.g., bold, capitalized or near !'s)
Text Positioning Count Changes in text position from flush left
Text Cluster Count Text areas highlighted with color, bordered regions, rules or lists
Link Count Total links on a page
Page Size Total bytes for the page as well as elements such as graphics and stylesheets
Graphic % Percentage of page bytes that are for graphics
Graphics Count Total graphics on a page (not including graphics specified in scripts, applets and objects)
Color Count Total colors employed
Font Count Total fonts employed (i.e., face + size + bold + italic)
Table 1: Web page metrics computed for this study.
. Model 2: A more complex model that uses the Webby factor
and distinguishes "top" (top 33%) from "bottom" (bot-
tom 33%) pages.
Tables
2 and 3 summarize the accuracy of the predictions
for both models for the entire sample as well as within each
category. We report the Wilks Lambda along with the associated
Chi-square for each of the models; all of the discriminant
functions have significant Wilks Lambda. The squared
canonical correlation indicates the percentage of variance in
the metrics accounted for by the discriminant function. The
final and most important test for the model is the classification
accuracy.
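A sketch of the discriminant-analysis step, using a standard linear discriminant classifier on a placeholder metrics matrix; in the study itself the features would be the 11 metrics of Table 1 and the class label the good / not-good grouping.

# Linear discriminant analysis for good vs. not-good pages (placeholder data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.array([   # rows = pages, columns = a few placeholder metric values
    [120, 30, 25000, 6, 4],
    [900, 55, 78000, 8, 7],
    [ 60, 12, 33000, 5, 3],
    [450, 40, 52000, 7, 5],
    [200, 35, 45000, 7, 5],
    [700, 20, 90000, 6, 6],
])
y = np.array([1, 1, 0, 1, 0, 0])   # 1 = good (top 33%), 0 = not good

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training classification accuracy:", lda.score(X, y))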
For Model 1, the overall accuracy is 67% (50.4% and 78.4%
for good and not-good pages, respectively) if categories are
not taken into account (see Table 2). Classification accuracy
is higher on average when categories are assessed separately
(70.7% for good pages and 77% for not-good pages). Our
earlier results [13] achieved 63% overall accuracy but had
a smaller sample size, did not have separation into category
types, and had to distinguish between rated sites versus non-
rated sites, meaning that good sites may have been included
among the non-rated sites.
Interestingly, the average percentage of variance explained
within categories is more than double the variance explained
across the dataset. The health category has the highest
percentage of variance explained and also has the highest
classification accuracy of 89% (80.9% and 94.6% for good
and not-good pages, respectively). The accuracy for this
model is indicative of the predictive power of this approach.
In the future we plan to use more metrics and a larger dataset
in our analysis.
The model with the smallest percentage of variance explained
(20% for the living category) is also the model with the lowest
classification accuracy of 55% (47.4% and 62.3% for good
and not-good pages, respectively). We partially attribute this
lower accuracy to a smaller sample size; there are only 118
pages in this category.
The results for Model 2 are shown in Table 3. The average
category accuracy increases to 73.8% for predicting the top
pages and 86.6% for predicting the bottom pages. (This prediction
does not comment on intermediate pages, however.)
The higher accuracy is caused both by the relatively larger
differences between top and bottom pages (as opposed to top
versus the rest) and by the use of the Webby factor.
In related work [23] analyzing the Webby Award criteria in
detail, we found that the content criterion was the best predictor
of the overall score, while visual design was a weak
predictor at best. Here we see that the metrics are able to
better predict the Webby factor than the overall score. We
think this happens because the overall criterion is an abstract
judgement of site quality, while the Webby factor (consisting
of contributions from content, structure & navigation, visual
design, functionality, and interactivity, as well as overall ratings)
reflects aspects of the specific criteria which are more
easily captured by the metrics.
The Role of Individual Metrics
To gain insight about predictor metrics in these categories,
we also employed multiple linear regression analysis to predict
the overall Webby scores. We used a backward elimination
method wherein all of the metrics are entered into
the equation initially, and then one by one, the least predictive
metric is eliminated. This process is repeated until the
Adjusted R Square shows a significant reduction with the
elimination of a predictor. Table 4 shows the details of the
analysis. The adjusted R^2 for all of the regression analyses
was significant at the .01 level, meaning that the metrics explained
about 10% of the variance in the overall score for the
whole dataset. This indicates that a linear combination of our
metrics could significantly predict the overall score.
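The backward-elimination procedure can be sketched with an ordinary least squares fit: start from all metrics, repeatedly drop the predictor that contributes least, and stop before the fit degrades. The data are placeholders, and the p-value stopping rule below is a simple stand-in for the adjusted R-square criterion described above.

# Backward elimination for predicting the overall score (placeholder data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))                   # 4 placeholder metrics, 40 pages
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=40)

cols = list(range(X.shape[1]))
while True:
    model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    pvals = np.asarray(model.pvalues)[1:]      # skip the intercept term
    worst = int(np.argmax(pvals))
    if pvals[worst] < 0.05 or len(cols) == 1:  # stop when all remaining predictors matter
        break
    del cols[worst]                            # eliminate the least predictive metric

print("retained metric indices:", cols, " adj. R^2:", round(model.rsquared_adj, 3))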
We used standardized Beta coefficients from regression equations
to determine the significance of the metrics in predicting
good vs. not-good pages. Table 5 illustrates which of the
metrics make significant contributions to predictions as well
as the nature of their contributions (positive or negative). Significant
metrics across the dataset are fairly consistent with
Squared Classification
Canonical Wilks Chi- Sample Accuracy
Category Correlation Lambda square Sig. Size Good Not-Good
Community
Education 0.26 0.74 111.52 0.000 373 69.00% 90.20%
Finance 0.37 0.63 85.61 0.000 190 63.20% 88.00%
Health 0.60 0.4 104.8 0.000 121 94.60% 80.90%
Living
Services 0.34 0.66 100.6 0.000 311 82.50% 75.80%
Cat. Avg. 70.70% 77.00%
Table
2: Classification accuracy for predicting good and not-good pages. The overall accuracy ignores category labels. Discriminant
analysis rejects some data items.
Squared Classification
Canonical Wilks Chi- Sample Accuracy
Category Correlation Lambda square Sig. Size Top Bottom
Community 0.60 0.40 275.78 0.000 305 83.2% 91.9%
Education 0.28 0.72 118.93 0.000 368 75.7% 73.2%
Finance 0.47 0.53 85.74 0.000 142 76.5% 93.4%
Health
Living 0.22 0.79 24.46 0.010 106 42.3% 75.9 %
Services 0.36 0.64 90.51 0.000 208 85.7% 74.8%
Cat. Avg. 76.07% 82.75%
Table
3: Classification accuracy for predicting the top 33% versus the bottom 33% according to the Webby factor. The overall accuracy
ignores category labels.
profiles discussed in the next section; most of the metrics
found to be individually significant play a major role in the
overall quality of pages.
Profiles of Good Pages
Word count was significantly correlated with 9 other metrics
(all but emphasized body text percentage), so we used it
to subdivide the pages into three groups, depending on their
size: low (avg. word count = 66.38), medium, and high
(avg. word count = 827.15). Partitioning
pages based on the word count metric created interesting
profiles of good versus not-good pages. In addition,
the regression score and the discriminant analysis classification
accuracy increase somewhat when the dataset is divided in
this manner; Model 1 is most accurate for pages that fall into
the medium-size group.
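The partitioning and the within-group comparisons reported in Table 6 can be sketched as follows, assuming one row per page with a word count, a good/not-good label, and a metric of interest; all values below are placeholders.

# Partition pages by word count and compare good vs. not-good within each group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
word_count = rng.integers(20, 1200, size=300)        # placeholder per-page word counts
good = rng.integers(0, 2, size=300).astype(bool)     # placeholder good / not-good labels
graphic_pct = rng.uniform(0, 80, size=300)           # placeholder metric (graphic %)

low_cut, high_cut = np.percentile(word_count, [33.3, 66.7])
group = np.where(word_count <= low_cut, "low",
                 np.where(word_count <= high_cut, "medium", "high"))

for g in ("low", "medium", "high"):
    in_g = group == g
    t, p = stats.ttest_ind(graphic_pct[in_g & good], graphic_pct[in_g & ~good])
    print(f"{g} word count: graphic% good vs. not-good, t={t:.2f}, p={p:.3f}")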
To develop profiles of pages based on overall ratings, we
compared the means and standard deviations of all metrics
for good and not-good pages with low, medium, and high
word counts (see Table 6). We employed t-tests for equality
of means to determine their significance and also report
2-tailed significance values. Different metrics were significant
among the different size groups, with the exception of
graphic percentage, which is significant across all groups.
The data suggests that good pages have relatively fewer graph-
ics; this is consistent with our previously discussed finding
that visual design was a weak predictor of overall rating [23].
Returning to Table 5, we see that in most cases, the positive
Adj. R Std. F Sig. Sample
Category Square Err. value Size
Community 0.36 1.76 22.52 .000 430
Education 0.16 1.53 10.34 .000 536
Finance 0.24 1.90 7.78 .000 234
Health 0.56 0.79 27.98 .000 234
Living 0.11 1.62 2.70 .000 153
Services
Table 4: Linear regression results for predicting overall rating for good and not-good pages. The F value and corresponding significance level show that the linear combination of the metrics is related to the overall rating.
or negative contribution of a metric aligns with differences
in the means of good vs. bad pages depicted in Table 6, with
the exception that page size and link count in the medium
word count category appear to have opposite contribution
than expected, since in general good pages are smaller and
have more links on average than not-good pages. Looking in
detail at Tables 5 and 6, we can create profiles of the good
pages that fall within low, medium, and high word counts:
Low Word Count. Good pages have slightly more content,
smaller page sizes, less graphics and employ more font
variations than not-good pages. The smaller page sizes and
graphics count suggests faster download times for these
Word Count Category
Metric Low Med. High Com. Edu. Fin. Hlth. Lvng Serv. Freq.
Word Count # 4
Body Text % # 3
Emp. Body Text % # 3
Text Pos. Count # 2
Link Count # 2
Page Size # 4
Graphic
Graphics Count # 2
Color Count # 3
Font Count # 4
Table
5: Significant beta coefficients for all metrics in terms of whether they make a positive (#), negative (#), or no contribution in
predicting good pages. The frequency column summarizes the number of times a metric made significant contributions within the categories.
pages (this was corroborated by a download time metric,
not discussed in detail here). Correlations between font
count and body text suggest that good pages vary fonts
used between header and body text.
Medium Word Count. Good pages emphasize less of the
body text; if too much text is emphasized, the unintended
effect occurs of making the unemphasized text stand out
more than the emphasized text. Based on text positioning and
text cluster count, medium-sized good pages appear to organize
text into clusters (e.g., lists and shaded table areas).
The negative correlations between body text and color count
suggest that good medium-sized pages use colors to distinguish
headers.
High Word Count. Large good pages exhibit a number of
differences from not-good pages. Although both groups
have comparable word counts, good pages have less body
text, suggesting pages have more headers and text links
than not-good pages (we verified this with hand-inspection
of some pages). As mentioned above, headers are thought
to improve scannability, while generous numbers of links
can facilitate information seeking provided they are meaningful
and clearly marked.
DISCUSSION
It is quite remarkable that the simple, superficial metrics used
in this study are capable of predicting experts' judgements
with some degree of accuracy. It may be the case that these
computer-accessible metrics reflect at some level the more
complex psychological principles by which the Webby judges
rate the sites. It turns out that similar results have been found
in related problems. For example, one informal study of
computer science grant proposals found that superficial features
such as font size, inclusion of a summary section, and
section numbering distinguish between proposals that are
funded and those that are not [1]. As another example, programs
can assign grades to student essays using only superficial
metrics (such as average word length, essay length, number
of commas, number of prepositions) and achieve correlations
with teachers' scores that are close to those of between-
teacher correlations [14].
There are two logical explanations for this effect; the first is
that there is a causal relationship between these metrics and
deeper aspects of information architecture. The second possibility
is that high quality in superficial attributes is generally
accompanied by high quality in all aspects of a work. In
other words, those who do a good job do a good job overall.
It may be the case that those site developers who have high-quality
content are willing to pay for professional designers
to develop the other aspects of their sites. Nevertheless, the
fact that these metrics can predict a difference between good
and not-good sites indicates that there are better and worse
ways to arrange the superficial aspects of web pages. By
reporting these, we hope to help web site developers who
cannot afford to hire professional design firms.
There is some question as to whether or not the Webby Awards
judgements are good indicators of web site usability, or whether
they assess other measures of quality. We have conducted
a task-based user study on a small subset of the web sites
within our sample, using the WAMMI Usability Questionnaire
[17]. We plan to report the results of this study in future
work.
CONCLUSIONS AND FUTURE WORK
The Webby Awards dataset is possibly the largest human-
rated corpus of web sites available. Any site that is submitted
is initially examined by three judges on six criteria. As such
it is a statistically rigorous collection. However, since the criteria
for judging are so broad, it is unclear what specific
web page components the judges actually
use for their assessments. As such it is not possible for those
who would like to look to these expert-rated sites to learn
Low Word Count Medium Word Count High Word Count
Mean & (Std. Dev.) Mean & (Std. Dev.) Mean & (Std. Dev.)
Metric G NG Sig. G NG Sig. G NG Sig.
Word Count 74.6 62.7 0.002 231.77 228.1 0.430 803.0 844.7 0.426
Body Text % 62.4 60.0 0.337 68.18 68.5 0.793 73.5 80.3 0.000
Emp. Body Text % 12.1 14.5 0.180 9.28 18.1 0.000 11.2 17.1 0.001
Text Pos. Count 1.4 1.6 0.605 4.59 3.5 0.096 6.1 7.3 0.403
Text Clus. Count 1.3 1.1 0.477 5.17 3.4 0.007 7.8 7.8 0.973
Link Count 74.6 16.0 0.202 35.68 36.2 0.764 61.1 51.7 0.019
Page Size 23041.2 32617.6 0.004 53429.98 46753.0 0.163 77877.7 50905.0 0.000
Graphic % 28.8 48.9 0.000 40.64 56.0 0.000 37.8 45.4 0.004
Graphics Count 11.4 15.0 0.005 24.88 26.2 0.451 25.3 25.8 0.835
Color Count 6.1 5.9 0.224 7.47 7.1 0.045 8.1 7.2 0.000
Font Count 3.7 3.2 0.001 5.42 5.3 0.320 6.7 6.7 0.999
Table 6: Means and standard deviations (in parentheses) for the good (G) and not-good (NG) groups based on the low, medium, and high
word count categories. The table also contains t-test results (2-tailed significance) for each profile; bold text denotes significant differences
(i.e., p < 0.05).
how to improve their own designs to derive value from these
results. We hope that the type of analysis that we present
here opens the way towards a new, bottom-up methodology
for creating empirically justified, reproducible interface design
recommendations, heuristics, and guidelines.
We are developing a prototype analysis tool that will enable
designers to compare their pages to profiles of good pages
in each subject category. However, the lack of agreement
over guidelines suggests there is no one path to good design;
good web page design might be due to a combination of a
number of metrics. For example, it is possible that some
good pages use many text clusters, many links, and many
colors. Another good design profile might make use of less
text, proportionally fewer colors, and more graphics. Both
might be equally valid paths to the same end: good web page
design. Thus we do not plan to simply present a rating, nor do
we plan on stating that a given metric exceeds a cutoff point.
Rather, we plan to develop a set of profiles of good designs
for each category, and show how the designer's pages differ
from the various profiles.
It is important to keep in mind that metrics of the type explored
here are only one piece of the web site design puzzle;
this work is part of a larger project whose goals are to develop
techniques to empirically investigate all aspects of web
site design, and to develop tools to help designers assess and
improve the quality of their web sites.
ACKNOWLEDGMENTS
This research was supported by a Hellman Faculty Fund Award, a
Gates Millennium Fellowship, a GAANN fellowship, and a Lucent
Cooperative Research Fellowship Program grant. We thank Maya
Draisin and Tiffany Shlain at the International Academy of Digital
Arts & Sciences for making the Webby Awards 2000 data available;
Nigel Claridge and Jurek Kirakowski of WAMMI for agreeing to
analyze our user study data; and Tom Phelps for his assistance with
the extended metrics tool.
--R
Does typography affect proposal as- sessment? Communications of the ACM
Automatic web usability evaluation: Where is the limit?
Building usable web pages: An HCI per- spective
The use of guidelines in menu interface design: Evaluation of a draft stan- dard
Using web server logs to improve site design.
Visually critiquing web pages.
Web Navigation: Designing the User Experience.
Measuring user motivation from server log files.
Understanding patterns of user visits to web sites: Interactive starfield visualizations of WWW log data.
Preliminary findings on quantitative measures for distinguishing highly rated information-centric web pages
Beyond automated essay scoring.
Web Style Guide: Basic Design Principles for Creating Web Sites.
WAMMI web usability qusetion- naire
The alertbox: Current issues in web usability.
Designing Web Usability: The Practice of Simplicity.
The International Academy of Arts and Sciences.
Characterization and assessment of HTML style guides.
Developing usability tools and techniques for designing and testing web sites.
Content or graphics?
Standards versus guidelines for designing user interface software.
The rating game.
Reading reader reaction: A proposal for inferential analysis of web server log files.
Authoring tools: Towards continuous usability testing of web documents.
A tool for systematic web authoring.
Web Criteria.
Incorporating quality metrics in centralized/distributed information retrieval on the world wide web.
--TR
Knowledge-based evaluation as design support for graphical user interfaces
Characterization and assessment of HTML style guides
Guidelines for designing usable World Wide Web pages
Gentler
Web navigation
Using Web server logs to improve site design
The scent of a site
On Site: does typography affect proposal assessment?
Incorporating quality metrics in centralized/distributed information retrieval on the World Wide Web
Designing Web Usability
Web Style Guide
The use of guidelines in menu interface design
--CTR
Christopher C. Whitehead, Evaluating web page and web site usability, Proceedings of the 44th annual southeast regional conference, March 10-12, 2006, Melbourne, Florida
Eleni Michailidou, ViCRAM: visual complexity rankings and accessibility metrics, ACM SIGACCESS Accessibility and Computing
Hartmut Obendorf , Harald Weinreich , Torsten Hass, Automatic support for web user studies with SCONE and TEA, CHI '04 extended abstracts on Human factors in computing systems, April 24-29, 2004, Vienna, Austria
Janice (Ginny) Redish , Randolph G. Bias , Robert Bailey , Rolf Molich , Joe Dumas , Jared M. Spool, Usability in practice: formative usability evaluations - evolution and revolution, CHI '02 extended abstracts on Human factors in computing systems, April 20-25, 2002, Minneapolis, Minnesota, USA
Tetsuya Yoshida , Masato Watanabe , Shogo Nishida, An image based design support system for web page design, International Journal of Knowledge-based and Intelligent Engineering Systems, v.10 n.3, p.201-212, July 2006
Melody Y. Ivory , Marti A. Hearst, Statistical profiles of highly-rated web sites, Proceedings of the SIGCHI conference on Human factors in computing systems: Changing our world, changing ourselves, April 20-25, 2002, Minneapolis, Minnesota, USA
Melody Y. Ivory , Marti A. Hearst, Improving Web Site Design, IEEE Internet Computing, v.6 n.2, p.56-63, March 2002
Tamara Sumner , Michael Khoo , Mimi Recker , Mary Marlino, Understanding educator perceptions of "quality" in digital libraries, Proceedings of the 3rd ACM/IEEE-CS joint conference on Digital libraries, May 27-31, 2003, Houston, Texas
Ron Perkins, Remote usability evaluations using the internet, Proceedings of the 1st European UPA conference on European usability professionals association conference, p.79-86, September 02-06, 2002, London, UK
Ed H. Chi , Adam Rosien , Gesara Supattanasiri , Amanda Williams , Christiaan Royer , Celia Chow , Erica Robles , Brinda Dalal , Julie Chen , Steve Cousins, The bloodhound project: automating discovery of web usability issues using the InfoScent simulator, Proceedings of the SIGCHI conference on Human factors in computing systems, April 05-10, 2003, Ft. Lauderdale, Florida, USA
Sibylle Steinau , Oscar Díaz , Juan J. Rodríguez , Felipe Ibáñez, A tool for assessing the consistency of websites, Enterprise information systems IV, Kluwer Academic Publishers, Hingham, MA,
Anastasios Tombros , Ian Ruthven , Joemon M. Jose, How users assess web pages for information seeking, Journal of the American Society for Information Science and Technology, v.56 n.4, p.327-344, 15 February 2005
Elaine G. Toms , Adam R. Taves, Measuring user perceptions of web site reputation, Information Processing and Management: an International Journal, v.40 n.2, p.291-317, March 2004
Weiyin Hong , James Y. L. Thong , Kar Yan Tam, Designing product listing pages on e-commerce websites: an examination of presentation mode and information format, International Journal of Human-Computer Studies, v.61 n.4, p.481-503, October 2004
Surendra N. Singh , Nikunj Dalal , Nancy Spears, Understanding web home page perception, European Journal of Information Systems, v.14 n.3, p.288-302, September 2005
Aline Chevalier , Melody Y. Ivory, Web site designs: influences of designer's expertise and design constraints, International Journal of Human-Computer Studies, v.58 n.1, p.57-87, January
Melody Y. Ivory , Rodrick Megraw, Evolution of web site design patterns, ACM Transactions on Information Systems (TOIS), v.23 n.4, p.463-497, October 2005
Ralf Gitzel , Axel Korthaus , Martin Schader, Using established Web Engineering knowledge in model-driven approaches, Science of Computer Programming, v.66 n.2, p.105-124, April, 2007
Melody Y. Ivory , Marti A Hearst, The state of the art in automating usability evaluation of user interfaces, ACM Computing Surveys (CSUR), v.33 n.4, p.470-516, December 2001 | World Wide Web;automated usability evaluation;empirical studies;web site design |
365334 | Optimal Reward-Based Scheduling for Periodic Real-Time Tasks. | AbstractReward-based scheduling refers to the problem in which there is a reward associated with the execution of a task. In our framework, each real-time task comprises a mandatory and an optional part. The mandatory part must complete before the task's deadline, while a nondecreasing reward function is associated with the execution of the optional part, which can be interrupted at any time. Imprecise computation and Increased-Reward-with-Increased-Service models fall within the scope of this framework. In this paper, we address the reward-based scheduling problem for periodic tasks. An optimal schedule is one where mandatory parts complete in a timely manner and the weighted average reward is maximized. For linear and concave reward functions, which are most common, we 1) show the existence of an optimal schedule where the optional service time of a task is constant at every instance and 2) show how to efficiently compute this service time. We also prove the optimality of Rate Monotonic Scheduling (with harmonic periods), Earliest Deadline First, and Least Laxity First policies for the case of uniprocessors when used with the optimal service times we computed. Moreover, we extend our result by showing that any policy which can fully utilize all the processors is also optimal for the multiprocessor periodic reward-based scheduling. To show that our optimal solution is pushing the limits of reward-based scheduling, we further prove that, when the reward functions are convex, the problem becomes NP-Hard. Our static optimal solution, besides providing considerable reward improvements over the previous suboptimal strategies, also has a major practical benefit: Run-time overhead is eliminated and existing scheduling disciplines may be used without modification with the computed optimal service times. | Introduction
In a real-time system each task must complete and produce correct output by the specified deadline.
However, if the system is overloaded it is not possible to meet each deadline. In the past, several
techniques have been introduced by the research community regarding the appropriate strategy to use
in overloaded systems of periodic real-time tasks.
One class of approaches focuses on providing somewhat less stringent guarantees for temporal con-
straints. In [16], some instances of a task are allowed to be skipped entirely. The skip factor determines
how often instances of a given task may be left unexecuted. A best effort strategy is introduced in
[11], aiming at meeting k deadlines out of n instances of a given task. This framework is also known
as (n,k)-firm deadlines scheme. Bernat and Burns present in [3] a hybrid and improved approach to
provide hard real-time guarantees to k out of n consecutive instances of a task.
The techniques mentioned above tacitly assume that a task's output is of no value if it is not executed
completely. However, in many application areas such as multimedia applications [26], image and speech
processing [5, 6, 9, 28], time-dependent planning [4], robot control/navigation systems [12, 30], medical
decision making [13], information gathering [10], real-time heuristic search [17] and database query
processing [29], a partial or approximate but timely result is usually acceptable.
The imprecise computation [7, 19, 21] and IRIS (Increased Reward with Increased Service) [14, 15, 18]
models were proposed to enhance the resource utilization and graceful degradation of real-time systems
when compared with hard real-time environments where worst-case guarantees must be provided. In
these models, every real-time task is composed of a mandatory part and an optional part. The former
should be completed by the task's deadline to provide output of acceptable (minimal) quality. The
optional part is to be executed after the mandatory part while still before the deadline, if there are
enough resources in the system that are not committed to running mandatory parts for any task. The
longer the optional part executes, the better the quality of the result (the higher the reward).
The algorithms proposed for imprecise computation applications concentrate on a model that has
an upper bound on the execution time that could be assigned to the optional part [7, 21, 27]. The
aim is usually to minimize the (weighted) sum of errors. Several efficient algorithms are proposed to
solve optimally the scheduling problem of aperiodic imprecise computation tasks [21, 27]. A common
assumption in these studies is that the quality of the results produced is a linear function of the precision;
consequently, the possibility of having more general error functions is usually not addressed.
An alternative model allows tasks to get increasing reward with increasing service (IRIS model)
[14, 15, 18] without an upper bound on the execution times of the tasks (though the deadline of the
task is an implicit upper bound) and without the separation between mandatory and optional parts [14].
A task executes for as long as the scheduler allows before its deadline. Typically, a nondecreasing concave
reward function is associated with each task's execution time. In [14, 15] the problem of maximizing the
total reward in a system of aperiodic independent tasks is addressed. The optimal solution with static
task sets is presented, as well as two extensions that include mandatory parts and policies for dynamic
task arrivals.
Note that the imprecise computation and IRIS models are closely related, since the performance
metrics can be defined as duals (maximizing the total reward vs. minimizing the total error). Similarly,
a concave reward function corresponds to a convex error function, and vice versa. We use the term
"Reward-based scheduling" to encompass scheduling frameworks, including Imprecise Computation and
IRIS models, where each task can be logically decomposed into a mandatory and optional subtask. A
nondecreasing reward function is associated with the execution of each optional part.
An interesting question concerns the types of reward functions that represent realistic application
areas. A linear reward function [19, 21] models the case where the benefit to the overall system increases
uniformly during the optional execution. Similarly, a concave reward function [14, 15, 18, 26] addresses
the case where the greatest increase/refinement in the output quality is obtained during the first portions
of optional executions. Linear and general concave functions are considered as the most realistic and
typical in the literature since they adequately capture the behavior of many application areas like
image and speech processing [5, 6, 9, 28], multimedia applications [26], time-dependent planning [4],
robot control/navigation systems [30], real-time heuristic search [17], information gathering [10] and
database query processing [29]. In this paper, we show that the case of convex reward functions is an
NP-Hard problem and thus focus on linear and concave reward functions. Reward functions with 0/1
constraints, where no reward is accrued unless the entire optional part is executed, as well as step reward functions,
have also received some interest in the literature. Unfortunately, this problem has been shown to be
NP-Complete in [27].
Periodic reward-based scheduling remains relatively unexplored, since the important work of Chung,
Liu and Lin [7]. In that paper, the authors classified the possible application areas as "error non-
cumulative" and "error cumulative". In the former, errors (or optional parts left unexecuted) have
no effect on the future instances of the same task. Well-known examples of this category are tasks
which receive, process and transmit periodically audio, video or compressed images [5, 6, 9, 26, 28]
and information retrieval tasks [10, 29]. In "error cumulative" applications, such as radar tracking, an
optional instance must be executed completely at every (predetermined) k invocations. The authors
further proved that the case of error-cumulative jobs is an NP-Complete problem. In this paper, we
restrict ourselves to error non-cumulative applications.
Recently, a QoS-based resource allocation model (QRAM) has been proposed for periodic applications
[26]. In that study, the problem is to optimally allocate several resources to the various applications
such that they simultaneously meet their minimum requirements along multiple QoS dimensions and the
total system utility is maximized. In one aspect, this can be viewed as a generalization of optimal CPU
allocation problem to multiple resources and quality dimensions. Further, dependent and independent
quality dimensions are separately addressed for the first time in this work. However, a fundamental
assumption of that model is that the reward functions and resource allocations are in terms of utilization
of resources. Our work falls rather along the lines of Imprecise Computation model, where the reward
accrued has to be computed separately over all task instances and the problem is to find the optimal
service times for each instance and the optimal schedule with these assignments.
Aspects of the Periodic Reward-Based Scheduling Problem
The difficulty of finding an optimal schedule for a periodic reward-based task set has its origin on two
objectives that must be simultaneously achieved, namely:
i. Meeting deadlines of mandatory parts at every periodic task invocation.
ii. Scheduling optional parts to maximize the total (or average) reward.
These two objectives are both important, yet often incompatible. In other words, hard deadlines of
mandatory parts may require sacrificing optional parts with greatest value to the system.
The analytical treatment of the problem is complicated by the fact that, in an optimal schedule,
optional service times of a given task may vary from instance to instance which makes the framework
of classical periodic scheduling theory inapplicable. Furthermore, this fact introduces a large number
of variables in any analytical approach. Finally, by allowing nonlinear reward functions to better
characterize the optional tasks' contribution to the overall system, the optimization problem becomes
computationally harder.
In [7], Chung, Liu and Lin proposed the strategy of assigning statically higher priorities to mandatory
parts. This decision, as proven in that paper, effectively achieves the first objective mentioned above
by securing mandatory parts from the potential interference of optional parts. Optional parts are
scheduled whenever no mandatory part is ready in the system. In [7], the simulation results regarding
the performance of several policies which assign static or dynamic priorities among optional parts are
reported. We call the class of algorithms that statically assign higher priorities to mandatory parts
Mandatory-First Algorithms.
In our solution, we do not decouple the objectives of meeting the deadlines of mandatory parts and
maximizing the total (or average) reward. We formulate the periodic reward-based scheduling problem
as an optimization problem and derive an important and surprising property of the solution for the
most common (i.e., linear and concave) reward functions. Namely, we prove that there is always an
optimal schedule where optional service times of a given task do not vary from instance to instance.
This important result immediately implies that the optimality (in terms of achievable utilization) of
any policy which can fully use the processor in case of hard-real time periodic tasks also holds in the
context of reward-based scheduling (in terms of total reward) when used with these optimal service
times. Examples of such policies are RMS-h (Rate Monotonic Scheduling with harmonic periods) [20],
EDF (Earliest Deadline First) [20] and LLF (Least Laxity First) [24]. We also extend the framework to
homogeneous multiprocessor settings and prove that any policy which can fully utilize all the processors
is also optimal for scheduling periodic reward-based tasks (in terms of total reward) on multiprocessors
environments.
Following these existence proofs, we address the problem of efficiently computing optimal service
times and provide polynomial-time algorithms for linear and/or general concave reward functions. Note
that using these optimal and constant optimal service times has also important practical advantages: (a)
The runtime overhead due to the existence of mandatory/optional dichotomy and reward functions is
removed, and (b) existing RMS (with harmonic periods), EDF and LLF schedulers may be used without
any modification with these optimal assignments.
The remainder of this paper is organized as follows: In Section 2, the system model and basic
definitions are given. The main result about the optimality of any periodic policy which can fully utilize
the processor(s) is obtained in Section 3. In Section 4, we first analyze the worst-case performance
of Mandatory-First approaches. We also provide the results of experiments on a synthetic task set to
compare the performance of policies proposed in [7] against our optimal algorithm. In Section 6, we show
that the concavity assumption is also necessary for computational efficiency by proving that allowing
convex reward functions results in an NP-Hard problem. Then, we examine whether the optimality of
identical service times still holds if the model is modified by dropping some fundamental assumptions
(Section 5). We present details about the specific optimization problem that we use in Section 7. We
conclude by summarizing our contribution and discussing future work.
System Model
We first develop and present our solution for uniprocessor systems, then we show how to extend it to
the case of homogeneous multiprocessor systems.
We consider a set T of n periodic real-time tasks T_1, T_2, ..., T_n. The period of T_i is denoted by P_i,
which is also equal to the deadline of the current invocation. We refer to the j-th invocation of task T_i as T_ij.
All tasks are assumed to be independent and ready at time t = 0.
Each task T i consists of a mandatory part M i and an optional part O i . The length of the mandatory
part is denoted by m i ; each task must receive at least m i units of service time before its deadline in order
to provide output of acceptable quality. The optional part O i becomes ready for execution only when
the mandatory part M_i completes; it can then execute as long as the scheduler allows before the deadline.
Associated with the optional part of each task is a reward function R_i(t_ij), which indicates the reward
accrued by task T_ij when it receives t_ij units of service beyond its mandatory portion. R_i is of the form

R_i(t_ij) = f_i(t_ij) if 0 <= t_ij <= o_i, and R_i(t_ij) = f_i(o_i) if t_ij > o_i   (1)

where f_i is a nondecreasing, concave and continuously differentiable function over nonnegative real
numbers and o_i is the length of the entire optional part O_i. We underline that f_i is nondecreasing:
the benefit of task T_ij cannot decrease by allowing it to run longer. Notice that the reward function
R_i(t) is not necessarily differentiable at t = o_i. Note also that, in this formulation, by the time the
task's optional execution time t reaches the threshold value o_i, the reward accrued ceases to increase.
Clearly, the reward of executing an optional part O_i for an amount of time greater than o_i will be the same as
the reward for executing it for exactly o_i. Therefore, it is not beneficial to execute O_i for more than o_i
time units.
A function f(x) is concave if and only if, for all x, y and 0 <= a <= 1, f(a·x + (1 - a)·y) >= a·f(x) + (1 - a)·f(y). Geometrically, this condition means that the line joining any two points of a concave curve
may not be above the curve. Examples of concave functions are linear functions (kx + c), logarithmic
functions (ln[kx + c]), exponential decay functions (c·(1 - e^{-kx})) and k-th root functions (x^{1/k}). Note
that the first derivative of a nondecreasing concave function is nonincreasing. Having nondecreasing
concave reward functions means that while a task T i receives service beyond its mandatory portion M i ,
its reward monotonically increases. However, its rate of increase decreases or remains constant with
time. The concavity assumption implies that the early portions of an optional execution are not less
important than the later ones, which adequately captures many application areas mentioned in the
introduction. We mostly concentrate on linear and, in general, concave reward functions.
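To make these reward-function shapes concrete, here is a minimal Python sketch (with arbitrary, illustrative coefficients that are not taken from the paper) evaluating the concave families listed above; the increments shrink as t grows, as the concavity discussion predicts.

import math

# Illustrative concave, nondecreasing reward functions f_i(t) for t >= 0.
# The coefficients (k, c) are arbitrary example values.
reward_functions = {
    "linear":      lambda t, k=2.0, c=0.0: k * t + c,
    "logarithmic": lambda t, k=3.0, c=1.0: math.log(k * t + c),
    "exp_decay":   lambda t, k=0.5, c=4.0: c * (1.0 - math.exp(-k * t)),
    "kth_root":    lambda t, k=3.0: t ** (1.0 / k),
}

if __name__ == "__main__":
    for name, f in reward_functions.items():
        # Rewards grow, but with nonincreasing increments (concavity).
        values = [round(f(t), 3) for t in (0.0, 1.0, 2.0, 4.0)]
        print(f"{name:12s} f(0),f(1),f(2),f(4) = {values}")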
A schedule of periodic tasks is feasible if mandatory parts meet their deadlines at every invocation.
Given a feasible schedule of the task set T, the average reward of task T_i is defined as:

REW_i = (P_i / P) · sum_{j=1}^{P/P_i} R_i(t_ij)   (2)

where P is the hyperperiod, that is, the least common multiple of P_1, ..., P_n, and t_ij is the service time
assigned to the j-th instance of the optional part of task T_i. That is, the average reward of T_i is computed
over the number of its invocations during the hyperperiod P, in a way analogous to the definition of
average error in [7].
The average weighted reward of a feasible schedule is then given by:

REWW = sum_{i=1}^{n} w_i · REW_i   (3)

where w_i is a constant in the interval (0,1] indicating the relative importance of the optional part O_i.
(We note that the results we prove easily extend to the case in which one is interested in maximizing the total reward rather than the average reward.)
Although this is the most general formulation, it is easy to see that the weight w_i can always be
incorporated into the reward function f_i() by replacing it by w_i · f_i(). Thus, we will assume that all
weight (importance) information is already expressed in the reward function formulation and that
REWW is simply equal to sum_{i=1}^{n} REW_i.
Finally, a schedule is optimal if it is feasible and it maximizes the average weighted reward.
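As a small illustration of definitions (2) and (3), the following sketch computes the average reward of each task and the average weighted reward over one hyperperiod for a hypothetical two-task set; the reward functions, weights and service-time assignment are made-up examples.

from math import lcm, log

# Hypothetical task set: (period P_i, reward function f_i, weight w_i).
tasks = [
    {"P": 4, "f": lambda t: 2.0 * t,            "w": 1.0},
    {"P": 8, "f": lambda t: log(1.0 + 3.0 * t), "w": 0.5},
]

def average_weighted_reward(tasks, t):
    """t[i][j] = optional service time of the j-th instance of task i
    within the hyperperiod; implements REW_i and REWW from (2)-(3)."""
    P = lcm(*(task["P"] for task in tasks))          # hyperperiod
    reww = 0.0
    for i, task in enumerate(tasks):
        n_inst = P // task["P"]                      # P / P_i instances
        rew_i = sum(task["f"](t[i][j]) for j in range(n_inst)) / n_inst
        reww += task["w"] * rew_i
    return reww

# Example assignment: two instances of T_1 and one instance of T_2 in [0, 8).
print(average_weighted_reward(tasks, t=[[1.0, 1.0], [2.0]]))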
A Motivating Example:
Before describing our solution to the problem, we present a simple example which shows the performance
limitations of any Mandatory-First algorithm. Consider two tasks T_1 and T_2. Assume that the reward functions associated with the optional parts are linear, f_1(t) = k_1·t and f_2(t) = k_2·t. Furthermore, suppose that the coefficient k_2 associated with the reward accrued
by T_2 is negligible when compared to k_1, i.e., k_1 >> k_2. In this case, the "best" algorithm among
"Mandatory-First" approaches should produce the schedule shown in Figure 1.
Figure 1: A schedule produced by a Mandatory-First algorithm
Above, we assumed that the Rate Monotonic priority assignment is used whenever more than one
mandatory task is simultaneously ready, as in [7]. Yet, following other (dynamic or static) priority
schemes would not change the fact that the processor will be busy executing solely mandatory parts
until time t = 5 under any Mandatory-First approach. During the remaining idle interval [5,8], the best
algorithm would have chosen to schedule O_1 completely (which brings most benefit to the system) for 1
time unit and O_2 for 2 time units. However, an optimal algorithm would produce the schedule depicted
in Figure 2.
Figure 2: An optimal schedule

As can be seen, the optimal strategy in this case consisted of delaying the execution of M_2 in order
to be able to execute the 'valuable' O_1, while still meeting the deadlines of all mandatory parts. By
doing so, we would succeed in executing two instances of O_1, in contrast to any Mandatory-First scheme,
which can execute only one instance of O_1. Remembering that k_1 >> k_2, one can conclude that the
reward accrued by the 'best' Mandatory-First scheme may only be around half of that accrued by the
optimal one, for this example. Also, observe that in the optimal schedule, the optional execution times
of a given task did not vary from instance to instance. In the next section, we prove that this pattern
is not a mere coincidence. We further perform an analytical worst-case analysis of Mandatory-First
algorithms in Section 4.
3 Optimality of Full-Utilization Policies for Periodic Reward-Based
Scheduling
We first formalize the Periodic Reward-Based Scheduling problem. The objective is clearly finding
the t_ij values that maximize the average reward. By substituting the average reward expression
given by (2) in (3), we obtain our objective function:

maximize sum_{i=1}^{n} (P_i/P) · sum_{j=1}^{P/P_i} R_i(t_ij)   (4)

The first constraint that we must enforce is that the total processor demand of mandatory and
optional parts during the hyperperiod P may not exceed the available computing capacity, that is:

sum_{i=1}^{n} sum_{j=1}^{P/P_i} (m_i + t_ij) <= P   (5)

Note that this constraint is necessary, but by no means sufficient, for the feasibility of the task set
with given {m_i} and {t_ij} values. Next, we observe that optimal t_ij values may not be less than zero, since
negative service times do not have any physical interpretation. In addition, the service time of an
optional instance of T_i does not need to exceed the upper bound o_i of the reward function R_i(t), since the
reward accrued by T_i ceases to increase after t_ij = o_i. Hence, we obtain our second constraint set:

0 <= t_ij <= o_i   for i = 1, ..., n and j = 1, ..., P/P_i   (6)

The constraint above allows us to readily substitute f_i() for R_i() in the objective function. Finally,
we need to express the "full" feasibility constraint, including the requirement that mandatory parts
complete in a timely manner at every invocation. Note that it is sufficient to have one feasible schedule
with the involved {m_i} and optimal {t_ij} values:

There exists a feasible schedule with the {m_i} and {t_ij} values.   (7)

We express this constraint in English and not through formulas since the policy or algorithm producing
this schedule, including the optimal t_ij assignments, need not be specified at this point.
To re-capture all the constraints, the periodic reward-based scheduling problem, which we denote
by REW-PER, is to find t_ij values so as to:

maximize sum_{i=1}^{n} (P_i/P) · sum_{j=1}^{P/P_i} f_i(t_ij)
subject to sum_{i=1}^{n} sum_{j=1}^{P/P_i} (m_i + t_ij) <= P
0 <= t_ij <= o_i for all i, j
There exists a feasible schedule with the {m_i} and {t_ij} values   (7)
Before stating our main result, we underline that if sum_{i=1}^{n} (P/P_i)·m_i > P, it is not possible to schedule
the mandatory parts in a timely manner and the optimization problem has no solution. Note that this
condition is equivalent to sum_{i=1}^{n} m_i/P_i > 1, which indicates that the task set would be unschedulable even
if it consisted of only mandatory parts. Hence, hereafter, we suppose that sum_{i=1}^{n} m_i/P_i <= 1 and that
there exists at least one feasible schedule.
Theorem 1 Given an instance of Problem REW-PER, there exists an optimal solution where the optional
parts of a task T_i receive the same service time at every instance, i.e., t_i1 = t_i2 = ... = t_i(P/P_i) = t_i.
Furthermore, any periodic hard real-time scheduling policy which can fully utilize the processor (EDF,
LLF, RMS-h) can be used to obtain a feasible schedule with these assignments.
Proof:
Our strategy to prove the theorem will be as follows. We will drop the feasibility condition (7)
and obtain a new optimization problem whose feasible region strictly contains that of REW-PER.
Specifically, we consider a new optimization problem, denoted by MAX-REW, where the objective
function is again given by (4), but only the constraint sets (5) and (6) have to be satisfied. Note
that the new problem MAX-REW does not a priori correspond to any scheduling problem, since the
feasibility issue is not addressed. We then show that there exists an optimal solution of MAX-REW
where t_i1 = t_i2 = ... = t_i(P/P_i) = t_i for every task T_i. Then, we will return to REW-PER and demonstrate the existence
of a feasible schedule (i.e., satisfiability of (7)) under these assignments. The reward associated with
MAX-REW's optimal solution is always greater than or equal to that of REW-PER's optimal solution,
for MAX-REW does not consider one of REW-PER's constraints. This will imply that this specific
optimal solution of the new problem MAX-REW is also an optimal solution of REW-PER.
Now, we show that there exists an optimal solution of MAX-REW where t_ij = t_i for all j.

Claim 1 Let {t_ij} be an optimal solution to MAX-REW. Then {t'_ij}, where t'_ij = t_i = (P_i/P) · sum_{j=1}^{P/P_i} t_ij (the arithmetic mean of the optional service times of T_i), is also an optimal solution to
MAX-REW.
• We first show that the {t'_ij} values satisfy the constraints (5) and (6) if the {t_ij} values already satisfy them.
Since sum_{j} t'_ij = sum_{j} t_ij for every task T_i, the constraint (5) is not violated by the transformation. Also,
by assumption, 0 <= t_ij <= o_i for every j; hence t_i, which is the arithmetic mean of t_i1, ..., t_i(P/P_i),
is nonnegative and necessarily less than or equal to max_j t_ij <= o_i, so the constraint set (6) is not violated
either by the transformation.

• Furthermore, the total reward does not decrease by this transformation, since, by the concavity of f_i,
(P/P_i) · f_i(t_i) >= sum_{j=1}^{P/P_i} f_i(t_ij). The proof of this statement is presented in the Appendix.
Using Claim 1, we can commit to finding an optimal solution of MAX-REW by setting t_ij = t_i
for j = 1, ..., P/P_i and i = 1, ..., n. In this case, sum_{j=1}^{P/P_i} f_i(t_ij) = (P/P_i) · f_i(t_i). Hence, this
version of MAX-REW can be re-written as:

maximize sum_{i=1}^{n} f_i(t_i)   (8)
subject to sum_{i=1}^{n} (P/P_i)·(m_i + t_i) <= P   (9)
0 <= t_i <= o_i, i = 1, ..., n   (10)

Finally, we prove that the optimal solution t_1, ..., t_n of this problem automatically satisfies
the feasibility constraint (7) of our original problem REW-PER. Having equal optional service times
for a given task greatly simplifies the verification of this constraint. Since the t_i values
satisfy (9), we can write sum_{i=1}^{n} (P/P_i)·(m_i + t_i) <= P, or,
equivalently, sum_{i=1}^{n} (m_i + t_i)/P_i <= 1.
This implies that any policy which can achieve 100% processor utilization in classical periodic
scheduling theory (EDF, LLF, RMS-h) can be used to obtain a feasible schedule for the tasks, which now
have identical execution times (m_i + t_i) at every instance. Hence, the "full feasibility" constraint (7)
of REW-PER is satisfied. Furthermore, this schedule clearly maximizes the average reward since the {t_i}
values maximize MAX-REW whose feasible region encompasses that of REW-PER.
Corollary 1 Optimal t i values for the Problem REW-PER can be found by solving the optimization
problem given by (8), (9) and (10).
The details of the solution of this concave optimization problem are presented in Section 7.
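Once constant optional service times t_i are available, checking that they can be scheduled reduces to the utilization test implied by constraint (9); the sketch below, with hypothetical task parameters, performs this check before the task set is handed to an EDF (or LLF, or RMS-h) scheduler.

def edf_feasible(tasks, tol=1e-9):
    """tasks: list of (m_i, t_i, P_i). Returns True if the task set with
    execution times m_i + t_i is schedulable by EDF, i.e., if the total
    utilization sum (m_i + t_i) / P_i does not exceed 1 (constraint (9))."""
    utilization = sum((m + t) / P for (m, t, P) in tasks)
    return utilization <= 1.0 + tol

# Hypothetical parameters: (mandatory m_i, optional t_i, period P_i).
print(edf_feasible([(1.0, 0.5, 4.0), (2.0, 1.0, 8.0), (1.0, 0.75, 10.0)]))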
3.1 Extension to Multiprocessors
The existence proof of identical service times can be easily extended to homogeneous multiprocessors.
The original formulation of REW-PER needs to be modified in order to reflect the multiprocessor
environment. Note that the objective function (4), the lower and upper bound constraints (6) on
optional service times and the full feasibility constraint (7) can be kept as they are. However, with k
processors, the system can potentially have a task set whose total utilization is k instead of 1. Hence,
we need to change the first constraint accordingly.
By doing so, we obtain the formulation of periodic imprecise computation problem for k processors,
denoted as MULTI-REW:
maximize sum_{i=1}^{n} (P_i/P) · sum_{j=1}^{P/P_i} f_i(t_ij)   (11)
subject to sum_{i=1}^{n} sum_{j=1}^{P/P_i} (m_i + t_ij) <= k·P   (12)
0 <= t_ij <= o_i for all i, j   (13)
There exists a feasible schedule on k processors with the {m_i} and {t_ij} values   (14)
Following exactly the same line of reasoning depicted in Theorem 1, we can infer the following:
Theorem 2 Given an instance of Problem MULTI-REW, there exists an optimal solution where the
optional parts of a task T_i receive the same service time at every instance, i.e., t_i1 = t_i2 = ... = t_i(P/P_i) = t_i.
Furthermore, any scheduling policy which can achieve full utilization on k processors can be used to
obtain a feasible schedule with these assignments.
An example of such full-utilization policies for multiprocessors is provided by Mancini et al. in [23]. We
note that the PFair scheduling policy [2], which can also achieve full utilization, assumes that all the
periods are multiples of the slot length and hence cannot be used in this context.
Corollary 2 Optimal t i values for the Problem MULTI-REW can be found by solving the following
optimization problem:
maximize sum_{i=1}^{n} f_i(t_i)
subject to sum_{i=1}^{n} (P/P_i)·(m_i + t_i) <= k·P
0 <= t_i <= o_i, i = 1, ..., n
Again, the details of the solution of this concave optimization problem are given in Section 7.
4 Evaluation and comparison with other approaches
We showed through the example in Section 2 that the reward accrued by any Mandatory-First scheme
may only be approximately half of that of the optimal algorithm. We now prove that, under a worst-case
scenario, the ratio of the reward accrued by a Mandatory-First approach to the reward of the
optimal algorithm approaches zero.
Theorem 3 There is an instance of the periodic reward-based scheduling problem, parameterized by an integer r >= 2, for which the ratio

(Reward of the best Mandatory-First scheme) / (Reward of the optimal scheme)

tends to 0 as r grows.
Proof: Consider two tasks T_1 and T_2 such that P_2 = r · P_1 (the remaining task parameters are chosen as a function of r).
This setting suggests that during any period of T_1, a scheduler has the choice of executing parts of O_1 and O_2
in addition to M_1.
Note that under any Mandatory-First policy, the processor will be continuously busy executing
mandatory parts during most of the hyperperiod. Furthermore, the best algorithm among Mandatory-First
policies should use the remaining idle times by scheduling O_1 entirely (since it is the more valuable optional part) and then
the remaining units of O_2. The resulting schedule is shown in Figure 3.
Figure 3: A schedule produced by a Mandatory-First algorithm
The average reward that the best Mandatory-First algorithm (MFA) can accrue, denoted R_MFA, can then be computed directly from this schedule.
However, an optimal algorithm (shown in Figure 4) would choose to delay the execution of M_2
at every period of T_1. By doing so, it would have the opportunity of accruing the
reward of O_1 at every instance.

Figure 4: An optimal schedule
The total reward R_OPT of the above schedule can be computed in the same way. The ratio of the rewards of the two policies, R_MFA / R_OPT, turns out to be, for any r >= 2, a quantity
which can be made as close as possible to 0 by appropriately choosing r.

Theorem 3 gives the worst-case performance ratio of Mandatory-First schemes. We also performed
experiments with a synthetic task set to investigate the relative performance of Mandatory-First schemes
proposed in [7] with different types of reward functions and different mandatory/optional workload
ratios.
The Mandatory-First schemes differ by the policy according to which optional parts are scheduled
when there is no mandatory part ready to execute. Rate-Monotonic (RMSO) and Least-Utilization (LU)
schemes assign statically higher priorities to optional parts with smaller periods and least utilizations
respectively. Among dynamic priority schemes are Earliest-Deadline-First (EDFO) and Least-Laxity-
First (LLFO) which consider the deadline and laxity of optional parts when assigning priorities. Least
Attained Time (LAT) aims at balancing execution times of optional parts that are ready, by dispatching
the one that executed least so far. Finally, Best Incremental Return (BIR) is an on-line policy which
chooses the optional task contributing most to the total reward, at a given slot. In other words, at every
slot BIR selects the optional part O_ij such that the difference f_i(t_ij + Δ) − f_i(t_ij) is the largest (here t_ij
is the optional service time O_ij has already received and Δ is the minimum time slot that the scheduler
assigns to any optional task). However, it is still a suboptimal policy since it does not consider the
laxity information. The authors indicate in [7] that BIR is too computationally complex to be actually
implemented. However, since the total reward accrued by BIR is usually much higher than the other
five policies, BIR is used as a yardstick for measuring the performance of other algorithms.
We have used a synthetic task set comprising 11 tasks whose total (mandatory plus optional) utilization
is 2.3. Individual task utilizations vary from 0.03 to 0.6. Considering exponential, logarithmic and linear
reward functions as separate cases, we measured the reward ratio of six Mandatory-First schemes with
respect to our optimal algorithm. The tasks' characteristics (including reward functions) are given in
the Table below. In our experiments, we first set mandatory utilization to 0 (which corresponds to the
case of all-optional workload), then increased it to 0.25, 0.4, 0.6, 0.8 and 0.91 subsequently.
Figures 5 and 6 show the reward ratio of six Mandatory-First schemes with respect to our optimal
algorithm as a function of mandatory utilization, for different types of reward functions. A common
pattern appears: the optimal algorithm improves more dramatically with the increase in mandatory
utilization. The other schemes miss the opportunities of executing "valuable" optional parts while
constantly favoring mandatory parts. The reward loss becomes striking as the mandatory workload
increases. Figures 5.a and 5.b show the reward ratio for the case of exponential and logarithmic reward
functions, respectively. The curves for these strictly concave reward functions are fairly similar:
BIR performs best among Mandatory-First schemes, and its performance decreases as the mandatory
utilization increases; for instance the ratio falls to 0.73 when mandatory utilization is 0.6. Other algorithms
which are more amenable to practical implementations (in terms of runtime overhead) than
BIR perform even worse. However, it is worth noting that the performance of LAT is close to that of
BIR. This is to be expected, since task sets with strictly concave reward functions usually benefit from
"balanced" optional service times.
Figure 5: The reward ratio of Mandatory-First schemes for strictly concave reward functions: (a) exponential functions, (b) logarithmic functions (x-axis: mandatory utilization; y-axis: reward ratio with respect to optimal)
Figure 6: The reward ratio of Mandatory-First schemes for linear reward functions (x-axis: mandatory utilization; y-axis: reward ratio with respect to optimal)
Figure 6 shows the reward ratio for linear reward functions. Although the reward ratio of Mandatory-
First schemes again decreases with the mandatory utilization, the decrease is less dramatic than in the
case of concave functions (see above). However, note that the ratio is typically less than 0.5 for the
five practical schemes. It is interesting to observe that the (impractical) BIR's reward now remains
comparable to that of optimal, even in the higher mandatory utilizations: the difference is less than
15%. In our opinion, the main reason for this behavior change lies on the fact that, for a given task,
the reward of optional execution slots in different instances does not make a difference in the linear
case. In contrast, not executing the "valuable" first slot(s) of a given instance creates a tremendous
effect for nonlinear concave functions. The improvement of the optimal algorithm would be larger for a
larger range of k i values (where k i is the coefficient of the linear reward function). We note that even
the worst-case performance of BIR may be arbitrarily bad with respect to the optimal one for linear
functions, as Theorem 3 suggests.
5 Further considerations on the optimality of identical service times
We underline that Theorem 1 was the key to eliminating (potentially) an exponential number of unknowns t_ij
and thereby obtaining an optimization problem of only n variables t_1, ..., t_n. One is naturally tempted to ask
whether the optimality of identical service times is still preserved if some fundamental assumptions
of the model are relaxed. Unfortunately, attempts to reach further optimality results for extended /
different models remain inconclusive as the following propositions suggest.
Proposition 1 The optimality of identical service times no longer holds if the Deadline = Period
assumption is relaxed.
Proof:
We will prove the statement by providing a counter-example. Assume that we allow the deadline of
a task to be less than its period, and consider two tasks T_1 and T_2. Assume that the deadline of T_2 is strictly smaller than its period, while the deadline of T_1 coincides with
its period, i.e., d_1 = P_1 = 8. Note that the tight deadline of T_2 makes it impossible to schedule any optional
part of T_1 in its first instance; only afterwards are we able to schedule O_1 for 3 units. This optimal schedule is shown in
Figure 7. On the other hand, if one commits to identical service times per instance, it is clear that we
may not schedule any optional part, since we could not execute O_1 in the first instance of T_1 (Figure 8).

Next, suppose that the deadlines are equal to the periods, but we have to adopt a static priority
scheduling policy. It was already mentioned in Section 3, that if the periods are harmonic, then we
can use RMS without compromising optimality. But, in the general case where the periods are not
necessarily harmonic, this is not true even if we are investigating the 'best' schedule within the context
of a given static priority assignment.
Proposition 2 In the general case, the optimality of identical service times no longer holds if we
commit to a static priority assignment.
Figure 7: The optimal schedule

Figure 8: The best schedule with identical service times
Proof:
Again, consider the following task set:
As we have only two tasks, we will consider the cases where T 1 or T 2 has higher priority and show
that in every case, the reward of the optimal schedule differs from the one obtained with identical service
times per instance assumption.
Case 1 - T_1 has higher priority: It is easy to see that we can construct a schedule which fully
utilizes the timeline during the interval [0, lcm(P_1, P_2)]. This schedule is also immediately
optimal since the reward functions are linear (observe that, in this task set, executing O_2 accrues no reward). But we remark that we cannot execute O_1 for more than 1 unit in its first instance
in any feasible schedule without violating the deadline of T_2. Therefore, we would have ended up
with a lower reward after executing 1 unit of O_1 at each instance, if we had committed to identical
service times (Figure 9b).
Case 2 - T_2 has higher priority: The optimality is still compromised if T_2 has higher priority.
While the optimal schedule (Figure 10a) fully utilizes the timeline, the best schedule with identical service times (Figure 10b)
remains suboptimal.

We remark that Proposition 2 also has implications for the Q-RAM model [26], since it points to the
impossibility of achieving optimality with identical service times by using a static priority assignment.
Figure 9: T_1 has the higher priority: (a) the optimal schedule, (b) the best schedule with identical service times
Figure 10: T_2 has the higher priority: (a) the optimal schedule, (b) the best schedule with identical service times
The final proposition in this section illustrates that the optimality of identical service times is also
sensitive to the concavity of reward functions.
Proposition 3 The optimality of identical service times no longer holds if the concavity assumption
about the reward functions is relaxed.
Proof:
Consider two harmonic tasks without mandatory parts whose parameters are given in the following
table:
Note that t_2 should be assigned its maximum possible value (i.e., the upper bound 6), since the
marginal return of F_2 is larger than that of F_1 everywhere. An optimal schedule maximizing the average
reward for these two tasks is depicted in Figure 11.

Figure 11: An optimal schedule
The sum of the average rewards in the optimal schedule is 2.6. However, if we
commit ourselves to equal service times per instance, we can find no better schedule than the one
shown in Figure 12, whose total reward is strictly smaller.
It is not difficult to construct a similar example for tasks with 0/1 constraints as well, which
implies that even the number of variables to deal with (the t_ij's) may be prohibitively large in these
problems.

Figure 12: The best schedule with identical service times
6 Periodic Reward-Based Scheduling Problem with Convex Reward
Functions is NP-Hard
As we mentioned before, maximizing the total (or average) reward in the case of 0/1 constraints had already
been proven to be NP-Complete in [21]. Similarly, in Section 5 we showed that, if the reward functions
are convex, the optimality of identical service times is not preserved. In this section, we show that, in
fact convex reward functions result in an NP-Hard problem, even with identical periods.
We now show how to transform the SUBSET-SUM problem, which is known to be NP-Complete,
to REW-PER with convex reward functions.
SUBSET-SUM: Given a set S = {s_1, s_2, ..., s_n} of positive integers and an integer M, is there a set
SA ⊆ S such that sum_{s_i ∈ SA} s_i = M?
We construct the corresponding REW-PER instance as follows. Let W be an integer constant with W >= max_{1<=i<=n} s_i (e.g., W = max_i s_i). Now consider a
set of n periodic tasks with the same period M and mandatory parts m_i = 0 for all i. The reward function
associated with T_i is given by:

f_i(t_i) = t_i^2 + (W − s_i)·t_i,   0 <= t_i <= o_i = s_i,

a strictly convex and increasing function on nonnegative real numbers.
Notice that f_i(t_i) can be re-written as t_i·(t_i + W − s_i). Also, we underline that having the same
periods for all tasks implies that REW-PER can be formulated as:

maximize sum_{i=1}^{n} f_i(t_i)
subject to sum_{i=1}^{n} t_i <= M
0 <= t_i <= s_i, i = 1, ..., n
Let us denote by MaxRew the total reward of the optimal schedule. Observe that for 0 < t_i < s_i,
the quantity t_i·(t_i + W − s_i) is strictly less than W·t_i. Otherwise, at either of the boundary values 0 or s_i,
f_i(t_i) = W·t_i. It follows that MaxRew <= W·M.
Now, consider the question: "Is MaxRew equal to W·M?". Clearly, this question can be answered
quickly if there is a polynomial-time algorithm for REW-PER where reward functions are allowed to be
convex. Furthermore, the answer can be positive only when sum_{i=1}^{n} t_i = M and each t_i is equal to either 0
or s_i. Therefore, MaxRew is equal to W·M if and only if there is a set SA ⊆ S such that sum_{s_i ∈ SA} s_i = M,
which implies that REW-PER with convex reward functions is NP-Hard.
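The following sketch spells out the reduction as reconstructed above (with W taken as max_i s_i, which is an assumption of this presentation): it builds the convex reward functions from a SUBSET-SUM instance and shows that a subset-sum solution attains the bound W·M while an interior assignment of the same total service time does not.

def build_rewper_instance(S, M):
    """Reduction sketch: from SUBSET-SUM (S, M) build a REW-PER instance with
    identical periods M, zero mandatory parts, and convex rewards
    f_i(t) = t*(t + W - s_i), with W = max(S)."""
    W = max(S)
    rewards = [(lambda t, s=s: t * (t + W - s)) for s in S]
    return rewards, W

def total_reward(rewards, t_values):
    return sum(f(t) for f, t in zip(rewards, t_values))

S, M = [3, 5, 7, 11], 15            # hypothetical SUBSET-SUM instance
rewards, W = build_rewper_instance(S, M)
# A subset-sum solution (3 + 5 + 7 = 15) reaches the bound W*M = 11*15 = 165:
print(total_reward(rewards, [3, 5, 7, 0]), W * M)
# An assignment using interior values (within the bounds s_i) with the same
# total service time falls short of W*M:
print(total_reward(rewards, [3, 5, 3.5, 3.5]))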
7 Solution of Periodic Reward-Based Scheduling Problem with Concave
Reward Functions
Corollaries 1 and 2 reveal that the two optimization problems whose solutions provide optimal service
times for uniprocessor and multiprocessor systems share a common form:
maximize sum_{i=1}^{n} f_i(t_i)
subject to sum_{i=1}^{n} b_i·t_i <= d
0 <= t_i <= o_i, i = 1, ..., n

where d (the 'slack' available for optional executions) and b_1, ..., b_n are positive rational numbers (for the uniprocessor problem, b_i = P/P_i and d = P − sum_{i} (P/P_i)·m_i).
In this section, we present polynomial-time solutions for this problem, where each f i is a nondecreasing,
concave and differentiable 2 function.
First note that, if the available slack is large enough to accommodate every optional part entirely
(i.e., if sum_{i=1}^{n} b_i·o_i <= d), then the choice t_i = o_i for every i clearly maximizes the objective function due to the
nondecreasing nature of reward functions.
Otherwise, the slack d should be used in its entirety since the total reward never decreases by doing
so (again due to the nondecreasing nature of the reward functions). In this case, we obtain a concave
optimization problem with lower and upper bounds, denoted by OPT-LU. An instance of OPT-LU
is specified by the set of nondecreasing concave functions F = {f_1, ..., f_n}, the set of upper bounds O = {o_1, ..., o_n},
and the available slack d. The aim is to:

maximize sum_{i=1}^{n} f_i(t_i)   (21)
subject to sum_{i=1}^{n} b_i·t_i = d   (22)
0 <= t_i <= o_i, i = 1, ..., n   (23)
2 In the auxiliary optimization problems which will be introduced shortly, the differentiability assumption holds as well.
Special Case of Linear functions: We address separately the case when F comprises solely linear
functions, since the time complexity can be considerably reduced by using this information. Note that
for a function f_i(t_i) = k_i·t_i, if we increase t_i by Δ then the total reward increases by k_i·Δ. However, by
doing so, we make use of b_i·Δ units of slack (d is reduced by b_i·Δ due to (22)). Hence, the "marginal
return" of task T_i per slack unit is w_i = k_i / b_i. Now consider another function f_j(t_j) = k_j·t_j. If k_j / b_j > k_i / b_i, then T_j should always be favored with respect to T_i, since the marginal return
of f_j is strictly greater than that of f_i everywhere. Repeating the argument for every pair of tasks, we can
obtain the following optimal strategy.
We first order the functions according to their marginal returns w_i = k_i / b_i; let f_1 be
the function with the largest marginal return, f_2 the second, and so on (ties are broken arbitrarily). If
b_1·o_1 >= d, then we set t_1 = d / b_1 and we are done (we are using the entire slack for T_1, since transferring
service time to any other task would not increase the total reward). If b_1·o_1 < d, then we set t_1 = o_1
and d is reduced accordingly (d = d − b_1·o_1). Next, we repeat the same for the next "most valuable"
task T_2. After at most n iterations, the slack d is completely consumed. We note that this solution
is analogous to the one presented in [26]. The dominant factor in the time complexity comes from the
initial sorting procedure, hence in the special case of all-linear functions, OPT-LU can be solved in time
O(n log n).
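A minimal sketch of this all-linear greedy strategy (function and variable names are mine, not the paper's): sort the tasks by marginal return k_i/b_i and spend the slack d in that order, capping each task at its upper bound o_i.

def allocate_linear(tasks, d):
    """tasks: list of (k_i, b_i, o_i) with linear rewards f_i(t) = k_i * t.
    Greedily spends the slack d on the tasks with the largest marginal
    return k_i / b_i; returns the list of optimal service times t_i."""
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][0] / tasks[i][1],
                   reverse=True)
    t = [0.0] * len(tasks)
    for i in order:
        k, b, o = tasks[i]
        if d <= 0:
            break
        t[i] = min(o, d / b)     # give as much as the slack (or o_i) allows
        d -= b * t[i]
    return t

# Hypothetical instance: (k_i, b_i, o_i) per task and slack d = 10.
print(allocate_linear([(4.0, 2.0, 3.0), (3.0, 1.0, 5.0), (1.0, 1.0, 4.0)], 10.0))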
When F contains nonlinear functions then the procedure becomes more involved. In the next two
subsections, we introduce two auxiliary optimization problems, namely Problem OPT (which considers
only the equality constraint) and Problem OPT-L (which considers only the equality and lower bound
constraints), which will be used to solve OPT-LU.
7.1 Problem OPT: Case of the Equality Constraint
An instance of the problem OPT is characterized by the set F = {f_1, ..., f_n} of nondecreasing concave
functions and the slack d:

maximize sum_{i=1}^{n} f_i(t_i)
subject to sum_{i=1}^{n} b_i·t_i = d
As it can be seen, OPT does not take into account the lower and upper bound constraints of Problem
OPT-LU. The algorithm which returns the solution of Problem OPT, is denoted by "Algorithm OPT".
When F is composed solely of non-linear reward functions, the application of the Lagrange multipliers
technique [22] to the Problem OPT yields:

(1/b_i)·f_i'(t_i) = λ,   i = 1, ..., n   (25)

where λ is the common Lagrange multiplier and f_i' is the derivative of the reward function f_i. The
quantity (1/b_i)·f_i'(t_i) actually represents the marginal return contributed by T_i to the total reward,
which we will denote as w_i(t_i). Observe that since f_i is non-decreasing and concave by assumption,
both f_i'(t_i) and w_i(t_i) are non-increasing and positive valued. Equation (25) implies that the marginal
returns w_i(t_i) should be equal for all reward functions in the optimal solution {t_1, ..., t_n}.
Considering that the equality constraint sum_{i=1}^{n} b_i·t_i = d should also hold, one can obtain closed formulas in most of
the cases which occur in practice. The closed formulas presented below are obtained by this method.

• For logarithmic reward functions of the form f_i(t_i) = ln(k_i·t_i + c_i), closed-form expressions for λ and the corresponding t_i values can be obtained.

• For exponential reward functions of the form f_i(t_i) = c_i·(1 − e^{−k_i·t_i}), closed-form expressions can likewise be obtained.

• For "k-th root" reward functions of the form f_i(t_i) = t_i^{1/k}, closed-form expressions can likewise be obtained.

When it is not possible to find a closed formula, following exactly the approach presented in [14, 15,
18], we solve for λ in the equation sum_{i=1}^{n} b_i·w_i^{-1}(λ) = d, where w_i^{-1} is the inverse function of the marginal return w_i(t) = (1/b_i)·f_i'(t)
(we assume the existence of the derivative's inverse function whenever f_i is nonlinear, complying with
[14, 15, 18]). Once λ is determined, t_i = w_i^{-1}(λ) is the optimal solution.
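When no closed formula applies, λ can also be found numerically; the sketch below is a hypothetical stand-in for the inverse-function approach of [14, 15, 18] that bisects on the common marginal return λ and, for each candidate λ, inverts f_i' numerically (strictly concave rewards, no upper bounds).

def solve_opt(fprimes, b, d, lo=1e-12, hi=1e6, iters=100):
    """Problem OPT: maximize sum f_i(t_i) s.t. sum b_i*t_i = d, for strictly
    concave f_i. fprimes[i](t) is the derivative f_i'(t). We bisect on the
    common marginal return lambda = f_i'(t_i)/b_i and invert it numerically."""
    def t_of(lam):
        # For each task, find t_i >= 0 with f_i'(t_i) = lam * b_i (inner bisection).
        ts = []
        for fp, bi in zip(fprimes, b):
            target = lam * bi
            if fp(0.0) <= target:      # marginal return already too low at 0
                ts.append(0.0)
                continue
            a, c = 0.0, 1.0
            while fp(c) > target:      # bracket the root
                c *= 2.0
            for _ in range(60):
                m = 0.5 * (a + c)
                a, c = (m, c) if fp(m) > target else (a, m)
            ts.append(0.5 * (a + c))
        return ts
    for _ in range(iters):             # outer bisection on lambda
        lam = 0.5 * (lo + hi)
        spent = sum(bi * ti for bi, ti in zip(b, t_of(lam)))
        lo, hi = (lam, hi) if spent > d else (lo, lam)
    return t_of(0.5 * (lo + hi))

# Two hypothetical logarithmic rewards f_i(t) = ln(1 + k_i*t), so f_i'(t) = k_i/(1 + k_i*t).
fp = [lambda t: 3.0 / (1.0 + 3.0 * t), lambda t: 1.0 / (1.0 + t)]
print(solve_opt(fp, b=[1.0, 2.0], d=4.0))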
We now examine the case where F is a mix of linear and nonlinear functions. Consider two linear
functions f_i(t) = k_i·t and f_j(t) = k_j·t. The marginal return of f_i is w_i = k_i/b_i and that
of f_j is w_j = k_j/b_j. If w_j > w_i, then the service time t_i should definitely be zero, since the
marginal return of f_i is strictly less than that of f_j everywhere. After this elimination process, if there are
p > 1 linear functions with the same (largest) marginal return w_max, then we will consider them as a
single linear function in the procedure below and evenly divide the returned service time t_max among the
values corresponding to these p functions.
Hence, without loss of generality, we assume that f_n is the only linear function in F. Its
marginal return is w_n = k_n/b_n. We first compute the optimal distribution of the slack d among the
tasks with nonlinear reward functions f_1, ..., f_{n-1}. By the Lagrange multipliers technique, w_1(t_1) = w_2(t_2) = ... = w_{n-1}(t_{n-1}) = λ
at the optimal solution t_1*, ..., t_{n-1}*.
Now we distinguish two cases:

• λ >= w_n. In this case, t_1*, ..., t_{n-1}*, t_n = 0 is the optimal solution to OPT. To see this,
first remember that all the reward functions are concave and nondecreasing, hence every w_i(t) is non-increasing. This implies that transferring some service
time from another task T_i to T_n would mean favoring the task which has the smaller marginal
reward rate and would not be optimal.

• λ < w_n. In this case, reserving the slack d solely for the tasks with nonlinear reward functions means
violating the best marginal rate principle and hence is not optimal. Therefore, we should increase
the service times of the nonlinear tasks only until their marginal return drops to the level of w_n, not beyond. Solving
w_i(t_i) = w_n for each i < n and assigning any remaining slack (d − sum_{i=1}^{n-1} b_i·t_i)/b_n to t_n (the service
time of the unique task with a linear reward function) clearly satisfies the best marginal rate principle
and achieves optimality.
7.2 Problem OPT-L: Case of Lower Bounds
In this section we present a solution for problem OPT-L and we improve on this solution in Section 7.2.1.
An instance of Problem OPT-L is characterized by the set F = {f_1, ..., f_n} of nondecreasing concave
reward functions and the available slack d:

maximize sum_{i=1}^{n} f_i(t_i)
subject to sum_{i=1}^{n} b_i·t_i = d   (27)
t_i >= 0, i = 1, ..., n   (28)
To solve OPT-L, we first evaluate the solution set SOPT to the corresponding problem OPT and
check whether all inequality constraints are automatically satisfied. If this is the case, the solution
set SOPT−L of Problem OPT-L is clearly the solution SOPT. Otherwise, we will construct SOPT−L
iteratively as described below.
A well-known result of nonlinear optimization theory states that the solution SOPT−L of Problem
OPT-L should satisfy so-called Kuhn-Tucker conditions [22, 25]. Furthermore, Kuhn-Tucker conditions
are also sufficient in the case of concave reward functions [22, 25]. For Problem OPT-L, Kuhn-Tucker
conditions comprise Equations (27), (28) and:

(1/b_i)·f_i'(t_i) − λ + μ_i = 0,   i = 1, ..., n   (29)
μ_i·t_i = 0,   i = 1, ..., n   (30)
μ_i >= 0,   i = 1, ..., n   (31)

where λ and μ_1, ..., μ_n are Lagrange multipliers. The necessary and sufficient character of the Kuhn-Tucker
conditions indicates that any (2n+1)-tuple (t_1, ..., t_n, λ, μ_1, ..., μ_n) which satisfies conditions (27)
through (31) provides optimal t i values for OPT-L.
One method of solving the optimization problem OPT-L is to find a solution to the 2n+1 equations
(27), (29) and (30) which satisfies constraint sets (28) and (31). Iteratively solving the 2n+ 1 nonlinear
equations is a complex process which is not guaranteed to converge. In this paper, we follow a different
approach. Namely, we use the Kuhn-Tucker conditions (29), (30) and (31) to prove some useful properties
of the optimal solution. Our method is based on carefully using the properties that we derive in
order to refine the solution of the optimization problem OPT.
Claim 2 If SOPT violates some inequality constraints given by (28), then there exists i such that μ_i > 0.

Proof: Assume to the contrary that μ_i = 0 for all i. In this case the Kuhn-Tucker conditions reduce to the
equality constraint (27), the set of inequality constraints (28), plus the Lagrangian condition given in (25).
On the other hand, SOPT, which is the solution of OPT, should satisfy (27) and the Lagrangian condition
(25). In other words, solving OPT is always equivalent to solving a set of non-linear equations which
are identical to the Kuhn-Tucker conditions of OPT-L except for the inequality constraints, by setting μ_i = 0 for all i.
Hence, if there were a solution to OPT-L where μ_i = 0 for all i, then that solution would be returned by the
algorithm solving OPT and would not violate the inequality constraints. However, given that the solution
SOPT failed to satisfy all the inequality constraints, we reach a contradiction. Therefore, there exists at
least one Lagrange multiplier μ_i which is strictly greater than 0.

Claim 3 There exists j such that μ_j = 0.

Proof: For the sake of contradiction, assume that μ_j > 0 for all j. In this case Equation (30) enforces
that t_i = 0 for all i. If this were true, sum_{i=1}^{n} b_i·t_i would be equal to 0, leaving the slack d totally unutilized.
In this case, this clearly would not be the optimal solution due to the nondecreasing nature of the reward
functions.

In the rest of the paper, we use the expression "the set of functions" instead of "the set of indices
of functions" unless confusion arises. Let:

Π = { f_x ∈ F | w_x(0) <= w_y(0) for all f_y ∈ F }

Remember that (1/b_x)·f_x'(t_x) is the marginal return associated with f_x(t_x) and is denoted by w_x(t_x). Informally, Π contains the functions f_x ∈ F with the smallest marginal returns at the lower bound 0.
Lemma 1 If SOPT violates some inequality constraints then, in SOPT−L, t_x = 0 for every f_x ∈ Π.

Proof: Assume that there exists m ∈ Π such that t_m > 0. In this case, Equation (30) implies that the
corresponding μ_m = 0. By Claim 2, we know that there exists j such that μ_j > 0; by Equation (30), t_j = 0. Using
Equation (29), we can write w_m(t_m) = λ − μ_m = λ and w_j(0) = λ − μ_j < λ. Furthermore,
the concavity property of f_m suggests that w_m(t_m) <= w_m(0). But in this case we
obtain w_j(0) < λ = w_m(t_m) <= w_m(0), contradicting the assumption that m ∈ Π. Hence μ_m > 0 and, by
Equation (30), t_m = 0.

In view of Lemma 1, we present the algorithm that solves Problem OPT-L in Figure 13.
Algorithm OPT-L(F, d)
1 Evaluate the solution SOPT of the optimization problem OPT (without inequality constraints)
2 If all the inequality constraints are satisfied then SOPT−L = SOPT; exit
3 Compute Π from its defining equation
4 Set t_x = 0 for every f_x ∈ Π
5 Set F = F − Π
6 goto Step 1

Figure 13: Algorithm to solve Problem OPT-L
Complexity: The time complexity C_OPT(n) of Algorithm OPT is O(n) (if the closed formulas
mentioned above apply, then the complexity is clearly linear; otherwise the unique unknown λ can be solved for
in linear time under concavity assumptions, as indicated in [14, 15, 18]). Lemma 1 immediately implies
the existence of an algorithm which sets t_x = 0 for every f_x ∈ Π and re-invokes Algorithm OPT for the
remaining tasks and slack (in case some inequality constraints are violated by SOPT). Since the
number of invocations is bounded by n, the complexity of the algorithm which solves OPT-L is O(n^2).
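To illustrate the structure of Algorithm OPT-L, here is a small sketch for one concrete reward family, f_i(t) = c_i·ln(1+t) with b_i = 1, for which Problem OPT has a closed form; the instance values are arbitrary.

def opt(cs, d):
    """Closed-form solution of Problem OPT for rewards f_i(t) = c_i*ln(1+t),
    b_i = 1: equalize marginal returns c_i/(1+t_i) = lambda with sum t_i = d."""
    lam = sum(cs) / (d + len(cs))
    return [c / lam - 1.0 for c in cs]

def opt_l(cs, d):
    """Algorithm OPT-L (Figure 13 structure): if the unconstrained solution has
    negative entries, fix t_x = 0 for the tasks with the smallest marginal
    return at 0 (here w_x(0) = c_x) and re-invoke OPT on the rest."""
    active = list(range(len(cs)))
    t = [0.0] * len(cs)
    while active:
        sol = opt([cs[i] for i in active], d)
        if all(x >= 0.0 for x in sol):
            for i, x in zip(active, sol):
                t[i] = x
            return t
        w_min = min(cs[i] for i in active)             # smallest w_i(0)
        active = [i for i in active if cs[i] > w_min]  # drop the Pi set (t = 0)
    return t

# Hypothetical instance: c = (5, 3, 0.2), slack d = 4  ->  [2.75, 1.25, 0.0]
print(opt_l([5.0, 3.0, 0.2], 4.0))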
7.2.1 Faster Solution for Problem OPT-L
In this section, we present a faster algorithm, of time complexity O(n · log n), to solve OPT-L. We will
make use of the new (faster) algorithm in the final solution of OPT-LU.
Consider Algorithm OPT-L depicted in Fig. 13. Let F_k be the set of functions passed to OPT during
the k-th iteration of Algorithm OPT-L, and Π_k be the set of functions with minimum marginal returns
at the lower bounds (minimum w_i(0) values) during the k-th iteration (formally, Π_k = { f_x ∈ F_k | w_x(0) <= w_y(0) for all f_y ∈ F_k }).
Further, let n* <= n be the number of distinct w_i(0) values among the functions in F,
and m* <= n* be the iteration number at which Algorithm OPT returns a solution set which satisfies the
constraint set given by (28) for the remaining t_i values. Note that the elements of Π_1, Π_2, ..., Π_{n*} can
be produced in O(n · log n) time during a preprocessing phase. Clearly, Algorithm OPT-L sequentially
sets t_x = 0 for the functions in Π_1, then Π_2, and so on, and returns a solution which does not violate
the constraint set for the remaining unknowns at the (m*)-th iteration.
A tempting idea is to use binary search in the range [1, n*] to locate the critical index m* in a faster
way. However, to justify the correctness of such a procedure one needs to prove that if one had further
set t_x = 0 for additional sets Π_y beyond Π_{m*} and subsequently invoked the algorithm OPT, then the solution
SOPT obtained in this way would have still satisfied the constraint set given by (28). Notice that if
this property does not hold, then it is not possible to determine the "direction" of the search by simply
testing SOPT at a given index i, since we must be assured that there exists a unique index m* such that:

• setting t_x = 0 for every f_x ∈ Π_1 ∪ ... ∪ Π_y and invoking OPT does not provide a solution SOPT which
satisfies the inequality constraints whenever 1 <= y < m* − 1;

• setting t_x = 0 for every f_x ∈ Π_1 ∪ ... ∪ Π_y and invoking OPT does provide a solution SOPT which satisfies
the inequality constraints whenever m* − 1 <= y <= n*.

The first of these properties follows directly from the correctness of Algorithm OPT-L. It turns
out that the second property also holds for concave objective functions, as proven below. Hence, the
time complexity C_{OPT−L}(n) may be reduced to O(n · log n) by using a binary-search-like technique.
Algorithm FAST-L, which solves Problem OPT-L in time O(n · log n), is shown in Figure 14.
7.2.2 Correctness proof of the Fast Algorithm
We begin by introducing the following additional notation regarding the k-th iteration of Algorithm
OPT-L.

• t_{i,k}: the service time assigned to the optional part of task T_i by OPT during the k-th iteration of Algorithm OPT-L
• SOPT,k: the solution produced by OPT during the k-th iteration of Algorithm OPT-L
• V_k = { i | t_{i,k} < 0 }: the set of indices for which the solution SOPT,k violates inequality constraints.
Clearly, Algorithm OPT-L successively sets t_x = 0 for the functions in Π_1, Π_2, ..., and
returns a solution which does not violate any constraints for the functions in F_{m*} at the (m*)-th iteration.
Algorithm FAST-L uses binary search to determine the critical index m* efficiently. The correctness of
Algorithm OPT-L assures that, for every y < m* − 1, setting t_x = 0 for f_x ∈ Π_1 ∪ ... ∪ Π_y and invoking OPT
Algorithm FAST-L(F, d)
1 Evaluate SOPT of the corresponding Problem OPT
2 If all the constraints are satisfied then SOPT−L = SOPT; exit
3 Enumerate the functions in F according to the nondecreasing order of w_j(0) values and construct the sets Π_1, ..., Π_{n*}
4 lo = 1; hi = n*
5 repeat: m = floor((lo + hi)/2)
6 Evaluate SOPT by invoking OPT(Π_{m+1} ∪ ... ∪ Π_{n*}, d)
7 if a constraint is violated
8 then lo = m + 1
9 else {
10 Evaluate SOPT by invoking OPT(Π_m ∪ Π_{m+1} ∪ ... ∪ Π_{n*}, d)
11 if a constraint is violated
12 then { set t_x = 0 for every f_x ∈ Π_1 ∪ ... ∪ Π_m; SOPT−L is given by the solution of Step 6; exit }
13 else hi = m − 1
14 }

Figure 14: Fast Algorithm for Problem OPT-L
would yield a non-empty violating set V_y for the remaining tasks. Finally, Proposition 4 establishes
that for all y >= m* − 1, V_y will always remain empty after setting t_x = 0 for every f_x ∈ Π_1 ∪ ... ∪ Π_y
and invoking OPT, since this would leave even more slack for the remaining tasks. In the algorithm
FAST-L, a specific index m is tested at each iteration to check whether it satisfies the property
V_m = ∅ and V_{m−1} ≠ ∅. If this is the case, then the critical index has been found, since there is only one index m satisfying this
property. However, in case that V_m ≠ ∅, then we can infer that the critical index is larger than m and the next probe is
determined in the range (m, n*). Finally, if both V_m = ∅ and V_{m−1} = ∅, then we restrict the search to
the range (0, m).
Proposition 4 Suppose that, during the execution of Algorithm OPT-L, SOPT,k does not violate any
inequality constraints (i.e., V_k = ∅). Then the (k+1)-th invocation of
Algorithm OPT for the remaining tasks yields SOPT,k+1 such that t_{i,k+1} >= t_{i,k} for all f_i ∈ F_{k+1}
(which implies that V_{k+1} is still empty).
Proof: Note that t_{i,k} >= 0 for all f_i ∈ F_k by assumption. Based on the optimality property of
subproblems, if the k-th invocation of Algorithm OPT yields an optimal solution, it will also generate
the optimal distribution of d − sum_{f_x ∈ Π_k} b_x·t_{x,k} among the functions in F_k − Π_k. However,
the (k+1)-th invocation provides the optimal distribution of d among the functions in F_k − Π_k as well
(by setting t_x = 0 for every f_x ∈ Π_k). Thus, two successive invocations of Algorithm OPT can be written as:

maximize sum_{f_i ∈ F_k − Π_k} f_i(t_i)
subject to sum_{f_i ∈ F_k − Π_k} b_i·t_i = d − sum_{f_x ∈ Π_k} b_x·t_{x,k}

and

maximize sum_{f_i ∈ F_k − Π_k} f_i(t_i)
subject to sum_{f_i ∈ F_k − Π_k} b_i·t_i = d

Hence the proof will be complete if we show that t_{i,k+1} >= t_{i,k} for all f_i ∈ F_k − Π_k.
Any solution SOPT,k should satisfy the first-order necessary conditions for the Lagrangian [22]:

(1/b_i)·f_i'(t_{i,k}) = λ_k   for all f_i ∈ F_k − Π_k   (34)

The necessary conditions (34) give

(1/b_c)·f_c'(t_{c,k}) = (1/b_d)·f_d'(t_{d,k})   (35)
(1/b_c)·f_c'(t_{c,k+1}) = (1/b_d)·f_d'(t_{d,k+1})   (36)

for any two functions f_c, f_d ∈ F_k − Π_k. For the sake of contradiction, assume that there exists f_w ∈ F_k − Π_k with t_{w,k+1} < t_{w,k}. Since the slack distributed in the (k+1)-th invocation is not smaller,
there must be some f_y ∈ F_k − Π_k such that t_{y,k+1} > t_{y,k}. We distinguish two cases:
1. f_w is nonlinear, that is, its derivative is strictly decreasing. Since f_y is also concave, we can
bound (1/b_w)·f_w'(t_{w,k+1}) strictly from below by (1/b_w)·f_w'(t_{w,k}), and (1/b_y)·f_y'(t_{y,k+1}) from above
by (1/b_y)·f_y'(t_{y,k}); these bounds are clearly inconsistent with
Equations (35) and (36). This can be easily seen by substituting w for c and y for d in Equations
(35) and (36), respectively.

2. f_w is linear, which implies that f_w'(t) = k_w for every t. In this case, to satisfy Equations (35)
and (36), f_y should also be linear, of the form f_y(t) = k_y·t with k_y/b_y = k_w/b_w. Hence, the two functions
have the same marginal return. But remembering our assumption from Section
7.1 that Algorithm OPT treats all linear functions of the same marginal return "fairly" (that is,
assigns them the same amount of service time), we reach a contradiction, since t_{w,k} was supposed
to be greater than t_{y,k}.
Complexity: At most O(log n) probes are made during binary search and at each probe Algorithm
OPT is called twice. Recall that Algorithm OPT has the time complexity O(n). The initial cost of
sorting the derivative values is O(n log n). Hence the total complexity is C_{OPT−L}(n) = O(n·log n + n·log n), which is O(n · log n).
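The binary-search idea behind FAST-L can be sketched for the same illustrative reward family as above: group the tasks by their marginal return at 0, and probe (by bisection) how many of the smallest-return groups must be set to zero before OPT yields a nonnegative solution.

import itertools

def opt(cs, d):
    # Closed form for f_i(t) = c_i*ln(1+t), b_i = 1 (see the earlier sketch).
    lam = sum(cs) / (d + len(cs))
    return [c / lam - 1.0 for c in cs]

def fast_l(cs, d):
    """Binary search for the smallest number m of 'smallest marginal return'
    groups that must be set to zero so that OPT on the rest is nonnegative."""
    order = sorted(range(len(cs)), key=lambda i: cs[i])          # by w_i(0) = c_i
    groups = [list(g) for _, g in itertools.groupby(order, key=lambda i: cs[i])]

    def ok(m):                     # zero the first m groups, solve OPT on the rest
        keep = [i for g in groups[m:] for i in g]
        return keep, (not keep) or all(x >= 0 for x in opt([cs[i] for i in keep], d))

    lo, hi = 0, len(groups)
    while lo < hi:                 # bisection: find the smallest feasible m
        mid = (lo + hi) // 2
        if ok(mid)[1]:
            hi = mid
        else:
            lo = mid + 1
    keep, _ = ok(lo)
    t = [0.0] * len(cs)
    for i, x in zip(keep, opt([cs[i] for i in keep], d) if keep else []):
        t[i] = x
    return t

print(fast_l([5.0, 3.0, 0.2], 4.0))   # same instance as before -> [2.75, 1.25, 0.0]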
7.3 Combining All Constraints: Solution of Problem OPT-LU
An instance of Problem OPT-LU is characterized by the set F = {f_1, ..., f_n} of nondecreasing,
differentiable, and concave reward functions, the set O = {o_1, ..., o_n} of upper bounds on the lengths
of the optional execution parts, and the available slack d:

maximize sum_{i=1}^{n} f_i(t_i)   (37)
subject to sum_{i=1}^{n} b_i·t_i = d   (38)
t_i <= o_i, i = 1, ..., n   (39)
t_i >= 0, i = 1, ..., n   (40)

We recall that sum_{i=1}^{n} b_i·o_i > d in the specification of OPT-LU.
We first observe the close relationship between the problems OPT-LU and OPT-L. Indeed, OPT-LU
has only an additional set of upper bound constraints. It is not difficult to see that if SOPT−L satisfies
the constraints given by Equation (39), then the solution SOPT−LU of problem OPT-LU is the same
as SOPT−L. However, if an upper bound constraint is violated, then we will construct the solution
iteratively, in a way analogous to that described in the solution of Problem OPT-L. Let

Γ = { f_x ∈ F | w_x(o_x) >= w_y(o_y) for all f_y ∈ F };

that is, Γ contains the functions f_x ∈ F with the largest marginal returns at the upper bounds, w_x(o_x).
The algorithm ALG-OPT-LU (see Figure 15), which solves Problem OPT-LU, is based on
successive invocations of FAST-L. First, we find the solution S_{OPT-L} of the corresponding problem
OPT-L. We know that this solution is optimal for the simpler problem which does not take into account
upper bounds. If the upper bound constraints are automatically satisfied, then we are done. However,
if this is not the case, we set t_x = o_x for all x ∈ Γ. Finally, we update the sets F, O and the slack d before
going through the next iteration.
Correctness: Most of the algorithm is self-explanatory in view of the results obtained in previous
sections. However, Line 5 of ALG-OPT-LU requires further elaboration. In addition to constraints
(38), (39) and (40), the necessary and sufficient Kuhn-Tucker conditions for Problem OPT-LU can be
expressed as:
Algorithm OPT-LU(F, O, d)
3   Evaluate S_{OPT-L} by invoking Algorithm FAST-L
4   if all upper bound constraints are satisfied then return S_{OPT-L}
5   set t_x = o_x for all x ∈ Γ
    set d = d − Σ_{x ∈ Γ} o_x
    set F = F − Γ
9   set O = O − {o_x | x ∈ Γ}
Figure 15: Algorithm to solve Problem OPT-LU
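The iteration of Figure 15 can be summarized by a short Python sketch. It is a hedged illustration rather than a faithful implementation: the callable solve_opt_l stands in for Algorithm FAST-L, and, for simplicity, every task whose OPT-L time exceeds its bound is frozen at that bound, whereas the algorithm above freezes only the set Γ of largest marginal returns at the bounds.

def opt_lu(solve_opt_l, o, d):
    # o: dict task -> upper bound o_i; d: available slack.
    # solve_opt_l(active, d) must return {i: t_i} for the active tasks,
    # respecting only the lower bounds (the role of Algorithm FAST-L).
    active, t = set(o), {}
    while active:
        sol = solve_opt_l(active, d)
        over = {i for i in active if sol[i] > o[i] + 1e-12}
        if not over:                     # all upper bounds satisfied: done
            t.update(sol)
            return t
        for i in over:                   # freeze violators at their upper bounds
            t[i] = o[i]
            d -= o[i]
        active -= over                   # update F, O and the slack d
    return t

equal_split = lambda active, d: {i: d / len(active) for i in active}
print(opt_lu(equal_split, {0: 1.0, 1: 5.0, 2: 5.0}, 9.0))   # {0: 1.0, 1: 4.0, 2: 4.0}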
where the additional variables appearing in conditions (41)-(45) are Lagrange multipliers.
Claim 4 If S_{OPT-L} violates the upper bound constraints given by Equation (39), then ∃i such that the Lagrange multiplier associated with the upper bound constraint of task i is non-zero.
Proof: Assume that, ∀i, these multipliers are zero.
In this case, Kuhn-Tucker conditions (42) and (44) vanish. Also,
conditions (38), (40), (41), (43) and (45) become exactly identical to the Kuhn-Tucker conditions of Problem
OPT-L. Thus, S_{OPT-L} returned by Algorithm FAST-L is also equal to S_{OPT-LU} if and only if it satisfies
the extra constraint set given by (39).
Claim 5 ∀i, the Lagrange multipliers associated with the upper and lower bound constraints of task i cannot both be non-zero.
Proof: Assume that ∃i such that both of these multipliers are non-zero.
In this case, (42) and (43) force us to choose t_i = o_i and t_i = 0 simultaneously,
which implies that o_i = 0; this is contrary to our assumption that o_i > 0 in
the specification of the problem.
Now we are ready to justify Line 5 of the algorithm:
If S_{OPT-L} violates the upper bound constraints given by (39), then, in S_{OPT-LU}, t_x = o_x for all
x ∈ Γ.
Proof: We will prove that the Lagrange multipliers associated with the upper bound constraints of the tasks in Γ
are all non-zero, which will imply
(by (42)) that t_x = o_x for all x ∈ Γ. We know that ∃j whose multiplier is
non-zero (by the previous claims). Using (41), and the fact that t_j is
necessarily less than or equal to o_j, we can deduce that the multiplier of any m ∈ Γ must be non-zero as well; assuming otherwise
contradicts our assumption that m ∈ Γ.
Complexity: Notice that the worst-case time complexity of each iteration is equal to that of
Algorithm FAST-L, which is O(n · log n). Observe that the cardinality of F decreases by at least 1
after each iteration. Hence, the number of iterations is bounded by n. It follows that the total time
complexity C_{OPT-LU}(n) is O(n^2 · log n).
8 Conclusion
In this paper, we have addressed the periodic reward-based scheduling problem. We proved that when
the reward functions are convex, the problem is NP-Hard. Thus, our focus was on linear and concave
reward functions, which adequately represent realistic applications such as image and speech processing,
time-dependent planning and multimedia presentations. We have shown that for this class of reward
functions, there always exists a schedule where the optional execution times of a given task do not change
from instance to instance. This result, in turn, implied the optimality of any periodic real-time policy
which can schedule a task set of utilization k on k processors. The existence of such policies is well-known
in real-time systems community: RMS (with harmonic periods), EDF and LLF for uniprocessor
systems, and in general, any scheduling policy which can fully utilize a multiprocessor system. We have
also presented polynomial-time algorithms for computing the optimal service times. We believe that
these efficient algorithms can be also used in other concave resource allocation/QoS problems such as
the one addressed in [26].
We underline that besides clear and observable reward improvement over previously proposed sub-optimal
policies, our approach has the advantage of not requiring any runtime overhead for maximizing
the reward of the system and for constantly monitoring the timeliness of mandatory parts. Once optimal
optional service times are determined statically by our algorithm, an existing (e.g., EDF) scheduler does
not need to be modified or to be aware of mandatory/optional semantic distinction at all. In our opinion,
this is another major benefit of having pre-computed and optimal equal service times for a given task's
invocations in reward-based scheduling.
In addition, Theorem 1 implies that as long as we are concerned with linear and concave reward
functions, the resource allocation can be also made in terms of utilization of tasks without sacrificing
optimality. In our opinion, this fact points to an interesting convergence of instance-based [7, 21] and
utilization-based [26] models for the most common reward functions.
About the tractability issues regarding the nature of reward functions, the case of step functions
was already proven to be NP-Complete ([21]). By efficiently solving the case of concave and linear
reward functions and proving that the case of convex reward functions is NP-Hard, efficient solvability
boundaries in (periodic or aperiodic) reward-based scheduling have been reached by our work (assuming P ≠ NP).
Finally, we have provided examples to show that the theorem about the optimality of identical
service times per instance no longer holds, if we relax some fundamental assumptions such as the
deadline/period equality and the availability of the dynamic priority scheduling policies. Considering
dynamic aperiodic task arrivals and investigating good approximation algorithms for intractable cases
such as step functions and error cumulative jobs can be major avenues for future work on reward-based scheduling.
--R
A Polynomial-time Algorithm to solve Reward-Based Scheduling Problem
Fairness in periodic real-time scheduling
Solving time-dependent planning problems
Scalable Video Coding using 3-D Subband Velocity Coding and Multi-Rate Quantization
Scalable Video Data Placement on Parallel Disk data arrays.
Scheduling periodic jobs that allow imprecise results.
Algorithms for Scheduling Real-Time Tasks with Input Error and End-to-End Deadlines
An extended imprecise computation model for time-constrained speech processing and generation
A dynamic priority assignment technique for streams with (m
Architectural foundations for real-time performance in intelligent agents
Reasoning under varying and uncertain resource constraints
Efficient On-Line Processor Scheduling for a Class of IRIS (Increasing Reward with Increasing Service) Real-Time Tasks
Algorithms and Complexity for Overloaded Systems that Allow Skips.
Imprecise Results: Utilizing partial computations in real-time systems
Scheduling Algorithms for Multiprogramming in Hard Real-time Environment
Algorithms for scheduling imprecise computations.
Linear and Nonlinear Programming
Scheduling Algorithms for Fault-Tolerance in Hard-Real-Time Systems
Fundamental Design Problems of Distributed systems for the Hard Real-Time Environment
Classical Optimization: Foundations and Extensions
A Resource Allocation Model for QoS Management.
Algorithms for scheduling imprecise computations to minimize total error.
Image Transfer: An end-to-end design
Producing monotonically improving approximate answers to relational algebra queries.
Anytime Sensing
--TR
--CTR
R. M. Santos , J. Urriza , J. Santos , J. Orozco, New methods for redistributing slack time in real-time systems: applications and comparative evaluations, Journal of Systems and Software, v.69 n.1-2, p.115-128, 01 January 2004
Hakan Aydin , Rami Melhem , Daniel Moss , Pedro Meja-Alvarez, Power-Aware Scheduling for Periodic Real-Time Tasks, IEEE Transactions on Computers, v.53 n.5, p.584-600, May 2004
Shaoxiong Hua , Gang Qu , Shuvra S. Bhattacharyya, Energy reduction techniques for multimedia applications with tolerance to deadline misses, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Xiliang Zhong , Cheng-Zhong Xu, Frequency-aware energy optimization for real-time periodic and aperiodic tasks, ACM SIGPLAN Notices, v.42 n.7, July 2007
Melhem , Nevine AbouGhazaleh , Hakan Aydin , Daniel Moss, Power management points in power-aware real-time systems, Power aware computing, Kluwer Academic Publishers, Norwell, MA, 2002
Melhem , Daniel Moss, Maximizing rewards for real-time applications with energy constraints, ACM Transactions on Embedded Computing Systems (TECS), v.2 n.4, p.537-559, November
Jeffrey A. Barnett, Dynamic Task-Level Voltage Scheduling Optimizations, IEEE Transactions on Computers, v.54 n.5, p.508-520, May 2005
Lui Sha , Tarek Abdelzaher , Karl-Erik rzn , Anton Cervin , Theodore Baker , Alan Burns , Giorgio Buttazzo , Marco Caccamo , John Lehoczky , Aloysius K. Mok, Real Time Scheduling Theory: A Historical Perspective, Real-Time Systems, v.28 n.2-3, p.101-155, November-December 2004 | deadline scheduling;real-time systems;reward maximization;periodic task scheduling;imprecise computation |
365430 | Optimal covering tours with turn costs. | We give the first algorithmic study of a class of covering tour problems related to the geometric Traveling Salesman Problem: Find a polygonal tour for a cutter so that it sweeps out a specified region (pocket), in order to minimize a cost that depends not only on the length of the tour but also on the number of turns. These problems arise naturally in manufacturing applications of computational geometry to automatic tool path generation and automatic inspection systems, as well as arc routing (postman) problems with turn penalties. We prove lower bounds (NP-completeness of minimum-turn milling) and give efficient approximation algorithms for several natural versions of the problem, including a polynomial-time approximation scheme based on a novel adaptation of the m-guillotine method. | Introduction
An important algorithmic problem in manufacturing is to
compute effective paths and tours for covering ("milling")
a given region ("pocket") with a cutting tool: Find a path
or tour along which to move a prescribed cutter in order
that the sweep of the cutter (exactly) covers the re-
gion, removing all of the material from the pocket, while
not "gouging" the material that lies outside of the pocket.
This covering tour problem and its variants arise not only
in NC machining applications but also in several other
applications, including automatic inspection, spray paint-
ing/coating operations, robotic exploration, arc routing,
and even mathematical origami. While we will often
speak of the problem as "milling" with a "cutter", many of
its important applications arise in various contexts outside
of machining.
Department of Applied Mathematics and Statistics, State University of New York, Stony Brook, NY 11794-3600, {estie, jsbm}@ams.sunysb.edu.
† Department of Computer Science, State University of New York, Stony Brook, NY 11794-4400, {bender, saurabh}@cs.sunysb.edu.
‡ Department of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada, eddemaine@uwaterloo.ca.
§ Fachbereich Mathematik, Technische Universität Berlin, 10623 Berlin, Germany, fekete@math.tu-berlin.de.
The majority of research on these geometric covering
tour problems as well as on the underlying arc routing
problems in networks has focused on cost functions based
on the lengths of edges. However, in many actual routing
problems, this cost is dominated by the cost of switching
paths or direction at a junction. A drastic example is
given by fiber-optical networks, where the time to follow
an edge is negligible compared to the cost of changing
to a different frequency at a router. In the context of NC
machining, turns represent an important component of the
objective function, as the cutter may have to be slowed in
anticipation of a turn. The number of turns (or the "link
length") of a tour arises naturally as an objective function in
robotic exploration (minimum-link watchman tours) and
in various arc routing problems (snow plowing or street
sweeping with turn penalties).
In this paper, we address the problem of minimizing
the cost of turns in a covering tour. This important aspect
of the problem has been left unexplored so far in the
algorithmic community; the arc routing community has
examined only heuristics or exact algorithms that do not
have performance guarantees. We provide several new
results:
(1) We prove that the covering tour problem with turn
costs is NP-complete, even if the objective is purely
to minimize the number of turns, the pocket is orthogonal
(rectilinear), and the cutter must move axis-
parallel. The hardness of the problem is not apparent,
as our problem seemingly bears a close resemblance
to the polynomially-solvable Chinese postman prob-
lem; see the discussion below.
(2) We provide a variety of constant-factor approximation
algorithms that efficiently compute covering
tours that are nearly optimal with respect to turn costs
in various versions of the problem. While getting
some O(1)-approximation is not difficult for most
problems in this class, through a careful study of the
structure of the problem, we have developed tools
and techniques that enable significantly stronger approximation
results. One of our main results is a
3.75-approximation for minimum-turn axis-parallel
tours for a unit square cutter that covers an integral
orthogonal polygon (with holes). Another main
result gives a 4/3-approximation for minimum-turn
tours in a "thin" pocket, as arises in the arc routing
version of our problem.
Table 1 summarizes our various results.
Milling problem                  Cycle Cover APX   Tour APX   Simultaneous Length APX   Maximum Coverage
Restricted-direction geometric   5d                5d
Orthogonal                       2.5               3.75       4                         4
Integral orthogonal              4                 6          4                         4
Orthogonal thin                  1                 4/3        4                         4
Table 1: Approximation factors achieved by our algorithms. (See Section 2 for the definitions of the graph parameters and of d.)
(3) We devise a polynomial-time approximation scheme
(PTAS) for the covering tour problem in which the
cost is given as a weighted combination of length
and number of turns; e.g., the Euclidean length plus a
constant C times the number of turns. For a polygon
with h holes, the running time is O(2^h N^{O(C)}).
The PTAS involves an extension of the m-guillotine
method, which has previously been applied to obtain
PTAS's in problems involving only length.
Related Work. In the CAD community, there is a vast
literature on the subject of automatic tool path genera-
tion; we refer the reader to Held [21] for a survey and
for applications of computational geometry to the prob-
lem. The algorithmic study of the problem has focussed
on the problem of minimizing the length of a milling
tour: Arkin et al. [5, 6] show that the problem is NP-hard
in general. Constant-factor approximation algorithms
are given in [5, 6, 23], with the best current factor
being a 2.5-approximation for min-length milling (11/5-
approximation for orthogonal simple polygons). For the
closely related lawn mowing problem (also known as the
"traveling cameraman problem" [23]), in which the covering
tour is not constrained to stay within P , the best
current approximation factor is 3 + ε (utilizing PTAS
results for TSP). Also closely related is the watchman
route problem with limited visibility (or "d-sweeper prob-
lem"), as studied by Ntafos [31], who provides a 4/3-
approximation, which is improved to a 6/5-approximation
by [6]. The problem is also closely related to the Hamiltonicity
problem in grid graphs; the recent results of [32]
suggest that in simple polygons, minimum-length milling
may in fact have a polynomial-time algorithm.
Covering tour problems are related to watchman
route problems in polygons, which have had considerable
study in terms of both exact algorithms (for the simple
polygon case) and approximation algorithms (in gen-
see [29] for a recent survey. Most relevant to our
problem is the prior work on minimum-link watchman
tours: see [2, 3, 8] for hardness and approximation re-
sults, and [14, 25] for combinatorial bounds. However, in
these problems the watchman is assumed to see arbitrarily
far, making them distinct from our tour cover problems.
Other algorithmic results on milling include a recent
study of multiple tool milling by Arya, Cheng, and
Mount [9], who give an approximation algorithm for
minimum-length tours that use different size cutters, and
a recent paper of Arkin et al. [7], who examine the problem
of minimizing the number of retractions for "zig-zag"
machining without "re-milling", showing that the problem
is NP-complete and giving an O(1)-approximation algorithm
Geometric tour problems with turn costs have been
studied by Aggarwal et al. [1], who prove NP-complete
the angular-metric TSP, in which one is to compute a
tour on a set of points in order to minimize the sum
of the direction changes at each vertex. Fekete [17]
and Fekete and Woeginger [18] have studied a variety of
angle-restricted tour (ART) problems.
In the operations research literature, there has been an
extensive literature on arc routing problems, which arise
in snow removal, street cleaning, road gritting, trash col-
lection, meter reading, mail delivery, etc.; see the surveys
of [10, 15, 16]. Arc routing with turn costs has
had considerable attention recently, as it enables a more
accurate modeling of the true routing costs in many sit-
uations. Most recently, Clossey et al. [13] present six
heuristic methods of attacking arc routing with turn penal-
ties, without resorting to the usual transformation to a TSP
problem; however, their results are purely based on experiments
and provide no provable performance guarantees.
The directed postman problem with turn penalties has
been studied recently by Benavent and Soler [11], who
prove the problem to be (strongly) NP-hard and provide
heuristics (without performance guarantees) and computational
results. (See also Soler's thesis [19] and [30] for
computational experience with worst-case exponential-time
exact methods.)
Our covering tour problem is related to the Chinese
postman problem, which is readily solved exactly in
polynomial time. However, the turn-weighted Chinese
postman problem is readily seen to be NP-complete:
Hamiltonian cycle in line graphs is NP-complete (contrary
to what is reported in [20]; see page 246, West [33]),
implying that TSP in line graphs is also NP-complete. The
Chinese postman problem on graph G with turn costs at
nodes (and zero costs on edges) is equivalent to TSP on
the corresponding line graph, L(G), where the cost of an
edge in L(G) is given by the corresponding turn cost in
G. Thus, the turn-weighted Chinese postman problem is
also NP-complete.
Summary of Results. As we show in Section 3, all of
the variants of our problem mentioned so far are NP-
thus, our main interest is in approximation al-
gorithms. It turns out that all of these problems have
essentially constant-factor approximations. Table 1 summarizes
our best approximation factors for each problem.
The term "coverage" indicates the number of times a point
is visited, which is of interest in several practical applica-
tions. This parameter also provides an upper bound on the
total length.
Preliminaries
Problem Definitions. The general geometric milling
problem is to find a closed curve (not necessarily sim-
ple) whose Minkowski sum with a given tool (cutter) is
precisely a given region (pocket), P . Subject to this con-
straint, we may wish to optimize a variety of objective
functions, such as the length of the tour, or the number of
turns in the tour. We call these problems minimum-length
and minimum-turn milling, respectively. While the latter
problem is the main focus of this paper, we are also interested
in bicriteria versions of the problem in which both
length and number of turns must be small, or some linear
combination of the two (see Section 5.8).
In addition to choices in the objective function, the
problem version depends on the constraints on the tour.
In the orthogonal milling problem, the region P is an
orthogonal polygonal domain (with holes) and the tool is
an (axis-parallel) unit-square cutter constrained to axis-parallel
motion, with links of the tour alternating between
horizontal and vertical. All turns are 90 ; a "U-turn" has
cost of 2.
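The turn-cost convention just described is easy to make precise in code. The following Python helper (our own illustration, not part of the paper) counts the turns of a closed axis-parallel walk given as a cyclic list of unit-square (pixel) positions, charging 1 per 90-degree turn and 2 per U-turn.

def count_turns(walk):
    # walk: cyclic list of pixel positions (x, y); consecutive pixels are adjacent.
    n = len(walk)
    dirs = []
    for i in range(n):
        (x0, y0), (x1, y1) = walk[i], walk[(i + 1) % n]
        dirs.append((x1 - x0, y1 - y0))
    turns = 0
    for i in range(n):
        d_in, d_out = dirs[i - 1], dirs[i]      # directions entering/leaving pixel i
        if d_in == d_out:
            continue                            # going straight: no turn
        if (d_in[0] + d_out[0], d_in[1] + d_out[1]) == (0, 0):
            turns += 2                          # U-turn costs 2
        else:
            turns += 1                          # ordinary 90-degree turn
    return turns

print(count_turns([(0, 0), (1, 0), (1, 1), (0, 1)]))   # a unit-square tour has 4 turns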
Instead of dealing directly with a geometric milling
problem, we often find it helpful to consider a more
combinatorial problem, and then adapt the solution back
to the geometric problem. The integral orthogonal milling
problem is a specialization of the orthogonal milling
problem in which the region P has integral vertices. In
this case, an optimal tour can be assumed to have its
vertex coordinates of the form k
milling in an integral orthogonal polygon (with holes) is
equivalent to finding a tour of all the vertices ("pixels") of
a grid graph; see Figure 1.
A more general combinatorial model than integral
orthogonal milling is the discrete milling problem, in
which we discretize the set of possible links into a finite
collection of "channels" which are connected together at
Figure 1: An instance of the integral orthogonal milling
problem (left), and the grid graph model (right).
"vertices." More precisely, the channels have unit "width"
so that there is only one way to traverse them with the
given unit-size tool. At a vertex, the tour has a choice
of (1) turning onto another channel connected at that end
(costing one turn), (2) going straight if there is an incident
channel collinear with the source channel (costing no
turns), or (3) "U-turning" back onto the source edge
(costing two turns). Hence, this problem can be modeled
by a graph with certain pairs of incident edges marked as
"collinear," in such a way that the set of collinear pairs
at each vertex is a (not necessarily perfect) matching.
The discrete milling problem is to find a tour in such
a graph that visits every vertex. (The vertices represent
the "pixels" to be covered.) Integral orthogonal milling
is the special case of discrete milling in a grid graph.
Let (resp., ) denote the average (resp., maximum)
degree of a vertex and let denote the average number
of distinct "directions" coming together at a vertex, that
is, the average over each vertex of the cardinality of the
matching plus the number of unmatched edges at that
vertex.
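A minimal data structure for this discrete model, with our own (hypothetical) naming, can make the turn-cost rules explicit: an undirected graph plus, at each vertex, a matching of incident edges marked as collinear.

class DiscreteMill:
    # edges: iterable of frozenset({u, v}); collinear: dict mapping a vertex
    # to a set of frozenset({e1, e2}) pairs of incident edges marked collinear.
    def __init__(self, edges, collinear):
        self.adj = {}
        for e in edges:
            for v in e:
                self.adj.setdefault(v, set()).add(e)
        self.collinear = collinear

    def turn_cost(self, v, e_in, e_out):
        # Cost of passing through vertex v from channel e_in to channel e_out.
        if e_in == e_out:
            return 2                                      # U-turn back onto the same channel
        if frozenset({e_in, e_out}) in self.collinear.get(v, set()):
            return 0                                      # collinear channels: going straight
        return 1                                          # ordinary turn

# A '+'-junction: four unit channels meet at vertex 0; opposite channels are collinear.
e = {d: frozenset({0, d}) for d in "NSEW"}
mill = DiscreteMill(e.values(), {0: {frozenset({e["N"], e["S"]}),
                                     frozenset({e["E"], e["W"]})}})
print(mill.turn_cost(0, e["N"], e["S"]),    # 0
      mill.turn_cost(0, e["N"], e["E"]),    # 1
      mill.turn_cost(0, e["N"], e["N"]))    # 2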
The thin milling problem is to find a tour in such a
graph that traverses every edge (and visits every vertex).
Thus, the minimum-length version of thin milling is
exactly the Chinese postman problem. As we have already
noted, the minimum-turn version is NP-complete. The
orthogonal thin milling problem is the special case in
which the graph comes from an instance of orthogonal
milling.
A generalization of orthogonal milling is the geometric
milling problem with a constant number d of allowed
directions, which we call restricted-direction geometric
milling. In particular, the region P can only have edges
with the d allowed directions. This problem is not a sub-problem
of discrete milling, since it does not decompose
into a collection of nonoverlapping "vertices;" however, it
turns out that the same results apply.
Other Issues. It should be stressed that using turn cost
instead of (or in addition to) edge length changes several
characteristics of distances. One fundamental problem
is illustrated by the example in Figure 2: the triangle
inequality does not have to hold when using turn cost.
This implies that many classical algorithmic approaches
for graphs with nonnegative edge weights (such as using
optimal 2-factors or the Christofides method for the TSP)
cannot be applied without developing additional tools.
Figure 2: The triangle inequality may not hold when using turn cost as distance measure: d(a, c) > d(a, b) + d(b, c).
In fact, in the presence of turn costs we distinguish
between the terms 2-factor and cycle cover. While the
terms are interchangeable when referring to the set of
edges that they constitute, we make a distinction between
their respective costs: a "2-factor" has a cost consisting of
the sum of edge costs, while the cost of a "cycle cover"
includes also the turn costs at vertices.
It is often useful in designing approximation algorithms
for optimal tours to begin with the problem of
computing an optimal cycle cover, minimizing the total
number of turns in a set of cycles that covers P . Specif-
ically, we can decompose the problem of finding an optimal
tour into two tasks: finding an optimal
cycle cover, and merging the components. Of course,
these two processes may influence each other: there may
be several optimal cycle covers, some of which are easier
to merge than others. (In particular, we say that a cycle
cover is connected, if the graph induced by the set
of cycles and their intersections is connected.) As we
will show, even the problem of optimally merging a connected
cycle cover is NP-complete. This is in contrast to
minimum-length milling, where an optimal connected cycle
cover can trivially be converted into an optimal tour
that has the same cost.
Another important issue is the encoding of the input
and output. In integral orthogonal milling, one might
think that it is most natural to encode the grid graph,
since the tour will be embedded on this graph and will,
in general, have complexity proportional to the number of
pixels. But the input to any geometric milling problem
has a natural encoding by specifying the vertices of the
polygon P . In particular, long edges are encoded in
binary (or with one real number, depending on the model)
instead of unary. It is possible to get a running time
depending only on this size, but of course we need to
allow for the output to be encoded implicitly. That is, we
cannot explicitly encode each vertex of the tour, because
there are too many (it can be arbitrarily large even for a
succinctly encodable rectangle). Instead, we encode an
abstract description of the tour that is easily decoded.
Algorithms whose running time is polynomial in
the explicit encoding size (pixel count) are pseudo-poly-
nomial. Algorithms whose running time is polynomial
in the implicit encoding size are polynomial. For our
purposes it will not matter whether lengths are encoded
with a single real number or in binary.
Finally, we mention that many of our results carry
over from the tour (or cycle) version to the path version,
in which the cutter need not return to its original position.
We omit discussion here of the changes necessary to
compute optimal paths. We also omit in this abstract
discussions of how our results apply also to the case of
lawn mowing, in which the sweep of the cutter is allowed
to go outside P during its motion.
With so many problems of interest, we specify in
every lemma, theorem, and corollary to which class of
problems it applies. The default subproblem is to find a
tour; if this is not the case (e.g., it is to find a cycle cover),
we state it explicitly.
3 NP-Completeness
Arkin et al. [6] have proved that the problem of optimizing
the length of a milling is NP-hard. This implies that
it is NP-hard to find a tour of minimum total length that
visits all vertices. If, on the other hand, we are given a
connected cycle cover of a graph that has minimum total
length, then it is trivial to convert it into a tour of the same
length by merging the cycles into one tour.
In this section we show that if the quality of a tour
is measured by counting turns, then even this last step of
turning an optimal connected cycle cover into an optimal
tour is NP-complete. This implies NP-hardness of finding
a milling tour that optimizes the number of turns for a
polygon with holes.
THEOREM 3.1. Minimum-turn milling is NP-complete,
even when we are restricted to the orthogonal thin case,
and assume that we know an optimal (minimum-turn)
connected cycle cover.
See the full version [4] of this paper for proofs of
this and most other theorems and lemmas. Because
orthogonal thin milling is a special case of thin milling
as well as orthogonal milling, and it is easy to convert
an instance of thin orthogonal milling into an instance of
integral orthogonal milling, we have
COROLLARY 3.1. Discrete milling, restricted-direction
geometric milling, orthogonal milling, and integral orthogonal
milling are NP-complete.
Approximation Tools
There are three main tools that we use to develop approximation
algorithms: computing optimal cycle covers for
milling the "boundary" of P (Section 4.1), converting cycle
covers into tours (Section 4.2), and utilizing optimal
(or nearly-optimal) "strip covers" (Section 4.3).
4.1 Boundary Cycle Covers
We consider first the problem of finding a minimum-turn
cycle cover for covering a certain subset, P , of P that is
along its boundary. Specifically, in orthogonal milling
we define the boundary links to be orthogonal offsets,
towards the interior of P , by 0:5 of each boundary edge.
(In the nonorthogonal case, the notion of boundary link
can be generalized; we defer the details to the full paper.)
The region P is defined, then, to be the Minkowski sum
of the boundary links and the tool, and we say that a cycle
cover or tour mills the boundary if it covers P . (P is
the union of pixels having at least one edge against the
boundary of P .) We exploit the property that a cycle
cover that mills the boundary (or a cycle cover that mills
the entire region) can be assumed to include the boundary
links in their entirety (without turns):
LEMMA 4.1. Any cycle cover that mills the boundary can
be converted into one that includes each boundary link as
a portion of a link, without changing the number of turns.
Figure 3: By performing local modifications, an optimal
cycle cover can be assumed to cover each piece of the
boundary in one connected link.
This property allows us to apply methods similar to
those used in solving the Chinese postman problem: we
know portions of links that must be in the cycle cover, and
furthermore these links mill the boundary. What remains
is to connect these links into cycles, while minimizing the
number of additional turns.
We can compute the "turn distance" (which is one
less than the link distance) between an endpoint of one
boundary link and an endpoint of another boundary link.
The crucial knowledge that we are using is the orientation
of the boundary links, so, for example, we correctly
compute that the turn distance is zero when two boundary
links are collinear. Now we can find a minimum-weight
perfect matching in the complete graph on boundary-
link endpoints, with each edge weighted according to the
corresponding turn distance. This connects the boundary
links optimally into a set of cycles. This proves
THEOREM 4.1. A min-turn cycle cover of the boundary
of a region can be computed in polynomial time.
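The connection step of Theorem 4.1 reduces to a minimum-weight perfect matching on the boundary-link endpoints under the turn-distance weights. The Python sketch below (our own illustration) assumes the turn-distance matrix w between oriented endpoints has already been computed, and, for brevity, uses an exponential-time bitmask dynamic program instead of a polynomial-time matching routine.

from functools import lru_cache

def min_weight_perfect_matching(w):
    # w: symmetric cost matrix, w[i][j] = turn distance between endpoints i and j.
    n = len(w)
    assert n % 2 == 0
    @lru_cache(maxsize=None)
    def best(mask):
        if mask == (1 << n) - 1:
            return 0, ()
        i = next(k for k in range(n) if not mask & (1 << k))   # first unmatched endpoint
        options = []
        for j in range(i + 1, n):
            if not mask & (1 << j):
                cost, pairs = best(mask | (1 << i) | (1 << j))
                options.append((w[i][j] + cost, ((i, j),) + pairs))
        return min(options)
    return best(0)

# Four endpoints with hypothetical turn distances.
w = ((0, 3, 1, 2),
     (3, 0, 2, 1),
     (1, 2, 0, 3),
     (2, 1, 3, 0))
print(min_weight_perfect_matching(w))   # (2, ((0, 2), (1, 3)))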
Remark. Note that the definition of the "boundary"
region P used here does not include all pixels that touch
the boundary of P ; in particular, it omits the "reflex
pixels" that share a corner, but no edge, with the boundary
of P . It seems difficult to require that the cycle cover
mill reflex pixels, since Lemma 4.1 does not extend to
this case, and an optimal cycle cover of the boundary
(as defined above) may have fewer turns than an optimal
cycle cover that mills the boundary P plus the reflex
pixels; see Figure 4.
Figure 4: Optimally covering pixels that have an edge
against the boundary can leave reflex pixels uncovered.
4.2 Merging Cycles
It is often easier to find a minimum-turn cycle cover
(or constant-factor approximation thereof) than to find
a minimum-turn tour. Here, we show that an exact
or approximate minimum-turn cycle cover implies an
approximation for minimum-turn tours.
THEOREM 4.2. A cycle cover with t turns can be converted
into a tour with at most t + 2c turns, where c is the
number of cycles.
COROLLARY 4.1. A cycle cover of a connected rectilinear
polygon with t turns can be converted into a single
milling tour with at most (3/2) t turns.
Proof: Follows immediately from Theorem 4.2 and the
fact that each cycle has at least four turns. 2
COROLLARY 4.2. If we could find an optimal cycle cover
in polynomial time, we would have a 3-approximation
algorithm for the number of turns.
Unfortunately, general merging is difficult (as illustrated
by the NP-hardness proof), so we cannot hope to
improve these general merging results by more than a constant
factor.
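The merging argument behind Theorem 4.2 can be illustrated with a small Python sketch (our own, for a connected cycle cover over grid cells): two closed walks sharing a cell are spliced at that cell, which, with a suitable choice of splice orientation, introduces at most two extra 90-degree turns per merge; for brevity the sketch always splices in one fixed orientation.

def merge_cycle_cover(cycles):
    # cycles: list of closed walks, each a list of grid cells (tuples);
    # assumes the cover is connected (some pair of cycles shares a cell).
    walks = [list(c) for c in cycles]
    merged = walks.pop()
    while walks:
        for k, other in enumerate(walks):
            common = set(merged) & set(other)
            if common:
                v = next(iter(common))
                i, j = merged.index(v), other.index(v)
                # splice 'other' into 'merged' at the shared cell v (orientation not optimized)
                merged = merged[:i + 1] + other[j + 1:] + other[:j + 1] + merged[i + 1:]
                walks.pop(k)
                break
        else:
            raise ValueError("cycle cover is not connected")
    return merged

# Two unit-square cycles sharing the cells (1,0) and (1,1); hypothetical input.
a = [(0, 0), (1, 0), (1, 1), (0, 1)]
b = [(1, 0), (2, 0), (2, 1), (1, 1)]
print(merge_cycle_cover([a, b]))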
4.3 Strip and Star Covers
A key tool for approximation algorithms is a covering of
the region by a collection of "strips." In general, a strip
is a maximal link whose Minkowski sum with the tool
is contained in the region. A strip cover is a collection
of strips whose Minkowski sums with the tool cover the
region. A minimum strip cover is a strip cover with
the fewest strips.
LEMMA 4.2. The size of a minimum strip cover is a
lower bound on the number of turns in a cycle cover (or
tour) of the region.
In the discrete milling problem, a related notion is a
"queen placement." A queen is just a vertex, which can
attack every vertex to which it is connected via a single
link. A queen placement is a collection of queens no two
of which can attack each other.
LEMMA 4.3. The size of a maximum queen placement is
a lower bound on the number of turns in a cycle cover (or
tour) for discrete milling.
In the integral orthogonal milling problem, the notions
of strip cover and queen placement are dual, and efficient
to compute:
LEMMA 4.4. For integral orthogonal milling, a minimum
strip cover and a maximum queen (rook) placement
have equal size, and furthermore can be computed in time
O(n^{2.5}).
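A compact Python sketch of the idea behind Lemma 4.4 (our own illustration): each pixel is an edge between its maximal horizontal strip and its maximal vertical strip, and a maximum bipartite matching yields a maximum rook placement whose size, by König's theorem, equals the minimum strip cover. For brevity we use simple augmenting paths rather than the faster routine behind the lemma's bound.

def rook_placement(cells):
    cells = set(cells)
    def strip_id(c, step):                        # canonical first cell of the strip
        while (c[0] - step[0], c[1] - step[1]) in cells:
            c = (c[0] - step[0], c[1] - step[1])
        return c
    hs = {c: strip_id(c, (1, 0)) for c in cells}  # horizontal strip of each pixel
    vs = {c: strip_id(c, (0, 1)) for c in cells}  # vertical strip of each pixel
    adj = {}
    for c in cells:                               # pixel c = edge (h-strip, v-strip)
        adj.setdefault(hs[c], []).append((vs[c], c))
    match = {}                                    # v-strip -> (h-strip, pixel)
    def augment(h, seen):
        for v, c in adj[h]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v][0], seen):
                    match[v] = (h, c)
                    return True
        return False
    for h in adj:
        augment(h, set())
    return [c for _, c in match.values()]         # rooks: no two share a strip

pocket = {(x, y) for x in range(3) for y in range(2)} | {(0, 2)}
print(len(rook_placement(pocket)))                # 3 = size of a minimum strip cover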
For general discrete milling, it is possible to approximate
an optimal strip cover as follows. Greedily place
queens until you cannot place anymore, in other words,
until there is no unattackable vertex. This means that every
vertex is attackable by some queen, so by replacing
each queen with all possible strips through that vertex, we
obtain a strip cover of size times the number of queens.
(We call this type of strip cover a star cover.) But each
strip in a minimum strip cover can only cover a single
queen, so this is a -approximation to the minimum strip
cover. We have thus proved
LEMMA 4.5. In discrete milling, the number of stars in
a greedy star cover is a lower bound on the number of
strips, and hence serves as an -approximation algorithm
for minimum strip covers.
LEMMA 4.6. A greedy star cover can be found in linear
time.
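For comparison, the greedy star cover of Lemmas 4.5 and 4.6 is even simpler; the Python sketch below (again a toy version of our own, for the grid case) repeatedly places a queen on an unattacked pixel and marks every pixel it can attack along its two maximal strips.

def greedy_star_cover(cells):
    cells = set(cells)
    attacked, queens = set(), []
    for c in sorted(cells):                  # any order works; sorted only for determinism
        if c in attacked:
            continue
        queens.append(c)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = c
            while (x, y) in cells:           # sweep the maximal strip in this direction
                attacked.add((x, y))
                x, y = x + dx, y + dy
    return queens

pocket = {(x, y) for x in range(3) for y in range(2)} | {(0, 2)}
print(greedy_star_cover(pocket))             # the queen count lower-bounds the turns of any tour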
5 Approximation Algorithms
5.1 Discrete Milling
Our most general approximation algorithm for the discrete
milling problem has the additional feature of running
in linear time. First we take a star cover according
to Lemma 4.5 which approximates an optimal strip cover
within a factor of . Then we tour the stars using an efficient
method described below. Finally we merge these
tours using Theorem 4.2.
THEOREM 5.1. For min-turn cycle covers in discrete
milling, there is a linear-time (
Furthermore, the maximum coverage of a vertex (i.e., the
maximum number of times a vertex is swept) is , and the
cycle cover is an -approximation on length.
COROLLARY 5.1. For min-turn discrete milling, there is
a linear-time
the maximum coverage of a vertex is , and the tour is an
-approximation on length.
Note that, in particular, these algorithms give a 6-
approximation on length if the discrete milling problem
comes from a planar graph, since the average degree of a
planar graph is bounded by 6.
COROLLARY 5.2. For minimum-turn integral orthogonal
milling, there is a linear-time 12-approximation that
covers each pixel at most 4 times and hence is also a 4-
approximation on length.
Proof: In this case,
5.2 Restricted-Direction Geometric Milling
As we already mentioned above, restricted-direction geometric
milling is not a special case of discrete milling, but
the same method and result applies:
THEOREM 5.2. In restricted-direction geometric milling,
there is a 5d-approximation on minimum-turn cycle covers
that is linear in the number N of pixels, and a (5d+2)-
approximation on minimum-turn tours of same complex-
ity. In both cases, the maximum coverage of a point is at
most 2d, so the algorithms are also 2d-approximations on
length.
Proof: Lemma 4.6 still holds, and each strip in a star (as
described in the previous section) will be a full strip. The
claim follows. 2
Note that this approximation algorithm applies to geometric
milling problems in arbitrary dimensions provided
that the number of directions is bounded, e.g., or-
thohedral milling.
As mentioned in the preliminaries, just the pixel
count N may not be a satisfactory measure for the complexity
of an algorithm, as the original region may be encoded
more efficiently by its boundary, and a tour may be
encoded by structuring it into a small number of pieces
that have a short description. It is possible to use the
above ideas for approximation algorithms in this extended
framework. For simplicity, we describe how this can be
done for the integral orthogonal case, where the set of pixels
is bounded by n boundary edges.
THEOREM 5.3. There is a 10-approximation of (strongly
polynomial) complexity O(n log n) on minimum-turn cycle
cover for a region of pixels bounded by n integral axis-parallel
segments, and a 12-approximation on minimum-
turn tours of same complexity. In both cases, the maximum
coverage of a point is at most 4, so the algorithms
are also 4-approximations on length.
For the special case where the boundary is connected
(i.e., the region does not have any holes), the complexities
drop to O(n).
5.3 Integral Orthogonal
We have already shown a 12-approximation for min-turn
integral orthogonal milling, using a star cover, with a
running time of O(N) or O(n log n). If we are willing to
invest more time for computation, we can find an optimal
rook cover (instead of a greedy one). As discussed in the
proof of Lemma 4.4, this yields an optimal strip cover.
This can be used to get a 6-approximation, with a running
time of O(n^{1.5}).
THEOREM 5.4. There is an O(n^{2.5})-time algorithm that
computes a milling tour with number of turns within 6
times the optimal, and with length within 4 times the
optimal.
By more sophisticated merging procedures, it may be
possible to reduce this approximation factor to something
between 4 and 6. However, our best approximation
algorithm uses a different strategy.
THEOREM 5.5. There is an O(n^{2.5})-time 2.5-approximation
algorithm for minimum-turn cycle covers, and hence a
polynomial-time 3:75-approximation for minimum-turn
tours, for integral orthogonal milling.
Proof: As described in Lemma 4.4, find an optimal strip
cover S. Let s be its cardinality; then OPT ≥ s.
Now consider the end vertices of the strip cover. By
construction, they are part of the boundary. Any end point
of a strip is either crossed orthogonally, or the tour turns
at the boundary segment. In any case, a tour must have
a link that crosses an end vertex orthogonally to the strip.
(Note that this link has zero length in case of a u-turn.)
Next consider the following distance function between
end points of strips: For any pair of end points u
and v (possibly of different strips s u and s v ), let w(u; v)
be the smallest number of links from u to v when leaving
u in a direction orthogonal to s u , and arriving at v in
a direction orthogonal to s v . By a standard argument, an
optimal matching M satisfies w(M) OPT=2.
By construction, the edges of M and the strips of S
induce a 2-factor of the end points. Since any matching
edges leaves a strip orthogonally, we get at most 2 additional
turns at each strip for turning each 2-factor into a
cycle. The total number of turns is 2s+w(M) 2:5OPT.
Since the strips cover the whole region, we get a feasible
cycle cover.
Finally, we can use Corollary 4.1 to turn the cycle
cover into a tour. By the corollary, this tour does not have
more than 3.75OPT turns. 2
A simple class of examples in [4] shows that the
cycle cover algorithm may use 2OPT turns, and the tour
algorithm may use 3OPT turns, assuming that no special
algorithms are used for matching and merging. Moreover,
the same example shows that this 3:75-approximation
algorithm does not give an immediate length bound on the
resulting tour. However, we can use a local modification
argument to show the following:
THEOREM 5.6. For any given feasible tour of an integral
orthogonal region, there is a feasible tour of equal length
that covers each pixel at most four times. This implies a
performance ratio of 4 on the total length.
5.4 Nonintegral Orthogonal Polygons
Nonintegral orthogonal polygons present a difficulty in
that no polynomial-time algorithm is known to compute
a minimum strip cover for such polygons. Fortunately,
however, we can use the boundary tours from Section 4.1
to get a better approximation algorithm than the 12 from
Corollary 5.2.
THEOREM 5.7. In nonintegral orthogonal milling, there
is a polynomial-time 6:25-approximation for minimum-
turn cycle covers and 6-approximation for minimum-turn
tours. The running time is O(n 2:5 ).
Milling Thin Pockets
In this section we consider the special case of milling thin
pockets. Intuitively, a pocket is thin if it is composed of a
network of width-1 corridors, where each pixel is adjacent
to some part of the boundary of the region. A width-1
polygon is defined more formally as follows:
DEFINITION 1. An orthogonal polygon has width-1 if
no axis-aligned 2x2 square fits into the feasible region.
Equivalently, a width-1 polygon is such that each pixel
has all four of its corners on the boundary of the polygon.
A width-1 pocket has a natural graph representation,
described as follows. We associate vertices with some of
the squares that comprise the pocket. Specifically, a vertex
is associated with each square that is adjacent to more
than two squares or only one square. Squares that have
exactly two neighbors are converted into edges as follows:
Vertices u and v are connected by an edge iff there is
path in the pocket from u to v for which all other pixels
visited have two neighbors. The weight of edge (u; v) is
the number of turns in this path from u to v. In other
words: Pixels with one, three, or four neighbors can be
considered vertices of degree one, three, or four. Chains
(possibly of length zero) of adjacent pixels of degree two
can be considered edges between other vertices, if there
are any. (Clearly, the problem is trivial if there are no
pixels of degree three or higher; moreover, it is not hard
to see that all pixels adjacent to a pixel of degree four can
have at most degree two, and each pixel of degree three
must have at least one neighbor of degree not exceeding
two.) In the following, we will refer to this interpretation
whenever we speak of the "induced graph" of a region.
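The induced graph of a width-1 pocket can be built directly from the pixel set; the Python sketch below (our own, assuming the pocket contains at least one pixel of degree other than two) returns the vertices and the weighted edges, where an edge weight is the number of turns along the corresponding corridor.

def induced_graph(cells):
    cells = set(cells)
    def nbrs(c):
        x, y = c
        return [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)) if p in cells]
    verts = {c for c in cells if len(nbrs(c)) != 2}     # pixels of degree 1, 3 or 4
    edges, seen = [], set()
    for v in verts:
        for start in nbrs(v):
            chain = [v, start]
            while chain[-1] not in verts:               # follow the degree-2 corridor
                chain.append(next(p for p in nbrs(chain[-1]) if p != chain[-2]))
            key = frozenset([chain[0], chain[-1]]), frozenset(chain[1:-1])
            if key in seen:                             # each corridor is found from both ends
                continue
            seen.add(key)
            turns = sum(1 for a, b, c in zip(chain, chain[1:], chain[2:])
                        if (b[0] - a[0], b[1] - a[1]) != (c[0] - b[0], c[1] - b[1]))
            edges.append((chain[0], chain[-1], turns))
    return verts, edges

# A width-1 corridor with two bends between two degree-1 pixels; hypothetical input.
pocket = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (4, 2)}
print(induced_graph(pocket))                            # one edge of weight 2 (two turns)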
This interpretation illustrates that the problem is
closely related to the Chinese Postman Problem (CPP),
where the objective is to find a cheapest round trip in a
graph with nonnegative edge weights, such that all edges
are traversed. It is well-known that the CPP can be solved
optimally in polynomial time by finding a minimum-cost
matching of the odd-degree vertices in the graph. For thin
milling, however, this reduction does not work: While it
is possible to find a minimum-cost matching of the odd-
degree vertices, the cost of traversing the resulting Eulerian
graph is more than just the sum of edge weights, since
we may have to add a cost for turning at the vertices where
the matching is merged with the other edges in the graph.
As we noted in the introduction, this implies that triangle
inequality does not hold in general, and standard combinatorial
algorithms based on edge weights only may fail
to achieve a reasonable guaranteed performance. The difficulty
of having turn costs at vertices is also illustrated by
the proof in Section 3: Even in an Eulerian graph, where
all pixels have degree 2 or 4, it is NP-complete to find a
minimum cost roundtrip.
In this section we describe how refined combinatorial
arguments can achieve efficient constant-factor approxi-
mations. Without loss of generality, we assume that there
are no pixels of degree one: They force a doubled path to
and from them, which can be merged with any tour of the
remaining region at no extra cost.
We already know from Theorem 4.1 that a minimum-
turn cycle cover of a thin region can be found in polynomial
time, because any cycle cover of the boundary can
be turned into a cycle cover of the entire thin region. By
also applying Theorem 4.2, we immediately obtain
COROLLARY 5.3. In thin milling, there is a polynomial-time
algorithm for computing a min-turn cycle cover, and
a polynomial-time 1.5-approximation for min-turn tours.
More interesting is that we can do much better than
general merging in the case of thin milling. The idea is
to decompose the induced graph into a number of cheap
cycles, and a number of paths.
5.6 Milling Thin Eulerian Pockets We first solve the
special case of milling Eulerian pockets, that is, pocket
that can be milled without retractions, so that each edge
in the corresponding graph is traversed by the cutting tool
exactly once. In an Eulerian pocket, all nodes have either
two or four neighbors, and so all vertices in the graph have
exactly four neighbors.
Although one might expect that the optimal milling
is one of the possible Eulerian tours of the graph, in fact,
this is not always true; see the full version [4] for a class
of examples with this property.
However, we can achieve the following approximation
THEOREM 5.8. There is a linear-time algorithm that
finds a tour of cost at most (6/5) · OPT.
5.7 Milling Arbitrary Thin Pockets Now we consider
the case of general width-1 pockets, where vertices
may have degree one or three. (As above, we can continue
to disregard vertices of degree one.) For any odd-degree
vertex, and any feasible solution, some edges must be traversed
multiple times.
As motivation for this more general milling problem,
recall the solution to the Chinese Postman Problem. Find
a minimum-cost perfect matching in the complete graph
on the odd-degree vertices, where the weight of an edge is
the length of the shortest path between the two endpoints.
Double the edges along the paths chosen in the perfect
matching. (No edge will be doubled more than once by a
simple local-modification argument.) Now take an Euler
tour of this graph with doubled edges.
This idea can be applied to our turn-minimization
problem. Consider a degree three vertex, and for naming
convenience give it a canonical orientation of the letter
"T". We will need to visit and leave this vertex twice
in order to cover it; that is, two paths through the T are
required in order to mill it.
There are several possibilities, which depend on
which edge of the T is covered twice by the cutting tool.
(See Figure 5.)
Figure 5: Ways of covering a degree three node, which we
call a "T".
(a) If one of the top edges is milled twice then there is
only a single turn at the T. Said more concisely, if
the stalk is milled i times then exactly i turns are
required to mill the T.
If the stalk of the T is milled twice then there are two
ways to mill the T:
(b) One path mills the top of the T, and the other enters
on the stalk makes a "U" turn and exits the T on the
same path that it entered.
(c) Both paths mill the stalk and one of the top edges of
the T.
In both cases, if the stalk of the T is milled twice, then
there are 2 turns at the T.
We find a minimum weight cycle cover by having
as few turns at vertices as possible. Specifically, the
tool starts traveling along any edge and proceeds straight
through each intersection without making any additional
turns. Whenever the tool would retraverse an edge, a cycle
is obtained, and the tool continues the process setting out
from another edge. Whenever the tool enters a T from
the stalk it stops. Thus, we obtain a disjoint collection of
cycles and paths.
Now we connect the odd degree vertices together as
in a Chinese postman problem. Thus, we define the length
of a path between two degree three vertices to be the
number of turns along the path, plus a penalty of 1 for
each time the path ends at the bottom leg of a T.
Next, we find a minimum weight perfect matching on
paths that connect odd degree vertices. When we combine
the paths of the matching with the disjoint collection of
paths and cycles we obtain an Eulerian graph. We connect
the ends of the paths with the ends of the matching paths
to form cycles.
CLAIM 1. The above strategy yields a minimum cost
cycle cover.
Observe that this cycle cover is different from the cycle
cover in the Eulerian case because the cycles are not
disjoint sets of edges of the graphs. Some edges may
appear multiple times.
We now describe a 4/3-approximation algorithm for
connecting the cycles together.
1. Find an optimal cycle cover as described in Claim 1.
2. Repeat until there is only one cycle in the cycle
cover:
If there are any two cycles that can be merged
without any extra cost, perform the merge.
Find a vertex at which two cycles cross each
other.
Modify the vertex to incorporate at most two
additional turns, thereby connecting the two
cycles.
THEOREM 5.9. The algorithm described above finds a
tour of cost at most (4/3) · OPT.
An example (shown in the full version [4] of this
paper) proves that the estimate for the performance ratio
is tight.
5.8 PTAS
Here we outline a PTAS for the problem of minimizing
a weighted average of the two cost criteria: length and
number of turns. Our technique is based on using the theory
of m-guillotine subdivisions [28], properly extended
to handle turn costs. We prove the following result:
THEOREM 5.10. For any fixed ε > 0, there is a (1 + ε)-approximation algorithm with running time O(2^h N^{O(C)})
that computes a milling tour for an integral
orthogonal polygon P with h holes, where the cost of the
tour is its length plus C times the number of (90-degree)
turns, and N is the number of pixels in P.
Proof: (sketch) Let T be a minimum-cost tour. Following
the notation of [28], we first use the main structure
theorem to show that there exists an m-guillotine subdivi-
sion, obtained from T , with length at most (1
times the length of T . (Note that part of TG may lie
outside the pocket P , since we added m-spans to make
it m-guillotine.) We then convert TG into a new graph,
G , which has the added properties
(since we keep only those portions of the m-span that lie
inside P ), (2) the number of edges of T incident on each
component of the m-span is even (since T is a tour), and
(3) the total cost of T 0
G is at most (1
times the
cost of T . We then use dynamic programming to obtain
a min-cost (modified) m-guillotine subdivision, T
G ,
which has certain specified properties, including connect-
edness, coverage, "bridge-doubling", and an even number
of edges incident on each connected component of each
m-span (this is the source of the 2 h term in the running
time, as we need to be able to require a given parity at
each sub-bridge, in order to have a connected Eulerian
graph in the end, from which we can extract a tour). The
techniques of [27] can be applied to reduce the exponent
on N to a term independent of ". 2
We expect to be able to use the same methods to
obtain a PTAS that is polynomial in n (versus N ), with
a careful consideration of implicit encodings of tours. We
have not yet been able to give a PTAS for minimizing only
the number of turns in a covering tour; this remains an
intriguing open problem.
6 Conclusion
Many open problems remain:
1. Can we find a minimum-turn cycle cover in polynomial
time? This would immediately lead to a 1.5-
approximation for the orthogonal case, and a (1+ 2
approximation for the general case.
Note that finding a minimum-cost cycle cover for a
planar set of points was shown to be NP-complete by
Aggarwal et al. [1].
2. Is there a polynomial-time algorithm for exactly
computing a minimum-turn covering tour for simple
orthogonal polygons?
The related problem for cost corresponding to distances
is still open, but there is some evidence that it
is indeed polynomial.
3. Is the analysis of the 3.75-approximation algorithm
There may be some redundancy in the combination
of all the estimates; however, our example shows that
even one of the basic simplifications (considering
only maximal strips with endpoints on the boundary)
may lead to a factor 2 or 3, depending on how the
merging is done.
4. What is the complexity of computing a minimum
strip cover in nonintegral orthogonal polygons?
5. What is the complexity of computing minimum strip
covers in nonorthogonal polygons?
6. Is there a strip cover approximation algorithm for d
directions whose performance is independent of d?
7. Can one obtain approximation algorithms for unrestricted
directions in an arbitrary polygonal domain?
Acknowledgments
We thank Regina Estkowski for helpful discussions.
This research is partially supported by NSF grant CCR-
9732221, and by several grants from Bridgeport Ma-
chines, Hughes Research Labs, ISX Corporation, Sandia
National Labs, Seagull Technology, and Sun Microsystems
--R
The angular-metric traveling salesman problem
Minimal link visibility paths inside a simple polygon.
Finding an approximate minimum-link visibility path inside a simple polygon
Optimal covering tours with turn costs.
The lawnmower problem.
Approximation algorithms for lawn mowing and milling.
Optimization problems related to zigzag pocket machining.
Approximation algorithms for multiple-tool milling
Arc routing methods and applications.
The directed rural postman problem with turn penalties.
Triangulating a simple polygon in linear time.
Solving arc routing problems with turn penalties.
Improved lower bounds for the link length of rectilinear spanning paths in grids.
Arc routing problems
Arc routing problems
Geometry and the Travelling Salesman Problem.
Computers and In- tractability: A Guide to the Theory of NP-Completeness
On the Computational Geometry of Pocket Machining
Hamilton paths in grid graphs.
The traveling cameraman problem
Computational complexity and the traveling salesman problem.
Link length of rectilinear Hamiltonian tours on grids.
Combinatorial Optimization: Networks and Matroids.
Guillotine subdivisions approximate polygonal subdivisions: Part III - Faster polynomial-time approximation schemes for geometric network optimiza- tion
Guillotine subdivisions approximate polygonal subdivisions: A simple polynomial-time approximation scheme for geometric TSP
Geometric shortest paths and network optimization.
Watchman routes under limited visibility.
Hamiltonian cycles in solid grid graphs.
An Introduction to Graph Theory.
--TR
On the computational geometry of pocket machining
Triangulating a simple polygon in linear time
Watchman routes under limited visibility
Minimal link visibility paths inside a simple polygon
Finding an approximate minimum-link visibility path inside a simple polygon
Angle-restricted tours in the plane
Approximation algorithms for multiple-tool milling
Improved lower bounds for the link length of rectilinear spanning paths in grids
Guillotine Subdivisions Approximate Polygonal Subdivisions
The angular-metric traveling salesman problem
Approximation algorithms for lawn mowing and milling
Computers and Intractability
The Traveling Cameraman Problem, with Applications to Automatic Optical Inspection
The Directed Rural Postman Problem with Turn Penalties
Hamiltonian Cycles in Solid Grid Graphs
--CTR
D. Demaine , Sndor P. Fekete , Shmuel Gal, Online searching with turn cost, Theoretical Computer Science, v.361 n.2, p.342-355, 1 September 2006
Sndor P. Fekete , Marco E. Lbbecke , Henk Meijer, Minimizing the stabbing number of matchings, trees, and triangulations, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana | lawn mowing;approximation algorithms;manufacturing;traveling salesman problem TSP;NP-completeness;NC machining;turn costs;milling;covering;m-guillotine subdivisions;polynomial-time approximation scheme PTAS |
365707 | Resolving Motion Correspondence for Densely Moving Points. | AbstractThis paper studies the motion correspondence problem for which a diversity of qualitative and statistical solutions exist. We concentrate on qualitative modeling, especially in situations where assignment conflicts arise either because multiple features compete for one detected point or because multiple detected points fit a single feature point. We leave out the possibility of point track initiation and termination because that principally conflicts with allowing for temporary point occlusion. We introduce individual, combined, and global motion models and fit existing qualitative solutions in this framework. Additionally, we present a new efficient tracking algorithm that satisfies thesepossibly constrainedmodels in a greedy matching sense, including an effective way to handle detection errors and occlusion. The performance evaluation shows that the proposed algorithm outperforms existing greedy matching algorithms. Finally, we describe an extension to the tracker that enables automatic initialization of the point tracks. Several experiments show that the extended algorithm is efficient, hardly sensitive its few parameters, and qualitatively better than other algorithms, including the presumed optimal statistical multiple hypothesis tracker. | Introduction
Motion correspondence has a number of applications in computer vision, ranging from motion analysis,
object tracking and surveillance to optical flow and structure from motion [11], [24], [25], [26]. Motion
The authors are with the Department of Mediamatics, Faculty of Information Technology and Systems, Delft University of Technology, P.O. Box 5031, 2600 GA, Delft, The Netherlands. E-mail: {C.J.Veenman, M.J.T.Reinders, E.Backer}@its.tudelft.nl
correspondence must be solved when features are to be tracked that appear identical or that are retrieved
with a simple feature detection scheme which loses essential information about its appearance. Hence, the
motion correspondence problem deals with finding corresponding points from one frame to the next in the
absence of significant appearance identification (see Fig. 1a). The goal is to determine a path or track of
the moving feature points from entry to exit from the scene, or from the start to the end of the sequence.
During presence in the scene, a point may be temporarily occluded by some object. Additionally, a point
may be missed and other points may be falsely detected because of a failing detection scheme, as in Fig. 1b
and Fig. 1c 1 .
Figure 1: Three moving points are measured at three time instances. The lines represent the point correspondences in time. In (a) all points are measured at every time instance. In (b) there is an extra or false measurement at t_{k+1}, and in (c) there is a missing measurement at t_{k+1}.
A candidate solution to the correspondence problem is a set of tracks that describes the motion of
each point from scene entry to exit. We adopt a uniqueness constraint, stating that one detected point
uniquely matches one feature point. When 2-D projections from a 3-D scene are analyzed, this is not trivial,
because one feature point may obscure another. If we further assume that all M points are detected in all
n frames, the number of possible track sets is (M!)^{n-1}. Among these solutions, there is a unique track set
that describes the true motion of the M points. In order to identify the true motion track set, we need prior
knowledge about the point motion, because otherwise all track sets are equally plausible. This knowledge
1 In the remainder of this paper we display the measurements from different time instances in one box and use t k labels to
indicate the time the point was detected.
can range from general physical properties like inertia and rigidity to explicit knowledge about the observed
objects, like for instance the possible movements of a robot arm in the case that points on a robot arm are
to be tracked. Clearly, generic motion correspondence algorithms cannot incorporate scene information.
Moreover, they do not differentiate between the points in the scene, i.e. all points are considered to have
similar motion characteristics.
When many similar points are moving through a scene, ambiguities may arise, because a detected point
may well fit correctly to the motion model of multiple features points. Additional ambiguities are caused by
multiple detected points that fit correctly to the model of a single feature. These correspondence ambiguities
can be resolved if combined motion characteristics are modeled, like for instance least average deviation
from all individual motion models. Besides resolving these ambiguities we also have to incorporate track
continuation in order to cope with point occlusion and missing detections. Other events that we may need to
model are track initiation and track termination, so that features can enter and leave the scene respectively.
The available motion knowledge is usually accumulated in an appropriate model. Then, a specific strategy
is needed to find the optimal solution among the huge amount of candidate solutions defined by the
model. When the nearest neighbor motion criterion is used, (see also Section 3) and there are neither point
occlusions nor detection errors, track set optimality only depends on the point distances between any two
consecutive frames. It is then legitimate to restrict the scope of the correspondence decision to one frame
ahead, which we call a greedy matching solution to the correspondence problem. In other cases in which
velocity state information is involved, correspondence decisions for one frame influence the optimal correspondence
for the next frames and the problem becomes increasingly more complex. In such cases only a
global matching over all frames can give the optimal result. In this paper, we consider the more difficult
cases, i.e. dense and fast moving points, which makes the use of velocity state information essential. Because
there are no efficient algorithms to find the optimal track set by global matching, only approximation
techniques apply. Several statistical [2] and qualitative approximation techniques have been developed both
in the field of target tracking and computer vision.
Statistical Methods
The two best known statistical approaches are the Joint Probabilistic Data-Association Filter (JPDAF) [9]
and the Multiple Hypothesis Tracker (MHT) [20]. The JPDAF matches a fixed number of features in
a greedy way and is especially suitable for situations with clutter. It does not necessarily select point
measurements as exact feature point locations, but, given the measurements and a number of corresponding
probability density functions, it estimates these positions. The MHT attempts to match a variable number
of feature points globally, while allowing for missing and false detections. Quite a few attempts have been
made to restrain the consequent combinatorial explosion, such as [3], [4], [5], [6], [15], [16]. More recently,
the equivalent sliding window algorithms have been developed, which match points using a limited temporal
scope. Then, these solve a multidimensional assignment problem, which is again NP-hard, but real-time
approximations using Lagrangian relaxation techniques are available [7], [8], [17], [18], [23].
A number of reasons make the statistical approaches less suitable as solution to the motion correspondence
problem. First, the assumptions that the points move independently and, more strongly, that the
measurements are distributed normally around their predicted position may not hold. Second, since statistical
techniques model all events as probabilities these techniques typically have quite a number of param-
eters, such as the Kalman filter parameters, and a priori probabilities for false measurements, and missed
detections. In general, it is certainly not trivial to determine optimal settings for these parameters. In the
experiments section we show that the best known statistical method (MHT) is indeed quite sensitive to its
parameter setting. Moreover, the a priori knowledge used in the statistical models is not differentiated between
the different points. As a consequence, the initialization may be severely hampered if the initial point
speeds are widely divergent, because the state of the motion models only gradually adapts to the measure-
ments. Finally, the statistical methods that optimize over several frames, are despite their approximations
computationally demanding, since the complexity grows exponentially with the number of points.
Heuristic Methods
Alternatively, a number of attempts has been made to solve the motion correspondence problem with deterministic
algorithms [1], [12], [14], [19], [22]. These algorithms are usually conceptually simpler and
have less parameters. Instead of probability density functions, qualitative motion heuristics are used to
constrain possible tracks and to identify the optimal track set. By converting qualitative descriptions like
smoothness of motion and rigidity into quantitative measures, a distance from the optimal motion can be
expressed (where a zero distance makes a correspondence optimal). The most commonly known algorithm
is the conceptually simple greedy exchange algorithm [22], which iteratively optimizes a local smoothness
of motion criterion averaged over all points in a sequence of frames. The advantage of such deterministic
algorithms is that it is quite easy to incorporate additional constraints, like (adaptive) maximum speed, and
a maximum deviation from smooth motion, while this a priori knowledge can restrain the computational
cost and improve the qualitative performance, e.g. [1], [10].
The main contributions of this paper are 1) the presentation of a qualitative motion modeling framework for the motion correspondence problem. We introduce the notion of individual motion models, combined motion models, and a global motion model, and we differentiate between strategies to satisfy these models. Further, we propose 2) a new efficient algorithm that brings together the motion models, an optimal strategy, and an effective way to handle detection errors and occlusion. Finally, we present an extensive comparative performance evaluation of a number of different qualitative methods.
The outline of the paper is as follows. We start by giving a formulation of the motion correspondence
problem in the next section. Then in Section 4, we present our qualitative motion model and show how
existing deterministic motion correspondence algorithms can be fit into it. Additionally, we present a new
algorithm that effectively resolves motion correspondence using the presented model in Section 5. In Section
6, we compare the qualitative performance, the efficiency, and the parameter sensitivity of the described
algorithms. Further, we show how the proposed algorithm can be extended with self-initialization and evaluate
it with synthetic data experiments in Section 7. We broaden this evaluation in Section 8, with real-data
experiments. We finish the paper with a discussion on possible extensions and some conclusions.
Problem statement
In this section we describe the motion correspondence problem as treated in this paper. In motion corre-
spondence, the goal is tracking points that are moving in a 2-D space that is essentially a projection of a
3-D world. The positions of the points are measured at regular times, resulting in a number of point locations
for a sequence of frames. For the moment, we assume that we have initial motion information of all
points, which is given by point correspondences between the first two frames. From Section 7 onwards this
restriction is lifted. Since the measured points are projections, points may become occluded and thus miss-
ing. Moreover, the point detection may be imperfect, resulting in missing and false point measurements.
Because long occlusion on the one hand and scene entrance and exit on the other hand are conflicting re-
quirements, we leave out the possibility of track initiation and track termination, so the number of features
to be tracked is constant. Applications using this problem definition range from object tracking in general,
like animal tracking to perform behavior analysis, particle tracking, and cloud system tracking, to feature
tracking for motion analysis. In the remainder of this paper, we abbreviate the moving points to 'points'
and their measured 2-D projections to 'measurements'.
More formally: There are M points, p_i, moving around in a 3-D world. Given is a sequence of n time instances for which at each time instance t_k there is a set X^k of m_k measurements x^k_j, with 1 <= j <= m_k. The measurements x^k_j are vectors representing 2-D coordinates in a 2-D space with dimensions S_w (width) and S_h (height). The number of measurements, m_k, at t_k can be either smaller (occlusion) or larger (false measurements) than M. At t_1, the M points are identified among the m_1 measurements. Moreover, the corresponding M measurements at t_2 are given. The task is to return a set of M tracks that represent the (projected) motion of the M points through the 2-D space from t_1 to t_n, using the movements between t_1 and t_2 as initial motion characteristics. A track T_i, with 1 <= i <= M, is an ordered n-tuple of corresponding measurements <x^1_{j_1}, x^2_{j_2}, ..., x^n_{j_n}>. It is assumed that points do not enter or leave the scene and that the movement of each point can be modeled independently. A track that has been formed up to t_k is called a track head and is denoted as T^k_i.
Qualitative Motion Modeling
The assumption underlying the qualitative model that we advocate is that points move smoothly from time
instance to time instance. That is, besides that individual points move smoothly, also the total set of points
moves smoothly between time instances as well as over the whole sequence. Hereto, we define a qualitative
model in which these qualitative statements are explicitly represented by a composition of motion models,
that we have called the global motion model, the combined motion model and the individual motion model.
The individual motion model represents the motion of individual points. To embed the motion smoothness
constraints, we can make use of well-known general physical properties like rigidity and inertia. Without
loss of generality, we only consider first-order motions, and thus leave out acceleration-state information.
Consequently, the motion vector of a feature point can be estimated from only two consecutive measure-
ments. On the basis of the motion vector and the adopted individual motion model, the position of the point
at the next time instance can be predicted. The measurement that is closest to this prediction can then be
selected as corresponding measurement. In reality, however, the points do not move exactly according to
their predictions, because of shortcomings of the adopted individual motion model. These are among others
caused by the limited order of the motion model, the fact that measurements are 2-D projections of 3-D
movements, and by noise in the system.
To express the misfit between a measurement and the predicted position, the candidate motion vector
between the candidate measurement and the last measurement in the track is calculated. Using the inertia
argument the cost representing the misfit is expressed in terms of the candidate motion and the previous
true motion vector. These cost can be used to select the appropriate candidate measurement to make a
correspondence. When points are moving far apart from each other or when they move reasonably according
to their models, their measurements can easily be assigned to the corresponding feature point. With densely
moving point sets, however, assignment conflicts can easily occur. That is, one measurement fits correctly
multiple individual motion models or multiple measurements correctly fit one motion model. To resolve
these ambiguities, the motion smoothness constraint is also imposed on the complete set of points. To this
end, we introduce the combined motion model, that expresses the deviation from this motion constraint. As
an example, we could enforce that the average deviation from the individual motion models is minimal.
Even with the use of the combined motion model it is not always possible to decide on point correspon-
dences. For that reason the motion smoothness constraint is additionally extended over the whole sequence
in the global motion model.
In the remainder of this section, we present some individual motion models, combined motion models
and a global motion model and we give quantitative expressions for each of them. To simplify expressing
the criteria that lead to the point tracks T_i, we introduce the assignment matrix A^k = [a^k_{ij}], where the entries a^k_{ij} have the following meaning: a^k_{ij} = 1 if and only if measurement x^{k+1}_j is assigned to track head T^k_i, and a^k_{ij} = 0 otherwise. Because some measurements are false and others are missing, there can be some measurements that are not assigned to a track head (all zeros in a column in A^k), and some track heads that have no measurement assigned to them (all zeros in a row in A^k). Or, more formally:

Σ_i a^k_{ij} <= 1 for all j,   and   Σ_j a^k_{ij} <= 1 for all i.    (1)

We use two alternative notations for a correspondence between a measurement and a track head. First, we define φ^k_i as the index of the measurement that is assigned to track head T^k_i:

φ^k_i = j   if and only if   a^k_{ij} = 1.    (2)

Second, we use ordered pairs (i, j) to indicate that measurement x^{k+1}_j has been assigned to track head T^k_i. Z^k then contains all assignment pairs from t_k to t_{k+1} according to:

Z^k = { (i, j) | a^k_{ij} = 1 }.    (3)

Tracks T_i can now be derived from A, which is the concatenation of the assignment matrices A^k. We introduce a deviation matrix D^k = [c^k_{ij}] to denote all individual assignment costs c^k_{ij} between track heads T^k_i and measurements x^{k+1}_j. The assignment matrix identifies all correspondences from frame to frame, while the deviation matrix quantifies the deviation from the individual motion track per correspondence. The matrices A^k and D^k both have M rows and m_{k+1} columns. The rows represent the M track heads, T^k_i, and the columns represent the measurements, x^{k+1}_j, that have been detected at t_{k+1}.
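To make this bookkeeping concrete, the following Python sketch (the function names are ours, not from the paper) builds a small assignment matrix, checks the uniqueness constraints of Eq. 1, and extracts the pair set Z^k of Eq. 3.

```python
import numpy as np

def check_uniqueness(A):
    """Verify Eq. 1: every row (track head) and every column (measurement)
    of the 0/1 assignment matrix A^k contains at most one 1."""
    A = np.asarray(A)
    return bool(np.all(A.sum(axis=0) <= 1) and np.all(A.sum(axis=1) <= 1))

def assignment_pairs(A):
    """Extract Z^k = {(i, j) | a^k_ij = 1} from the assignment matrix (Eq. 3)."""
    rows, cols = np.nonzero(np.asarray(A))
    return list(zip(rows.tolist(), cols.tolist()))

# Example: 3 track heads, 4 detected measurements at t_{k+1}.
A_k = np.array([[0, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])   # the third track head misses its measurement
assert check_uniqueness(A_k)
print(assignment_pairs(A_k))     # [(0, 1), (1, 3)]
```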
Individual Motion Models
We now formulate three individual motion models, together with an expression to compute a deviation from the optimal track. The first model uses only one previous measurement to predict the new position. We have indicated the dependence on only one previous measurement by the order of the individual model: O_im = 1. The other two individual models depend on two measurements and consequently have order O_im = 2. The following motion criteria coefficients c^k_{ij} are all defined from track head T^k_i to a measurement x^{k+1}_j.
im1 The nearest neighbor model does not incorporate velocity information. It only states that a point moves as little as possible from t_k to t_{k+1}: the criterion (4) is the Euclidean distance || x^{k+1}_j − x^k_i ||, normalized with respect to the scene dimensions S_w and S_h, where x^k_i denotes the measurement assigned to T^k_i at t_k.

im2 The smooth motion model as introduced by Sethi and Jain [22] assumes that the velocity magnitude and direction change gradually. With d1 = x^k_i − x^{k−1}_i denoting the last motion vector of track head T^k_i and d2 = x^{k+1}_j − x^k_i the candidate motion vector, the smooth motion is formulated quantitatively in the following criterion:

c^k_{ij} = α_1 ( 1 − (d1 · d2) / (||d1|| ||d2||) ) + α_2 ( 1 − 2 √(||d1|| ||d2||) / (||d1|| + ||d2||) ),    (5)

where α_1 and α_2 weight the direction change and the speed change, respectively.

im3 The proximal uniformity model by Rangarajan and Shah [19] assumes little motion in addition to constant speed. The deviation is quantified in a criterion (6) that combines the change of the motion vector between consecutive frames with the size of the displacement itself, both normalized over all candidate correspondences.
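As an illustration, a Python sketch of the first two criteria follows; the default weights and the normalization constant are placeholders of ours, and the proximal uniformity criterion (im3) is omitted because its exact form is not recoverable from this text.

```python
import numpy as np

def nearest_neighbor_cost(x_k, x_next, scene_diag=1.0):
    """im1: displacement between the track's last measurement and the
    candidate measurement, optionally normalized by the scene diagonal."""
    return np.linalg.norm(np.asarray(x_next) - np.asarray(x_k)) / scene_diag

def smooth_motion_cost(x_prev, x_k, x_next, w1=0.1, w2=0.9):
    """im2 (Sethi and Jain style): penalize changes in direction (first term)
    and in speed (second term) of the motion vector."""
    d1 = np.asarray(x_k) - np.asarray(x_prev)
    d2 = np.asarray(x_next) - np.asarray(x_k)
    n1, n2 = np.linalg.norm(d1), np.linalg.norm(d2)
    if n1 == 0 or n2 == 0:
        return w1 + w2          # degenerate case: no direction defined, maximal penalty
    direction_term = 1.0 - np.dot(d1, d2) / (n1 * n2)
    speed_term = 1.0 - 2.0 * np.sqrt(n1 * n2) / (n1 + n2)
    return w1 * direction_term + w2 * speed_term
```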
Combined Motion Models
Combined motion models serve to resolve correspondence conflicts between two successive frames in the case of densely moving point sets, making the individual model errors dependent on each other. Next, we give two combined model criteria C^k as a function of A^k and D^k, defined at t_k over all established track heads.

cm1 The average deviation model. This is a typical combined model which usually is realistic. It accounts for the average deviation from the optimal track according to the individual model [21], [22], [26]. Quantitatively, we use the generalized mean, which has a z parameter to differentiate between emphasis on large and small deviations from the optimal individual track (see Fig. 2):

C^k(A^k, D^k) = ( (1/M) Σ_i Σ_j a^k_{ij} (c^k_{ij})^z )^{1/z}.    (7)
Figure 2: Three moving points that are matched with im1 and cm1 using either a smaller (a) or a larger (b) value of z. As a consequence, larger deviations are penalized more in (b).
cm2 The average deviation conditioned by competition and alternatives model is derived from [1], [19]. In this combined model measurements are assigned to that track head that gives low deviation from the optimal track, while both the other tracks are less attractive for this measurement and the other measurements are less attractive for this track. Quantitatively, C^k(A^k, D^k) averages over the assigned pairs the individual cost c^k_{ij}, corrected with weights w_1 and w_2 for how attractive the assignment is compared to the average cost of the alternatives and of the competitors (Eq. 8), where:

R_a(i) = (1/m_{k+1}) Σ_q c^k_{iq},   R_c(j) = (1/M) Σ_p c^k_{pj}.    (9)

R_a(i) represents the average cost of alternatives for T^k_i and R_c(j) the average cost for competitors of x^{k+1}_j.
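A sketch of how the two combined criteria can be evaluated for a given assignment; since the exact algebraic form of Eq. 8 is only partially recoverable here, the cm2 variant below (correcting each cost with the weighted averages of alternatives and competitors) is our reading rather than a verbatim transcription.

```python
import numpy as np

def cm1(A, D, z=1.0):
    """Average deviation model: generalized mean (power z) of the selected
    individual costs, averaged over the M track heads."""
    A, D = np.asarray(A, dtype=float), np.asarray(D, dtype=float)
    M = A.shape[0]
    return (np.sum(A * D**z) / M) ** (1.0 / z)

def cm2(A, D, w1=0.5, w2=0.5):
    """Average deviation conditioned by competition and alternatives:
    an assignment is more attractive when its own cost is low while the
    average cost of alternatives (row mean) and competitors (column mean)
    is high. With w1 = w2 = 0 this reduces to cm1 with z = 1."""
    A, D = np.asarray(A, dtype=float), np.asarray(D, dtype=float)
    M = A.shape[0]
    R_a = D.mean(axis=1, keepdims=True)   # average cost of alternatives per track head
    R_c = D.mean(axis=0, keepdims=True)   # average cost of competitors per measurement
    adjusted = D + w1 * (D - R_a) + w2 * (D - R_c)
    return np.sum(A * adjusted) / M
```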
Global Motion Model
To find the optimal track set (over all frames) according to a certain combined model, we need to compute the accumulated global motion deviation S(D) as in the following expression:

S(D) = min_{A ∈ U} Σ_{k=O_im}^{n−1} C^k(A^k, D^k),    (10)

where U is the set of matrices A that satisfy Eq. 1. That is, the overall minimum of the accumulated combined criteria defines the optimal track set. Because finding this minimum is computationally expensive, a greedy matching is considered in this paper. This means that instead of finding correspondences over all frames, we establish optimal correspondences between two successive frames, given the state of the individual motion models and the combined model up to that moment. After these sub-optimal correspondences have been established, the states of the individual models are adjusted and the next frame is considered. In other words, Eq. 10 is approximated by minimizing each C^k separately, i.e.:

S̃(D) = Σ_{k=O_im}^{n−1} C^k_min(D^k),   where   C^k_min(D^k) = min_{A^k} C^k(A^k, D^k).    (11)

This approximation approach reduces the complexity of the problem considerably, although at the cost of greedy, possibly less plausible, correspondence decisions (see for example Fig. 3). In the remainder, we leave out the D and D^k parameters for S, S̃, and C^k_min, respectively.
Figure 3: Two moving points at four time instances. When the smooth motion model (im2) and the average deviation model (cm1) are assumed, (b) gives a two times lower deviation from the optimal path than (a). However, (a) is decided for when greedy matching is used.
Model Constraints
The motion models we have described so far allow for any point speed and for any deviation from smooth-
ness. The models only state that those assignments are preferred that have little deviation from the individual
model. There are, however, situations in which there is more knowledge available about the point motions,
like the minimum speed (d min) and the maximum speed (d max) [1], [12], [14], [21], the maximum violation of smoothness, or spatially or temporally adaptive speed and smoothness violation constraints [10]. When imposed on the individual motion models, these constraints enable the recognition of
impossible assignments, which can be both qualitatively and computationally beneficial. These constraints
can for instance be implemented by setting the individual criterion to a very high value, when some constraint
is violated. The strategy (see next sub-section) that satisfies the models can exploit these constraints
more adequately by leaving out of consideration those correspondences that violate the motion constraints.
To find the optimal track set, we compute the global motion deviation. However, we are not interested in the
actual value of S, but in the assignment matrix A that results in the minimal global motion deviation. In the
next section, we first show how existing algorithms approximate the minimization of C k and consequently
deliver a sub-optimal solution A k . In Section 5, we present an optimal as well as efficient algorithm to find
that A k that minimizes C k .
Algorithms
Having modeled the feature point motion and having described quantitative expressions that can be used to
identify the optimal track set, we now review a number of existing algorithms and fit them in our motion
framework using our concept of individual and combined motion models. Further, we describe the strategy
they use to find the optimal correspondences. Because all algorithms perform greedy matching, their task
is to find C k
min .
S&S algorithm (im2/cm1/z=1)
The first algorithm we looked at was originally developed by Sethi and Jain [22]. The original algorithm
assumes a fixed number of feature points to be tracked and does not allow for occlusion and detection errors.
Here, we describe the adjusted algorithm by Salari and Sethi [21] that partially fixes these shortcomings.
The algorithm adopts a smooth motion model for individual motion (im2). The combined motion model is
an average deviation model (cm1/z=1). To find an optimum of the global motion, the algorithm iteratively
exchanges measurements between tracks to minimize the criterion on average.
Initially, the tracks are led through the nearest neighboring measurements in the sequence. In this stage
conflicts are 'resolved' on a first come first served basis. That is, at t k+1 measurements are assigned to the
closest track parts T k
i that have been formed up to t k to which no point was assigned yet. Consequently, the
initialization procedure is a greedy im1/cm1 approximation.
Then, each iteration step modifies at most two assignment pairs somewhere in the sequence, by exchanging
the second entry of the pair. The algorithm considers all possible exchanges within the d max range
of two track heads in the whole sequence and the exchange that gives the highest gain by decreasing the
average criterion deviation is executed. The iteration phase stops when gain can no longer be obtained. The
exchange gain between assignment pairs (i, p) and (j, q) (see Eq. 3) is defined as the decrease of the summed criterion that results from swapping the two measurements:

g = ( c^k_{ip} + c^k_{jq} ) − ( c^k_{iq} + c^k_{jp} ).    (12)
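For concreteness, a small sketch of one exchange step; the gain computation follows the reconstruction of Eq. 12 given above rather than the paper's exact formula.

```python
def exchange_gain(cost, i, p, j, q):
    """Gain of swapping the measurements p and q between track heads i and j:
    positive when the swap lowers the summed criterion."""
    return (cost[i][p] + cost[j][q]) - (cost[i][q] + cost[j][p])

def best_exchange(cost, assignment):
    """Scan all pairs of current assignments (i, p), (j, q) and return the
    swap with the highest positive gain, or None if no swap improves."""
    pairs = list(assignment.items())          # {track head: measurement}
    best, best_gain = None, 0.0
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            (i, p), (j, q) = pairs[a], pairs[b]
            g = exchange_gain(cost, i, p, j, q)
            if g > best_gain:
                best, best_gain = ((i, p), (j, q)), g
    return best
```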
To achieve even better tracking results, the algorithm first optimizes correspondences over all frames
in forward direction and then (after this iteration phase stops) it optimizes correspondences in backward
direction. Only when the optimization process did not change anything in either direction, the algorithm
stops. This bi-directional optimization process can indeed increase the tracking quality, but, unfortunately,
this process is not guaranteed to converge, especially with densely moving points [19].
In contrast with what we said before, this algorithm seems to optimize over the whole sequence. How-
ever, when we look carefully at the optimization process within one iteration phase, we see that this is only
partially true. As long as the tracks are wrong at the start, exchanges in the remainder of the track will
mostly be useless. This is due to the fact that the tracks were initialized using another criterion than the
one that is considered in the iteration phase. Consequently, the optimization is only effective at the initial
measurements of the tracks. This problem is most severe when the sequence is long and when the difference
between the initialization criterion (nearest neighbor) and the optimization criterion (smooth motion) is
large, i.e. with high speeds and high densities. We tested this statement by feeding to the S&S algorithm the
example shown in Fig.3. If we do not optimize in both directions, the S&S algorithm indeed makes greedy
correspondences as in Fig.3a, which supports the statement that S&S is a greedy matching algorithm.
The Salari and Sethi version of this, so-called greedy exchange algorithm, additionally proposes a way
to resolve track continuation, initiation and termination. They introduce a number of phantom points to the
set of measurements in each frame. These phantom points serve as replacements of missing measurements,
while satisfying local constraints. By imposing the maximum allowed local smoothness criterion and a
maximum speed, missed measurements are recognized and filled in with phantom points. Moreover, the
constraints also allow the detection of false measurements. Effectively, false measurements are replaced by
phantom points if the introduction of a phantom point results in a lower criterion value.
This approach generally works fine except that missing measurements (represented by a phantom point)
always have the maximum criterion and displacement. For instance, if point p i has not been measured
at t k , the algorithm can easily associate a measurement of p i at t k+1 to another point which is within the
criterion range # max . It is important to remark that the phantom points only enforce that the local movement
constraints are satisfied, but when a phantom point is put in a track, the track is in fact divided up into
two tracks. In other words, this maximum criterion approach solves the correspondence problem up to the
maximum criterion. Choosing a low maximum criterion leads to many undecided track parts and a higher
maximum criterion leads to possibly wrong correspondences. This is where the track initiation/termination
and occlusion events become conflicting requirements as already mentioned in Section 2.
R&S algorithm (im3/cm2)
A different approach to the correspondence problem is chosen by Rangarajan and Shah [19]. They have a
different combined motion model and do not use an iterative optimization procedure. The R&S algorithm
assumes a fixed number of feature points and it allows for temporary occlusion or missing point detections,
but not for false detections. It uses the proximal uniformity model (im3) as individual motion model and
cm2 as the combined motion model. This algorithm does not constrain the individual point motion, i.e. it
does not have a d max or # max parameter.
To find the minimum of the combined model (Eq.8), the authors use a greedy non-iterative algorithm.
In each step of the algorithm, that particular point x k+1
j is assigned to track head T k
i that has a low deviation
from the optimal motion (low individual deviation) while on average all alternative track heads have a larger
deviation with respect to x k+1
j, and on average all other measurements have a worse criterion with respect to T^k_i.
We continue the description of the algorithm in terms that fit the proposed motion framework as established
in Section 3. The algorithm selects that assignment pair (i, j) that is most attractive over all minimal track head extensions: the individual cost c^k_{ij} should be low, while the average cost R'_a(i) of the remaining alternative measurements for T^k_i and the average cost R'_c(j) of the remaining competing track heads for x^{k+1}_j should be high. R'_a(i) and R'_c(j) are derived from Eq. 9 (Eq. 13), with the difference that the averages are restricted to the index sets X_t and X_m: X_t is the set of track head indices that have not yet been assigned a measurement, and X_m is the set of measurement indices that have not yet been assigned to a track head. An optimal assignment pair is repeatedly selected in this way (Eq. 14). After an assignment has been found, the track head and measurement are removed from the respective index sets X_t and X_m. The algorithm accumulates the assignment costs and eventually stops when X_t is empty. The criterion computation can be summarized in a recurrence relation over these selections (Eq. 15-16), and the matching assignment pairs are collected similarly (Eq. 17-18). Consequently, this strategy results in an approximation of C^k_min and of the set of assignment pairs as defined in Eq. 3.
Additionally, the algorithm differentiates between two cases: 1) all measurements are present and 2)
some measurements are missing, by occlusion or otherwise. In the first case, the algorithm works as
described above. Otherwise, because there is a lack of measurements at t k+1 , the problem is not which
measurement should be assigned to which track head, but which track head should be assigned to which
measurement. Then, the assignment strategy is similar to the above. When all track head assignments T k
to measurements x k
are found, it is clear for which tracks a measurement is missing. The R&S algorithm
directly fills in these points with extrapolated points. The disadvantage of this track continuation scheme
becomes apparent when the point occlusion lasts for a number of frames. Direct extrapolation results in a
straight extension of the last recognized motion vector, which on the long term can deviate much from the
true motion track so that recovering becomes increasingly difficult (see the experiments in Section 6.2.4).
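A schematic sketch of the R&S selection strategy as we read it; the exact expression that is maximized (Eq. 14) is not recoverable from this text, so the score below simply prefers pairs with a low own cost and expensive remaining alternatives and competitors.

```python
import numpy as np

def rs_greedy_assign(cost):
    """Greedy R&S-style selection: repeatedly pick the (track, measurement)
    pair whose own cost is low while the remaining alternatives and
    competitors are, on average, expensive. The scoring is our reading of
    the verbal description, not the paper's exact Eq. 14."""
    cost = np.asarray(cost, dtype=float)
    X_t = set(range(cost.shape[0]))       # unassigned track heads
    X_m = set(range(cost.shape[1]))       # unassigned measurements
    pairs = []
    while X_t and X_m:
        best, best_score = None, -np.inf
        for i in X_t:
            for j in X_m:
                r_a = np.mean([cost[i, q] for q in X_m])   # alternatives for i
                r_c = np.mean([cost[p, j] for p in X_t])   # competitors for j
                score = r_a + r_c - 2.0 * cost[i, j]
                if score > best_score:
                    best, best_score = (i, j), score
        pairs.append(best)
        X_t.discard(best[0]); X_m.discard(best[1])
    return pairs
```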
C&V algorithm (im2/cm2)
The third and last scheme we describe, has been developed by Chetverikov and Verestoy [1]. Their method
allows for track initiation, track termination, and occlusion only during two time instances. C&V assume
the smooth motion model (im2) and cm2 as combined motion model. The algorithm extends track heads T^k_i by first collecting all candidate measurements x^{k+1}_j in the circle with radius d max around x^k_i whose criterion does not exceed # max. The candidate measurements are considered in optimal criterion order with respect
to the track head. Then, for each measurement all competing track heads are collected. The candidate
measurement will be rejected if it is the best alternative for any of the competing track heads. When there
are no candidates left, the track head will not be connected. Remaining unconnected track parts, caused by
occlusion or otherwise, are handled in a post-processing step, which we leave out of the discussion.
This scheme does not maximize the cost of the alternatives (i.e., the terms weighted by w_1 and w_2 in Eq. 8), and track heads are only considered as competitors if they are within the d max as well as the # max range. Moreover, their cost is not averaged as in Eq. 8: any competitor that fits a measurement best prevents that the measurement is assigned to T^k_i.
The basics of this algorithm can be summarized as follows (we describe the algorithm in our own terms, assuming a fixed number of points and a verification depth of 2; see [1] for details). Let X_a(i) be the set of alternative track head extensions for track head T^k_i: the unassigned measurements x^{k+1}_q, q ∈ X_m, that lie within d max of T^k_i and whose criterion does not exceed # max. Each measurement x^{k+1}_j in turn has a set of competing track heads X_c(j): the unassigned track heads for which x^{k+1}_j is an alternative. The algorithm selects for a track head T^k_i from X_t the measurement from X_a(i) with the lowest criterion, provided that none of the competing track heads has that measurement as its own best alternative. Substituted into Eq. 15-18, this leads to the minimal combined criterion approximation and the corresponding set of assignment pairs Z^k.
The advantage of this scheme is that the d max parameter is exploited very efficiently. With low point
densities, there is usually just one candidate point and there are no competing track heads for that point.
However, higher point densities or large d max values can reveal the inadequacy of the strategy to find the
minimal combined motion model deviation. Because the deviation is not averaged over competitors and
alternatives, greedy assignment decisions are the result.
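A schematic sketch of the C&V candidate/competitor test described above; the criterion function and the thresholds are placeholders.

```python
def cv_extend_track(i, cost, X_t, X_m, d_max_ok, c_max):
    """Try to extend track head i in C&V style: consider its candidate
    measurements in order of increasing criterion and accept the first one
    that no competing track head prefers as its own best alternative.
    `d_max_ok[i][j]` is True when measurement j is within d_max of track i."""
    candidates = sorted((j for j in X_m if d_max_ok[i][j] and cost[i][j] <= c_max),
                        key=lambda j: cost[i][j])
    for j in candidates:
        competitors = [p for p in X_t if p != i and d_max_ok[p][j] and cost[p][j] <= c_max]
        # reject j if it is the best alternative of any competitor
        if any(min((q for q in X_m if d_max_ok[p][q] and cost[p][q] <= c_max),
                   key=lambda q: cost[p][q], default=None) == j for p in competitors):
            continue
        return j          # accepted extension
    return None           # track head stays unconnected at this frame
```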
5 Optimal algorithm to minimize C k
In the previous section we saw that known algorithms adopt a sub-optimal search strategy to minimize C k .
In this section, we propose an algorithm that finds the minimum of the combined motion models efficiently.
To this end, we use the Hungarian algorithm, which efficiently finds the solution of the classical assignment
problem [13]. Danchick and Newman [6] first used it in a similar context; to find hypotheses for the Multiple
Hypothesis Tracker. In general, the algorithm minimizes the following expression:

Σ_{i=1}^{m} Σ_{j=1}^{m} a_{ij} w_{ij},

subject to:

Σ_{i=1}^{m} a_{ij} = 1 for all j,   Σ_{j=1}^{m} a_{ij} = 1 for all i,   with a_{ij} ∈ {0, 1}.

It typically finds the minimal cost assignment, which can be represented in a weighted bipartite graph consisting of two sets of vertices, X and Y. The m vertices from X are connected to all m vertices of Y with weighted edges w_{ij}. The algorithm then assigns every vertex from X to a separate vertex in Y in such a way that the overall cost is minimized.
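In practice, the classical assignment problem can be solved with an off-the-shelf routine; for example, SciPy's linear_sum_assignment (a Hungarian-style solver) returns the minimal-cost one-to-one matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Example square cost matrix w_ij for m = 3 track heads and 3 measurements.
W = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(W)            # minimal-cost one-to-one assignment
print(list(zip(rows.tolist(), cols.tolist())))   # [(0, 1), (1, 0), (2, 2)]
print(W[rows, cols].sum())                       # total assignment cost: 5.0
```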
In order to be able to apply the Hungarian algorithm and to handle detection errors and occlusion, we
prepare the measurement data such that the problem becomes squared. We propose to handle the false
detection problem by introducing false tracks as proposed earlier in [26]. False tracks do not have to adhere
to any motion criterion, so that measurements that do not fit the motion model of any true track will be
moved to these false tracks. By associating a maximum cost deviation (# max ) to assignments to false tracks,
we even recognize false measurements if other measurements are missing.
We propose to implement track continuation by introducing the concept of slave measurements (Fig. 4a),
similar to the interpolation scheme in [26]. Slave measurements have two states: free and bound. A free
slave is not willing to be assigned to a track. Consequently, it has a maximum deviation cost from the
optimal motion track. Free slave measurements serve similar goals as the phantom points in [21]. A
slave measurement is bound when it has been assigned to a track, despite its high deviation. Bound slaves
imitate the movements of their neighboring measurements. Their position is calculated by interpolating
the positions of preceding and succeeding measurements in the track established so far (Fig. 4b). The
interpolated positions enable more accurate calculation of the motion criterion. In this way, we retain
as much motion information as possible and we are therefore able to make plausible correspondences.
Additionally, we assign a high cost (> # max) to correspondences for which d max is exceeded. This ensures that in such cases a slave measurement is preferred over a measurement that does not fit the model constraints.
Figure 4: (a) shows a true measurement, a false measurement and a free slave measurement at t_{k+1}; the slave measurement is on the border of the dotted circle. (b) shows possible bound slave measurement positions related to possible track head extensions.
Greedy Optimal Assignment (GOA) Tracker: Formal description
To properly handle missing and false measurements, we extend the assignment matrices A k . That is, we
want to be able to assign false measurements to false tracks and slave measurements to true track heads
that have no measurement at t k+1 . Since all measurements can be false and all track heads may miss their
measurement, we add m k+1 rows to allow for m k+1 false tracks, and we add M columns to allow for M
slave measurements, resulting in the definition of the square extended matrix A^k_e (resembling the dummy rows and columns in the validation matrix as proposed in [9]). The size of the individual criterion matrix is adjusted similarly. The entries in the m_{k+1} extra rows and in the M extra columns all equal the maximum cost, resulting in the extended cost matrix D^k_e.
Having defined these square matrices the linear assignment problem can be solved for one frame after
the other, assuming that the correspondences between the first two frames are given (in case O i m > 1) to be
able to compute the initial velocity vector.
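A minimal sketch of how the square extended cost matrix can be built; the value used for the padded entries stands in for the maximum cost and is a placeholder.

```python
import numpy as np

def extended_cost_matrix(D, c_max):
    """Pad the M x m_{k+1} cost matrix D^k to a square (M + m_{k+1}) matrix:
    extra rows act as false tracks, extra columns as slave measurements.
    All padded entries get the maximum cost c_max."""
    M, m_next = D.shape
    size = M + m_next
    D_ext = np.full((size, size), c_max, dtype=float)
    D_ext[:M, :m_next] = D            # true tracks vs. true measurements
    return D_ext
```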
In order to calculate the motion criterion, the individual motion models with O_im = 2 need the motion vector of the track head, and hence the measurements at t_{k-1} and t_k. If either of these is a (bound) slave measurement, we estimate the corresponding vectors by scanning back in T^k_i to collect the two true measurements in the nearest past, say at t_p and t_q with 1 <= p < q <= k; the vector estimates then follow from the displacement between these two true measurements, scaled by the number of frames that separate them (Eq. 22-23). Having obtained these velocity vector estimates, we can now compute the individual motion criteria c^k_{ij}.
We transform the criterion matrix to a bipartite graph and prune all edges with weights that exceed # max. Then, to satisfy the combined motion model, we adjust the edge weights w_{ij} as defined below.

cm1 average deviation model: w_{ij} = (c^k_{ij})^z (Eq. 24).

cm2 average deviation conditioned by competition and alternatives, using Eq. 13: the weight additionally accounts for the average cost of the alternatives and of the competitors, weighted by w_1 and w_2 (Eq. 25).

As mentioned before, the actual value of the minimized C^k is not important. Therefore, in cm1 the 1/z power can be ignored, because the 1/z power function is monotonically increasing.
Algorithm
1. Starting with k = O_im, fill the entries c^k_{ij} in the extended cost matrix D^k_e as follows:
   (a) true tracks to true measurements, i.e. 1 <= i <= M and 1 <= j <= m_{k+1}: if the maximum speed (d max) constraint is violated, then c^k_{ij} is set to a value exceeding # max; otherwise c^k_{ij} is computed according to the individual motion model.
   (b) all other entries: c^k_{ij} equals the maximum cost.
2. Construct a bipartite graph based on the criterion matrix D^k_e.
3. Prune all edges that have weights exceeding # max.
4. Adjust the edge weights according to the combined motion model in Eq. 24 and 25.
5. Apply the Hungarian algorithm to this graph, which results in the minimal cost assignment. The resulting edges (assignment pairs) correspond to an output A^k_e, from which the first M rows and m_{k+1} columns represent the assignment matrix A^k.
6. Increase k; if k < n, go to 1; otherwise done.
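Putting the pieces together, a compact sketch of one GOA step from t_k to t_{k+1}; the criterion function, the thresholds, and the slave handling (here reduced to a post-filter) are simplified placeholders rather than the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def goa_step(tracks, measurements, criterion, c_max, d_max):
    """One GOA iteration: build the extended cost matrix, solve the
    assignment with a Hungarian-style solver, and return (i, j) pairs for
    true tracks that received a true measurement.
    `criterion(track, x)` is a user-supplied individual motion criterion."""
    M, m_next = len(tracks), len(measurements)
    size = M + m_next
    D_ext = np.full((size, size), c_max, dtype=float)
    for i, track in enumerate(tracks):
        for j, x in enumerate(measurements):
            speed = np.linalg.norm(np.asarray(x) - np.asarray(track[-1]))
            if speed > d_max:
                D_ext[i, j] = 2.0 * c_max     # exceeds the maximum speed: pruned later
            else:
                D_ext[i, j] = criterion(track, x)
    rows, cols = linear_sum_assignment(D_ext)
    pairs = [(i, j) for i, j in zip(rows, cols)
             if i < M and j < m_next and D_ext[i, j] <= c_max]
    return pairs          # track heads not in `pairs` get a bound slave instead
```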
6 Performance evaluation
To evaluate the performance of the different algorithms, we compared them qualitatively and quantitatively.
In Section 6.1, we start by looking at their correspondence quality by using a specially constructed example
that (also) tests the algorithm's track continuation capabilities. Then, in Section 6.2, we explore the sensitivity
of the algorithms to some problem parameters like point density and the total number of points, and
algorithm parameters like d max . In all experiments in this section, the correspondences between the first two
frames are known and passed to all algorithms (even to those that are capable of self-initialization, to avoid favoring one of the methods).
6.1 Constructed example
The carefully constructed example shows two crossing feature points with a missing measurement at t 4 for
the first and at t 5 for the second point (see Fig. 5a). The difficulty of this data set is that in two consecutive
frames a measurement is missing, but for different points. With all algorithms we used the smooth motion
model (im2). For algorithms that have a # max parameter, we varied its value from 0.05 to 1 (lower values
do not allow the initial motion of p 2 ). Further, we fixed the d max value to 20.
6.1.1 S&S results
The S&S algorithm either leads to wrong correspondences or to disconnected track parts. We used two
different settings of # max to show the shortcomings of S&S. First, with a high # max (0.1 <= # max <= 1), the algorithm makes wrong correspondences (Fig. 5b). When assigning measurements to track heads T^k_i, the
algorithm prefers track heads that have a true measurement at t k over track heads that have a phantom point
at t k . Of course the motion criterion for that true measurement assignment may not exceed the maximum
criterion. On the other hand, if # max is lower (e.g., 0.05), the algorithm separates four track parts, while
correspondences between the track parts have to be made afterwards (see Fig. 5c).
6.1.2 R&S results
The R&S algorithm, which has no parameters, chooses the right correspondence when one measurement
lacks at t 4 . Then, it estimates the missing measurement by extrapolation and continues with the next frame.
With point extrapolation for one frame only, the deviation is limited. In the next frame (t 5 ) the situation
is similar to the previous frame. The algorithm connects the single present measurement to the right track
head and extrapolates the missing measurement. The processing of the last 3 frames is straightforward (see
Fig. 5d).
6.1.3 C&V results
At t_4, C&V assigns the single measurement to the right track head (the track head of p_2). Then at t_5 only one track head remains to which the measurement can be assigned. If it wouldn't fit because the distance was too great, this measurement could start a new track. Since it is not too far away, the only point at t_5 is also assigned to the track of p_2. The two track parts that belong to p_1 are not connected in the post-processing step (Fig. 5e).
6.1.4 GOA tracker results
When the algorithm proposed in this paper is applied to this data set with the smooth motion and average
deviation model, all correspondences are made correctly. Moreover, the algorithm interpolates the missing
measurements better than R&S and, hence, forms the most plausible tracks (see Fig. 5f).
6.2 Performance with generated data
In this section we describe the tests we did to evaluate various aspects of the described algorithms. To this
end we used a data set generator that is able of creating data sets of uncorrelated random point tracks of
various densities and speeds. Among the described algorithms only the R&S algorithm does not exploit
the d max parameter to improve quality and efficiency. For the experiments, we added the d max parameter to
R&S (now called R&S*), similar to the GOA tracker. Then, we tuned all algorithms to find the optimal setting for each of them and used that setting in all experiments. For C&V, R&S*, and the GOA tracker the true maximum speed is optimal, and for S&S a very high d max value. In Section 6.2.6, we
consider the sensitivity of the algorithms for the d max parameter setting. We did not test the # max sensitivity,
because it constrains the motion similarly. Other experiments evaluate the performance for increasingly
difficult data sets, an increasing number of missing point detections, and the efficiency of the algorithms.
For the generation of the uncorrelated tracks, we used the data set generator called Point Set Motion
Generator (PSMG) according to [27] (see example in Fig. 6). Because this data generator model allows
feature points to enter and leave the 2-D scene, which we do not consider in this paper, we modified the
model to prevent this by replacing invalid tracks until all tracks are valid. The PSMG has the following
3 When dmax is very high then R&S* behaves like the original R&S, i.e. unconstrained speed.
Figure 5: (a) Two input measurements at 8 time instances. At t_4 a measurement for point p_1 is missing and at t_5 a measurement of point p_2 is missing. The figures show the results of (b) S&S using a high # max, (c) S&S using a low # max, (d) R&S, (e) C&V, and (f) the GOA tracker, respectively. In the figures the estimated points are shown as non-filled boxes and crosses indicate the true positions of the missing points.
parameters:
1. Number of feature point tracks (M).
2. Number of frames per point track (n).
3. Size (S) of the square space.
4. Uniform distributions for both dimensions of the initial point positions, between 0 and S.
5. Normal distribution for the magnitude of the initial point velocity vector.
6. Uniform distribution for the angle of the initial velocity vector, between 0 and 2π.
7. Normal distribution for the update of the velocity vector magnitude v^k_i from t_k to t_{k+1}.
8. Normal distribution for the update of the velocity vector angle of point i from t_k to t_{k+1}.
9. Probability of occlusion.
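A rough sketch of such a generator under the parameters listed above; all distribution parameters are placeholders, and scene-exit prevention and occlusion are handled in a simplified way.

```python
import numpy as np

def psmg(M=15, n=8, S=100.0, v_mean=5.0, v_sigma=1.0,
         du_sigma=0.5, da_sigma=0.2, p_occ=0.0, rng=None):
    """Generate M random point tracks over n frames in an S x S space.
    Returns tracks of shape (M, n, 2) and a boolean visibility mask (M, n)."""
    rng = np.random.default_rng(rng)
    pos = rng.uniform(0.0, S, size=(M, 2))            # initial positions
    speed = rng.normal(v_mean, v_sigma, size=M)       # initial speed magnitude
    angle = rng.uniform(0.0, 2.0 * np.pi, size=M)     # initial direction
    tracks = np.zeros((M, n, 2))
    visible = rng.uniform(size=(M, n)) >= p_occ       # occluded with probability p_occ
    tracks[:, 0] = pos
    for k in range(1, n):
        speed = np.abs(speed + rng.normal(0.0, du_sigma, size=M))  # magnitude update
        angle = angle + rng.normal(0.0, da_sigma, size=M)          # direction update
        step = np.stack([speed * np.cos(angle), speed * np.sin(angle)], axis=1)
        pos = np.clip(pos + step, 0.0, S)   # crude stand-in for "no scene exit"
        tracks[:, k] = pos
    return tracks, visible
```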
Figure 6: Example PSMG data set with 15 points during 8 time steps.
A number of different measures has been proposed to quantify the quality of performance, like the
distortion measure [19] and the link-based error and track-based error [27]. We use the track-based error
as in [27], which is defined as:

track error = (T_total − T_correct) / T_total,

where T_total is the total number of true tracks and T_correct is the number of completely correct tracks.
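For completeness, the measure in code; a track counts as correct only if every correspondence in it is correct.

```python
def track_error(true_tracks, estimated_tracks):
    """Fraction of tracks that are not reproduced exactly.
    Both arguments are lists of tuples of measurement indices, one per track."""
    correct = sum(1 for t, e in zip(true_tracks, estimated_tracks) if tuple(t) == tuple(e))
    return (len(true_tracks) - correct) / len(true_tracks)
```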
Some remarks about the experiments. First, in all cases the shown results are an average of 100 runs.
We did not incorporate significance levels because the minimal possible track error depends on the actual
presented data, hence, the appropriateness of the individual motion model. Nevertheless, the ranking and
relative quality were for each experiment the same as illustrated in the figures. Second, in this section we
ran the S&S algorithm only with a forward optimization loop, because otherwise the algorithm would not
converge (see also Section 4).
6.2.1 Tuning individual and combined models
To find an optimal combination of individual and combined motion models, we assume that the individual
models and combined models are independent. In order to find the best individual model for the PSMG generated
data, we ran experiments with the individual models im1, im2 and im3, together with the combined
model implemented in the GOA tracker. In Fig. 7a, we show the results of this experiment.
Clearly, the model im2 fits this generated data set best.
In order to identify the best combined model for this data set, we ran tests with both cm1 and cm2, shown in Fig. 7b. We chose w 1 equal to w 2, because we want to express that the lack of alternatives is equally important as the absence of competing track heads. cm2 with even lower w 1 and w 2 values becomes better until it finally equals cm1. From these tests we conclude that the smooth motion model (im2) together with the average deviation model (cm1) is the best combined modeling for PSMG data. Hence, we used these models in the remaining
experiments, if possible. That is, only the GOA tracker allows for combined model settings and can be
adjusted in that sense.
Figure 7: (a) Track error of the GOA tracker with the average deviation model (cm1), in combination with the nearest neighbor, smooth motion or proximal uniformity model. (b) Track error of the GOA tracker with the smooth motion model in combination with cm1 and cm2.
6.2.2 Variable density performance
To show how the algorithms perform with an increasing number of conflicts, we applied them to several
data sets with an increasing point density. To this end, we generated the data in a fixed sized 2-D space
and vary the number of point tracks. In Fig. 8a, we display the results of all algorithms. The figure clearly
shows that the GOA tracker performs best.
6.2.3 Variable velocity performance
Another experiment to test the tracking performance of the algorithms is varying the mean velocity and
keeping the number of points constant. In order to have reasonable speed variances with all mean velocities,
we scaled both the standard deviation of the initial velocity magnitude and that of the velocity magnitude update proportionally to the mean velocity. In addition, we enlarged the space in which the point tracks are generated, to prevent that mainly diagonal tracks are allowed. The ranking of the algorithms is similar to the variable density experiment and again the
GOA tracker performs better than all other schemes (see Fig. 8b).
Figure 8: (a) Results of the algorithms applied to increasingly dense point sets. (b) Track error as a function of the mean velocity.
6.2.4 Track continuation performance
In this experiment, we compared the track continuation performance of the R&S extrapolation scheme and
the slave measurements interpolation, as proposed in this paper. We left out the other two algorithms because
S&S does not really handle track continuation and C&V only allows very limited occlusion. In order
to properly compare the track extrapolation and the slave measurements interpolation, we implemented
them both in the GOA tracker. We tested the track continuation performance in a variable occlusion experiment, in which the probability of occlusion is varied. In Fig. 9a, we display the track error results of the GOA tracker with both track continuation schemes with either 50 or 100 points.
As illustrated in this figure, the slave measurements approach proposed in this paper clearly achieves
better track continuation results than the track extrapolation scheme as proposed by Rangarajan and Shah
[19]. The difference is larger with a higher probability of occlusion, because occlusion will then more often last for a number of consecutive frames, in which case the difference between interpolation
and extrapolation becomes apparent.
6.2.5 Variable volume performance
This test is directed towards measuring the computational efficiency of the different algorithms. Hereto,
we keep the point density constant while increasing the number of point tracks (and thus enlarging the
size of the 2-D space proportionally). Consequently, the correspondence problem remains equally difficult,
but the problem size grows. In Fig. 9b, we show the results with logarithmically scaled axes. The figure
shows that, with optimal d max , C&V is the fastest. Further, the computation time of the algorithms is widely
divergent but all algorithms have polynomial complexity. We list the polynomial orders in the summary of
the experiments in Section 6.3.
Figure 9: (a) Track error of the GOA tracker with either slave interpolation (Inter) or the R&S extrapolation scheme (Extra) in a variable occlusion experiment with 50 or 100 points. (b) Illustration of the efficiency of the algorithms in a variable volume experiment.
6.2.6 Sensitivity for d max parameter setting
As mentioned, so far all algorithms used the tuned and optimal settings of the d max parameter. In this sensitivity
experiment, we show the importance of the a priori knowledge about a reasonable value for this
parameter. To this end, we varied the d max parameter from the known true value up to a high upper limit (lower values than the true maximum speed are clearly not sensible). Fig. 10a clearly shows
that both S&S and R&S* are most sensitive to variations in this parameter. Remarkably, S&S performs
better when d max is set far too high. We expect that the ill initialization, together with the exchange optimization
causes this effect because every point exchange must obey the d max constraint. Both C&V and
the GOA tracker are hardly sensitive to d max variations (which implies that they do not take advantage of
it either). Computationally, especially the C&V algorithm is hampered by an incorrect or ignorant d max
value as Fig. 10b illustrates. Consequently, the GOA tracker is the fastest when d max is over 5 times the true
maximum speed.
Figure 10: Illustrates the sensitivity of the algorithms to d max variations. (a) shows the track error performance and (b) shows the computational performance.
6.3 Summary of experiments
In conclusion, for tracking a fixed number of points the GOA tracker is qualitatively the best algorithm
among the algorithms we presented, according to its track continuation handling in the first test and its
performance in all PSMG experiments. Moreover, it is hardly sensitive to the d max parameter setting. S&S
performs only slightly worse, when we used the optimal d max setting (d but it is an order of
magnitude slower than the GOA tracker. Moreover S&S did not perform well on the specially constructed
example, nor does it give interpolated positions of the missed points. The version of R&S*, with added d max
parameter and modified individual model, is efficient and qualitatively good as long as it has an accurate estimate
of d max . The sensitivity experiment shows that R&S* performs worst of all if this value is not known
(or not used as in the original R&S implementation). With (near) optimal maximum velocity setting, C&V
is the fastest. If this optimal value is not known (which is usually the case), then the efficiency of C&V degrades
rapidly. We should also note that, in our experiments, S&S performed consistently better than C&V,
which does not agree with the results reported in [27]. This is probably because in [27] a different d max
setting for S&S is used, for which we showed in Section 6.2.6 the S&S algorithm is quite sensitive. This
implies that S&S can not exploit the d max parameter effectively to handle missing and spurious measure-
ments. Finally, the variable occlusion experiment clearly showed that the slave measurements implement
track continuation better than the point extrapolation scheme [19]. In Table 1, we summarize the PSMG
experiments. The last column shows the polynomial order of complexity of the algorithms as derived from
the variable volume experiment.
Table 1: Summary of the PSMG experiments. For each algorithm the table lists the track error in the variable density experiment, the track error in the variable velocity experiment, the computation time in the variable volume experiment, and the polynomial order a of its complexity O(M^a).
7 Algorithm extension with self-initialization
In the problem statement in Section 2, the correspondences between the first two frames were assumed
to be known. In this section, we generalize the problem, by lifting this restriction and elaborate on how
self-initialization is incorporated in the GOA tracker.
Two algorithms we discussed have an integrated way of automatically initializing the point tracks. That
is, both S&S and C&V only use the measurement positions, for the initialization. R&S on the other hand
use additional information, i.e. the optical flow field, which is computed between the first two frames. We
advocate the integrated approach, because it is more generally applicable and it allows for optimizing the
initial correspondences using a number of frames as we proposed in the global motion model in Section 3.
Here, we propose to extend the GOA tracker with features of the S&S algorithm. After that, we demonstrate
the appropriateness of this extension and again analyze the parameter sensitivity of the algorithms that
support self-initialization.
7.1 Up-Down Greedy Optimal Assignment Tracker (GOA/up-down)
The S&S algorithm has a number of shortcomings, of which its computational performance has been shown
to be the most apparent. Also, as mentioned, we deliberately left out the bi-directional optimization which
quite often does not converge. However, for self-initialization the bi-directional optimization is essential.
We propose to modify the GOA tracker in the spirit of [21] and [22] by initializing the correspondences
between the first two frames using the described optimal algorithm to minimize C k with im1/cm1. After
these correspondences are made, we continue the optimization of the remaining frames (up) in the normal
way and additionally optimize the same frames backwards (down). Further forward and backward optimization
proved to be useless, because the optimization process already converged. The reason for this
fast convergence is that both the initial correspondences and the optimization scheme have been improved
considerably compared to S&S.
7.2 Self-initialization experiments
To test the performance of the algorithms that are capable of self-initializing the tracks, together with the
just described extended GOA/up-down tracker, we did another variable density experiment, and a sensitivity
experiment using the PSMG track generator. The individual models need not be tuned again because the
parameter settings of the PSMG are the same as in Section 6.2. This time, we left out S&S because of
serious convergence problems with their bi-directional optimization scheme, which is essential for self-
initialization. R&S does not implement self-initialization using only point measurements, so it can not be
applied within these experiments.
Although we did not discuss statistical motion correspondence techniques in detail in this paper, we
included the multiple hypothesis tracker (MHT) as described and implemented by Cox and Hingorani [3] in
this experiment in order to see how it relates to non-statistical greedy matching algorithms. We should note
that this MHT implementation is not the most efficient (for improvements see e.g. [15]), though qualitatively
equivalent to the state of the art of the statistical motion correspondence algorithms.
7.2.1 Variable density experiment
For this experiment we tuned the algorithms optimally for the given data sets. That is, both C&V and
GOA/up-down use the true d max . The (eight) parameters of the MHT (like the Kalman filter and Mahalanobis
distances), were tuned with a genetic algorithm, for which we used the track error as fitness
function.
Actually, the only difference with the variable density experiment in Section 6.2.2 is that here the initial correspondences are not given. Fig. 11a shows the performance of the algorithms. Clearly, GOA/up-down performs best and, remarkably, almost as well as when the initial correspondences were given. The performance of the MHT is similar to that of the GOA tracker until it seriously degrades when the number of points exceeds 50, see Fig. 11a. This can be explained by the fact that the parameters of the MHT were trained for (only) 50 points. We did not include more points, because the training was already very time consuming (> 2 days on a Silicon Graphics Onyx II). It is, however, striking that the GOA tracker also performs consistently better than the MHT even with fewer than 50 points, although the latter optimizes over several frames. Among other factors, this may be caused by the effective self-initialization scheme of the GOA tracker: the up-down scheme can be said to optimize the initial correspondences over the whole sequence when optimizing up, after which the remaining correspondences are established in the down phase.
Figure 11: (a) shows the track error as a function of the number of points (M) in a variable density experiment with self-initialization, and (b) shows the track error as a function of the d max setting.
7.2.2 Sensitivity experiment
When the correspondences for the initial frames are not given, we expect the algorithms to be more sensitive
to the d max setting. Namely, when the initial velocity is unconstrained, the greedy matching algorithms
easily make implausible initial choices, from which they cannot recover. To study this behavior, we did another sensitivity experiment for C&V and the GOA tracker and, additionally, an experiment to test the sensitivity of the MHT. We studied the MHT separately, because it has different parameters (and no d max parameter).
First, Fig. 11b indeed shows that for C&V a good estimate of d max is essential. GOA/up-down, however,
hardly suffers from lack of a priori knowledge concerning d max , which is partially because the global cost
was optimized for the initial frames. Moreover, the up/down optimization scheme can no longer be considered
purely greedy, because correspondences are reconsidered in the backward direction. Since the computation times of both algorithms were influenced in a similar way as when the initial correspondences were given, we did not include the figure here. We have to mention, however, that the computation time of C&V increased even faster, up to 10 times (11 sec.), because in this experiment the number of alternatives becomes much higher in the first frame. As a consequence, the GOA tracker was already the fastest when d max was set over
3 times the true maximum speed.
In order to fairly test the sensitivity of the MHT and to show the results for all parameters in the same
figure, we tested the performance in the range from 1/5 of the optimal setting to 10 times the optimal
setting of all essential parameters (10 runs per setting). Consequently, the results in Fig. 12a can easily be compared with those in Fig. 11a, for which the setting 5 (the true d max ) is also the optimal one. The figure clearly shows that there is only a small parameter range in which the performance is (sub)optimal. In particular, increasing or decreasing the Mahalanobis distance or the initial state variance parameter by one fifth results in a performance penalty of roughly a factor of two. The computation time also increases dramatically if the parameters are not properly set, as Fig. 12b shows. We plotted the names of the essential parameters in the figures, but refer to [3], [20]
for a complete description.
Figure 12: Parameter sensitivity experiment for the MHT (parameters: position variance x, position variance y, process variance, max. Mahalanobis distance of the velocity model, initial state variance; horizontal axis: value / (5 * optimal value)). (a) shows the track error as a function of the parameter variations and (b) shows the computation time.
8 Real data experiment: tracking seeds on a rotating dish
Our final experiment is based on real image data. In this experiment we put 80 black seeds on a white dish and rotated the dish with a more or less constant angular velocity, which implies the use of the smooth motion model (im2, parameter 0.1). The scene was recorded with a 25 Hz progressive scan camera using a 4 ms shutter speed, resulting in a 10-image video sequence with very little motion blur 4 . The segmentation of the images was consequently rather straightforward, i.e., in all 10 images all 80 seeds were detected and there were no false measurements. There was a large difference in the speed of the seeds, ranging from 1 pixel/s in the center to 42 pixel/s at the outer dish positions. As in Section 7, we tested only those algorithms that have self-initialization capability, and again we included the MHT. Clearly, in contrast with Section 7, in this experiment the point motion is strongly dependent. Since all algorithms are hampered equally, this experiment actually tests the general applicability of the algorithms 5 . To be able to run the MHT properly, we tuned its main parameters by applying a genetic algorithm. (The ground truth was established by manual inspection.) For this experiment we also added the S&S algorithm, because this time it converged consistently, that is, with different d max settings.
Fig. 13 shows the resulting tracks overlaid on the first image of the sequence. Only the GOA/up-down tracker was able to find all the true seed tracks, and the d max setting did not influence the results. Even GOA/up (not down!) was able to track the 80 seeds correctly over all 10 frames, regardless of the d max value. Not surprisingly, the C&V algorithm, which already proved to be sensitive to d max , suffers severely from the divergent seed speeds. The S&S algorithm, which is also sensitive to d max , again makes fewer errors when d max is relaxed. In general, the behavior of S&S turned out to depend greatly on the d max and # max settings.
Although the MHT is extensively tuned and it optimizes over several frames simultaneously, the MHT still
makes a few errors. Besides, the MHT is substantially slower than the other algorithms.
9 Discussion
Throughout this paper, we introduced a framework for motion modeling, and we presented the greedy
optimal assignment (GOA) tracker that we extended with self-initialization. In this section we discuss some
potential other extensions and improvements.
Although the tracking of a variable number of points conflicts with occlusion handling, it is certainly
a feature that should be considered as an extension to the GOA tracker. Among the described algorithms
we have seen two ways to approach this conflict of requirements, either by actually not implementing track
continuation (S&S) or by only allowing occlusion during a very limited number of frames (C&V). First,
4 The rotating dish sequence is available for download at http://www-ict.its.tudelft.nl/tracking/datasets/sequences/rotdish80.tgz.
5 One could, of course, argue that for this data set a rotational individual model or polar coordinates for the measurement
positions would fit better.
Figure 13: Results of applying the self-initializing algorithms to the rotating dish sequence, consisting of 10 frames with 80 seeds each. Panels include (a) S&S (25 errors, 7.4 sec.), (b) S&S with a different d max setting, and (c) C&V.
the GOA tracker can support track initiation and termination by replacing the slave measurements with the
phantom points as in S&S. Alternatively, the GOA scheme can be incorporated in the C&V algorithm. The
idea is that at each time instance, the GOA scheme is applied first to find corresponding measurements for
all point tracks that have been established so far. Then, the original C&V scheme links the remaining measurements
if possible. As a result the tracking features of C&V still apply and its performance increases 6 .
Further, to deal more effectively with the underlying physical motion, the order of the individual motion models could be increased, e.g., by modeling point acceleration. Clearly, extending the scope of the
individual models implies difficulties for the model initialization and the track continuation capabilities.
Finally, the scope over which the global matching is approximated can be extended. In this paper, we
approximated S(D) in a greedy sense, i.e., we only minimized the combined model over two successive frames. We already illustrated in Fig. 3 that extending the scope of this minimization would yield more plausible tracking results. Extending the scope, however, implies that we need to cope with an increasingly complex problem, to which the efficient Hungarian algorithm as such cannot be applied anymore.
In this paper we showed an adequate, non-statistical way to model the motion correspondence problem of tracking a fixed number of feature points. By fitting existing algorithms into this motion framework, we showed which approximations these algorithms make. An approximation that all described algorithms have in common is that they greedily match measurements to tracks. For this approximation, we proposed an optimal algorithm, the Greedy Optimal Assignment (GOA) tracker, which by construction qualitatively outperforms the other algorithms. The way the proposed algorithm handles detection errors and occlusion turned out to be effective and more accurate than that of the other described algorithms. Moreover, the experiments clearly show that its computational performance is among the fastest. The self-initializing version of the GOA tracker also turned out to be adequate and hardly sensitive to the maximum speed constraint (d max ) setting. In short, for the tracking of a fixed number of feature points, the proposed tracker has proven to be efficient and qualitatively the best.
Among the described algorithms the R&S algorithm is completely surpassed because it operates under
the same conditions, while the GOA tracker outperforms R&S both qualitatively and computationally.
6 We have already implemented this idea, but we did not include it in the experiments for the sake of clarity. With a fixed
number of points, its performance indeed rated in between GOA/up-down and the original C&V algorithm.
The S&S algorithm, which does not support track continuation, is computationally very demanding. The
major drawbacks of the C&V algorithm are its relatively poor performance, especially with respect to the
initialization, its restricted track continuation capability, and its sensitivity to the d max setting. Still, S&S
and C&V may be considered because both support the tracking of a variable number of points and C&V
can be very fast. In the previous section we indicated how their performance can be improved by incorporating
GOA features in these algorithms. In a number of experiments we included the statistical multiple
hypothesis tracker. Even though the MHT optimizes over several frames, which makes it computationally
demanding, it turned out that it does not perform better than the GOA tracker. Possible causes are the effective
initialization of the GOA tracker and the fact that the MHT models the tracking of a varying number of points, although we set the respective probabilities so as to indicate that the number of points is fixed. Most importantly, the MHT has quite a few parameters, for which the tuning proved to be far from trivial.
In conclusion, the proposed qualitative motion framework has proven to be an adequate modeling of
the motion correspondence problem. As such, it reveals a number of possibilities to achieve qualitative
improvements, ranging from more specialized individual models to S(D) approximations with an extended
temporal scope.
Acknowledgments
This work was supported by the foundation for Applied Sciences (STW). The authors would like to thank
Dr. Dmitry Chetverikov for the discussions on the details of his tracking algorithm and the anonymous
reviewers for their comments and suggestions.
--R
A review of statistical data association techniques for motion correspondence.
An efficient implementation of Reid's multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking.
On finding ranked assignments with applications to multi-target tracking and motion correspondence
A comparison of two algorithms for determining ranked assignments with application to multi-target tracking and motion correspondence
A fast method for finding the exact N-best hypotheses for multitarget tracking
A new algorithm for the generalized multidimensional assignment problem.
A generalized S-D assignment algorithm for multisensor-multitarget state estimation
Sonar tracking of multiple targets using joint probabilistic data association.
Adaptive constraints for feature tracking.
Determining optical flow.
Tracking feature points in time-varying images using an opportunistic selection approach.
The Hungarian method for solving the assignment problem.
Establishing motion-based feature point correspondence
Optimizing Murty's ranked assignment method.
Combinatorial problems in multitarget tracking - a comprehensive solution
Multidimensional assignments and multitarget tracking.
Data association in multi-frame processing
Establishing motion correspondence.
An algorithm for tracking multiple targets.
Feature point correspondence in the presence of occlusion.
Finding trajectories of feature points in a monocular image sequence.
Computational experiences with hot starts for a moving window implementation of track maintenance.
Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surface
The Interpretation of Visual Motion.
A fast and robust point tracking algorithm.
--TR
--CTR
Meghna Singh , Mrinal K. Mandal , Anup Basu, Gaussian and Laplacian of Gaussian weighting functions for robust feature based tracking, Pattern Recognition Letters, v.26 n.13, p.1995-2005, 1 October 2005
C. J. Veenman , M. J. T. Reinders , E. Backer, Establishing motion correspondence using extended temporal scope, Artificial Intelligence, v.145 n.1-2, p.227-243, April
Khurram Shafique , Mubarak Shah, A Noniterative Greedy Algorithm for Multiframe Point Correspondence, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.1, p.51-65, January 2005
Alper Yilmaz , Omar Javed , Mubarak Shah, Object tracking: A survey, ACM Computing Surveys (CSUR), v.38 n.4, p.13-es, 2006 | target tracking;algorithms;motion correspondence;feature point tracking |
365735 | A framework for symmetric band reduction. | We develop an algorithmic framework for reducing the bandwidth of symmetric matrices via orthogonal similarity transformations. This framework includes the reduction of full matrices to banded or tridiagonal form and the reduction of banded matrices to narrower banded or tridiagonal form, possibly in multiple steps. Our framework leads to algorithms that require fewer floating-point operations than do standard algorithms, if only the eigenvalues are required. In addition, it allows for space-time tradeoffs and enables or increases the use of blocked transformations. | INTRODUCTION
Reduction to tridiagonal form is a major step in eigenvalue computations for symmetric
matrices. If the matrix is full, the conventional Householder tridiagonalization approach (e.g., [Golub and Van Loan 1989]) or a block variant thereof [Dongarra et al. 1989] is usually considered the method of choice.
However, for banded matrices this approach is not optimal if the semibandwidth b (the number of the outermost nonzero off-diagonal) is very small compared with the matrix dimension n, since the matrix being reduced has completely filled in after the first few reduction steps. It is well known that the algorithms of Rutishauser [1963] and Schwarz
This work was supported by the Advanced Research Projects Agency, under contracts
DM28E04120 and P-95006. Bischof also received support from the Mathematical, Information,
and Computational Sciences Division subprogram of the Office of Computational and Technology
Research, U.S. Department of Energy, under contract W-31-109-Eng-38. Lang also received
support from the Deutsche Forschungsgemeinschaft, Geschäftszeichen Fr 755/6-1 and La 734/2-1.
This work was partly performed while X. Sun was a postdoctoral associate with the Mathematics
and Computer Science Division at Argonne National Laboratory.
Argonne Preprint ANL/MCS-P586-0496, submitted to ACM Trans. Math. Software
Figure 1. Rutishauser's tridiagonalization with Householder transformations.
[1968] (called RS-algorithms in the rest of the paper) are more economical than the
standard approach when b ≪ n. In these algorithms, elements are annihilated one
at a time by Givens rotations; Rutishauser's algorithm annihilates the elements by
diagonals, whereas Schwarz's algorithm proceeds by columns. Each Givens rotation
generates a fill-in element outside of the current band, and the fill-in is chased out
by a sequence of Givens rotations before more fill-in is introduced.
Rutishauser [1963] also suggested another band reduction scheme based on Householder
transformations that annihilates all elements of the current column
instead of only one. Rutishauser used an analogous scheme to chase the triangular
bulge generated by the reduction with a sequence of QR factorizations, as shown in
Figure
1. However, because of the significant work involved in chasing the triangular
bulges, this algorithm is not competitive with the rotation-based RS-algorithms.
The Schwarz algorithm is the basis of the band reduction implementations in EISPACK
[Smith et al. 1976; Garbow et al. 1977]. It can be vectorized along the
diagonal [Kaufman 1984], and this variant is the basis of the band reduction algorithm
in LAPACK [Anderson et al. 1995].
The RS-algorithms require storage for one extra subdiagonal, and Rutishauser's Householder approach requires storage for b − 1 extra subdiagonals. To assess the storage requirements of various algorithms, we introduce the concept of working semibandwidth. The working semibandwidth of an algorithm for a symmetric band matrix is the number of sub(super)diagonals accessed during the reduction. For instance, the working semibandwidth is b + 1 for the RS-algorithms and 2b − 1 for Rutishauser's second algorithm with Householder transformations.
In both algorithmic approaches, each reduction step has two parts:
-annihilation of one or several elements, and
-bulge chasing to restore the banded form.
Either way, the bulk of the computation is spent in bulge chasing. The tridiagonalization algorithm described in [Murata and Horikoshi 1975] and, for parallel computers, in [Lang 1993] (called MHL-algorithm in the following) improves on the bulge chasing strategy. It employs Householder transformations to eliminate all but the first of the subdiagonal entries in the current column, but instead of chasing out the whole triangular bulge (only to have it reappear in the next step) it chases only the first column of the bulges. These are the columns that (if not removed) would increase the working semibandwidth in the next step. The working semibandwidth for this algorithm is also 2b − 1. By leaving the rest of the bulges in place, the algorithm requires roughly the same number of floating-point operations (flops) as do the RS-algorithms if the latter are implemented with Givens rotations, and 50% more flops if the RS-algorithms are based on fast Givens rotations. Using Householder transformations considerably improves data locality, as compared with the rotation-based algorithms. On the other hand, the RS-algorithms require less storage and may still be preferable if storage is tight.
In this paper, we generalize the ideas behind the RS-algorithms and the MHL-
algorithm. We develop a band reduction algorithm that eliminates d subdiagonals
of a symmetric banded matrix with semibandwidth b (d < b), in a fashion akin to the MHL tridiagonalization algorithm. Then, like the Rutishauser algorithm, the band reduction algorithm is repeatedly used until the reduced matrix is tridiagonal. If d = b − 1, it is the MHL-algorithm; and if d = 1 is used for each reduction step, it results in the Rutishauser algorithm. However, d need not be chosen this way; indeed, exploiting the freedom we have in choosing d leads to a class of algorithms for
banded reduction and tridiagonalization with favorable computational properties.
In particular, we can derive algorithms with
(1) minimum algorithmic complexity,
(2) minimum algorithmic complexity subject to limited storage, and
(3) enhanced scope for employing Level 3 BLAS kernels through blocked orthogonal
reductions.
Setting b = n − 1 and d = b − 1 results in the (nonblocked) Householder tridiagonalization for full matrices. Alternatively, we can first reduce the matrix to banded form and then tridiagonalize the resulting band matrix. This latter approach significantly improves the data locality because almost all the computations can be done with the Level 3 BLAS, in contrast to the blocked tridiagonalization [Dongarra et al. 1989], where half of the computation is spent in matrix-vector products.
This paper extends the work of [Bischof and Sun 1992].
The paper is organized as follows. In the next section, we introduce our framework for band reduction of symmetric matrices. In Section 3 we show that several
known tridiagonalization algorithms can be interpreted as instances of this frame-
work. Then we derive new algorithms that are optimal with respect to either
computational cost or space complexity. In Section 4 we discuss several techniques
for blocking the update of an orthogonal matrix U ; this is required for eigenvector
computations. Then, in Section 5 we present some experimental results. Section 6
sums up our findings.
2. A FRAMEWORK FOR BAND REDUCTION
In this section we describe a framework for band reduction of symmetric matrices.
The basic idea is to repeatedly remove sets of off-diagonals. Therefore, we first
present an algorithm for peeling off some diagonals from a banded matrix.
Suppose an n-by-n symmetric band matrix A with semibandwidth b < n is to be reduced to a band matrix with semibandwidth ~b = b − d, where 1 ≤ d < b. That is, we want to eliminate the outermost d of the b nonzero sub(super)diagonals.
Figure 2. Annihilation and chasing in the first sweep of the band reduction algorithm. "QR" stands for performing a QR decomposition, "Pre" and "Post" denote pre- and post-multiplication with Q^T and Q, resp., and "Sym" indicates a symmetric update (Pre and Post). The last picture shows the block partitioning before the second sweep.
Because of symmetry, it suffices to access either the upper or the lower triangle of
the matrix A; we will focus on the latter case.
Our algorithm is based on an "annihilate and chase" strategy, similar to the RS-
and MHL-algorithms for tridiagonalization. As in the MHL-algorithm, Householder
transformations are used to annihilate unwanted elements, but in the case ~b > 1 we are able to aggregate n_b ≤ ~b of the transformations into the WY or compact WY
representation [Bischof and Van Loan 1987; Schreiber and Van Loan 1989]. Thus,
data locality is further improved.
First, the d outmost subdiagonals are annihilated from the first n b columns of
A. This step can be done with a QR decomposition of an h × n_b, h = d + n_b, upper trapezoidal block of A, as shown in the first picture of Figure 2. Then the WY representation Q = I + WY^T of the transformation matrix is generated. To complete the similarity transformation, we must apply this block transform from the left and from the right to A. This requires applying Q^T from the left to an h × (~b − n_b) block of A ("Pre"), from both sides to an h × h lower triangular block ("Sym"), and from the right to a b × h block ("Post").
The "Post" transformation generates fill-ins in d diagonals below the band. The
first n b columns of the fill-in are removed by another QR decomposition (second
picture in Figure 2), with "Pre", "Sym", and "Post" to complete the similarity
transformation. The process is then repeated on the newly generated fill-in, and so
on. Each step amounts to chasing n b columns of the fill-in down along the diagonal
by b diagonal elements until they are pushed off the matrix. Then we can start the
second "annihilate and chase" sweep.
Each sweep starts with a matrix that has the following properties:
-The remainder of the matrix (i.e., the current trailing matrix) is block tridiagonal, with all diagonal blocks but the last one being of order b and the last one being of order ≤ b; see the last picture in Figure 2.
-Every subdiagonal block is upper triangular in its first ~b columns.
-The matrix is banded with semibandwidth b + d.
Every similarity transformation within the sweep
-maintains the working semibandwidth b + d,
-involves the same number h of rows and columns, and
-restores the form described above in one subdiagonal block, while destroying it
in the next subdiagonal block.
We arrive at the following algorithm.
Algorithm 1. One-step band reduction bandr1(n, A, b, d, n_b)
Input. An n × n symmetric matrix A with semibandwidth b < n, the number d of subdiagonals to be eliminated, 1 ≤ d < b, and a block size n_b, 1 ≤ n_b ≤ b − d.
Output. An n × n symmetric matrix with semibandwidth b − d.
for each group of n_b columns of the band (one "annihilate and chase" sweep)
  while there is a block to annihilate (first the leading block of the sweep, then each newly created fill-in block, until the fill-in has been pushed off the matrix)
    QR: Perform a QR decomposition of the current h × n_b block B, h = d + n_b, and replace B by the triangular factor R.
    Pre: Replace the adjacent block B_Pre by Q^T B_Pre.
    Sym: Replace the h × h lower triangular block B_Sym by Q^T B_Sym Q.
    Post: Replace the b × h block B_Post by B_Post Q (this creates the fill-in block processed in the next pass of the loop).
  end while
end for
(The precise positions of the blocks B, B_Pre, B_Sym, and B_Post follow the partitioning shown in Figure 2.)
The working semibandwidth of the algorithm is b+d. Given the one-step band reduction
algorithm, we can now derive a framework for band reduction in a straight-forward
fashion by "peeling off" subdiagonals in chunks.
Algorithm 2. Multistep band reduction bandr(n, A, b, {d^(i)}, {n_b^(i)})
Input. An n × n symmetric matrix A with semibandwidth b < n, a sequence of positive integers d^(1), ..., d^(k) with d = d^(1) + ... + d^(k) ≤ b − 1, and a sequence n_b^(1), ..., n_b^(k) of block sizes with 1 ≤ n_b^(i) ≤ b^(i) − d^(i), where b^(i) = b − (d^(1) + ... + d^(i−1)) denotes the semibandwidth before the ith reduction step.
Output. An n × n symmetric matrix with semibandwidth b − d.
for i = 1 to k
  call bandr1(n, A, b^(i), d^(i), n_b^(i))
end for
The working semibandwidth for the multistep algorithm is
max_{1 ≤ i ≤ k} ( b^(i) + d^(i) ) = max_{1 ≤ i ≤ k} ( b − Σ_{j<i} d^(j) + d^(i) ),
which is b + d^(1) if the d^(i) satisfy
d^(i) ≤ d^(1) + Σ_{j<i} d^(j) for i = 2, ..., k. (1)
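To illustrate, here is a small Python helper (our own, with invented function names) that computes the working semibandwidth of a band difference sequence and checks the above condition:

```python
def working_semibandwidth(b, ds):
    """Working semibandwidth of Algorithm 2 for ds = [d1, d2, ...]: the i-th
    step starts from semibandwidth b - (d1 + ... + d_{i-1}) and temporarily
    needs d_i additional subdiagonals."""
    removed, worst = 0, 0
    for d in ds:
        worst = max(worst, b - removed + d)
        removed += d
    return worst

def satisfies_condition_1(ds):
    """Check d_i <= d_1 + sum_{j<i} d_j for i >= 2, which guarantees that the
    working semibandwidth equals b + d_1."""
    return all(d <= ds[0] + sum(ds[:i]) for i, d in enumerate(ds) if i > 0)

# example: reducing b = 8 to tridiagonal form
print(working_semibandwidth(8, [7]))        # 15 = 2*8 - 1 (MHL)
print(working_semibandwidth(8, [1, 2, 4]))  # 9 = 8 + 1
print(satisfies_condition_1([1, 2, 4]))     # True
```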
There is also a class of systolic array algorithms that use Givens rotations to
remove several outer diagonals at a time [Bojanczyk and Brent 1987; Ipsen 1984;
Schreiber 1990]. Here, d^(i) is chosen as large as b^(i)/2 [Bojanczyk and Brent 1987]
in order to increase the parallel scope per systolic operation and hence minimize
the total number of systolic operations.
3. INSTANCES OF THE FRAMEWORK
In this section we discuss several instances of Algorithm 2, including the (non-
blocked) Householder tridiagonalization, Rutishauser's algorithm, and the MHL-
algorithm for tridiagonalizing symmetric band matrices. In addition to these, the
multistep framework allows for new tridiagonalization methods featuring lower flops
count and/or better data locality.
3.1 One-step Tridiagonalization of Full Symmetric Matrices
A full symmetric matrix can be tridiagonalized with bandr(n, A, n − 1, {n − 2}, {1}). Note that ~b = 1 and n_b = 1. Then, the QR decomposition
and the Sym step of Algorithm 1 reduce to determining and applying a suitable
Householder transformation, while the Pre and Post steps vanish. Thus, we arrive
at the nonblocked standard Householder tridiagonalization.
3.2 Two-step Tridiagonalization of Full Symmetric Matrices
Another way to tridiagonalize a full symmetric matrix is the two-step sequence bandr(n, A, n − 1, {n − 1 − b, b − 1}, {n_b^(1), 1}), where b is some intermediate semibandwidth and n_b^(1) ≤ ~b = b. That is, we first reduce A to
banded form and then tridiagonalize the resulting banded matrix.
In contrast to the blocked Householder tridiagonalization [Dongarra et al. 1989],
where one half of the approximately 4/3 n^3 flops for the reduction of A is confined to matrix-vector products, almost all the operations in the reduction to banded form can be done within the Level 3 BLAS. For b ≪ n this first reduction constitutes the vast majority of the flops. Tridiagonalizing the banded matrix requires roughly 6bn^2
flops, which can be done with Level 2 BLAS.
Therefore, the two-step tridiagonalization may be superior on machines with a
distinct memory hierarchy (see Table I in Section 5.1).
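The following back-of-the-envelope Python snippet (ours) illustrates this point using only the leading-order flop counts quoted above; it estimates the fraction of the work that is amenable to the Level 3 BLAS in the two-step approach, compared with roughly one half for the one-step blocked tridiagonalization.

```python
def level3_fraction_two_step(n, b):
    # ~4/3 n^3 flops for the reduction to semibandwidth b (almost all Level 3,
    # assuming b << n), plus ~6 b n^2 Level 2 flops for tridiagonalizing the band
    full_to_band = 4.0 / 3.0 * n**3
    band_to_tri = 6.0 * b * n**2
    return full_to_band / (full_to_band + band_to_tri)

for n, b in [(1000, 32), (2000, 32), (4000, 64)]:
    print(n, b, round(level3_fraction_two_step(n, b), 3))   # 0.874, 0.933, 0.933
```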
3.3 One-step Tridiagonalization of Symmetric Band Matrices
As for full matrices, the simplest way to tridiagonalize a matrix with semibandwidth
b is the one-step sequence bandr(n, A, b, {b − 1}, {1}). Again, the
QR decomposition and the Sym steps reduce to determining and applying single
Householder transformations, and the Pre and Post steps vanish. This one-step
sequence is equivalent to the MHL-algorithm.
3.4 Tridiagonalization by Peeling off Single Diagonals
The sequence bandr(n, A, b, {1, 1, ..., 1}, {1, 1, ..., 1}) with b − 1 reduction steps tridiagonalizes a banded matrix by repeatedly peeling off single diagonals. This
corresponds to Rutishauser's algorithm, except that the rotations are replaced with
length-2 Householder transformations.
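To make the "peel off one diagonal and chase" pattern concrete, here is a small Python sketch (our own illustration, not the authors' code) of one such sweep on a dense symmetric array; it uses Givens rotations, as in the original RS-algorithms, rather than length-2 Householder transformations.

```python
import numpy as np

def _rotate_sym(A, p, q, c, s):
    # symmetric similarity update with the rotation acting on rows/columns p, q
    rp, rq = c * A[p, :] + s * A[q, :], -s * A[p, :] + c * A[q, :]
    A[p, :], A[q, :] = rp, rq
    cp, cq = c * A[:, p] + s * A[:, q], -s * A[:, p] + c * A[:, q]
    A[:, p], A[:, q] = cp, cq

def peel_one_diagonal(A, b):
    """One d = 1 sweep: remove the b-th sub/superdiagonal of the symmetric
    matrix A (held as a dense array, semibandwidth b >= 2), chasing each
    fill-in element off the band before the next one is created."""
    n = A.shape[0]
    for j in range(n - b):
        i, k = j + b, j                          # entry A[i, k] to annihilate
        while True:
            r = np.hypot(A[i - 1, k], A[i, k])
            c, s = (1.0, 0.0) if r == 0.0 else (A[i - 1, k] / r, A[i, k] / r)
            _rotate_sym(A, i - 1, i, c, s)
            A[i, k] = A[k, i] = 0.0              # remove rounding residue
            if i + b > n - 1:                    # fill-in would fall off the matrix
                break
            k, i = i - 1, i + b                  # position of the new fill-in

# usage: a random symmetric matrix with semibandwidth 4, reduced to 3
rng = np.random.default_rng(0)
n, b = 12, 4
A = rng.standard_normal((n, n))
A = np.triu(np.tril(A + A.T, b), -b)
peel_one_diagonal(A, b)
print(np.abs(np.diag(A, -b)).max())              # 0.0: the outermost diagonal is gone
```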
Figure 3. Nonzero structure of the matrices W and Y of the blocked Householder transformation.
3.5 Optimal Tridiagonalization Algorithms for Band Matrices
In the following, we derive tridiagonalization algorithms that have a minimum flops count for a given working semibandwidth, that is,
minimize the number of flops to tridiagonalize an n × n banded matrix with semibandwidth b,
subject to working semibandwidth ≤ s, where b + 1 ≤ s ≤ 2b − 1. (2)
For the reduction of banded matrices, blocking the transformations always significantly
increases the flops count. Figure 3 shows the nonzero pattern of the matrices W and Y that result from aggregating n_b length-(d+1) Householder transformations into a blocked Householder transformation. The nonzeros in Y form a parallelogram, while W is upper trapezoidal. Multiplying W with a block of m-element columns of A, and then multiplying the result with Y^T, together cost more flops than applying the n_b reflections individually, even if the multiplication routine GEMM is able to take full advantage of zeros. Thus, in the banded context, using the WY representation increases the overall flops count in two ways. First, the W and Y factors must be generated. Second, applying the blocked Householder transform requires more than the 4n_b(d+1)m operations that would be needed for applying the n_b single length-(d+1) transformations aggregated in W and Y.
The same argument also applies to the compact WY representation Q = I + YTY^T [Schreiber and Van Loan 1989]. In addition, this blocking technique requires one more matrix multiplication, involving the triangular n_b × n_b factor T. The cost of this additional multiplication is not negligible in the banded case, in contrast to the reduction of full matrices, where the average length of the Householder vectors significantly exceeds n_b. Therefore, we prefer the "standard" WY representation for blocking
the Householder transformations.
Because this section focuses on minimizing the flops count, we consider only
nonblocked reduction algorithms (i.e., n_b^(i) = 1). On machines with a distinct
memory hierarchy, however, the higher performance of the Level 3 BLAS may
more than compensate for the overhead introduced by blocking the transforma-
tions. Thus, both flops count and BLAS performance should be taken into account
in order to minimize the execution time on such machines. For clarity in our analysis, we omitted the resulting weighting of the various algorithmic components, but the ideas presented here can easily be extended to this more general analysis.
In the nonblocked case, the arithmetic cost of Algorithm 1 can be expressed as a function cost(n, b, d) of the matrix dimension n, the semibandwidth b, and the number d of eliminated subdiagonals. (3)
For any band difference sequence {d^(i)}, Algorithm 2 therefore requires the sum of cost(n, b^(i), d^(i)) over all steps i flops.
Given a limit on s, we can use dynamic programming [Aho et al. 1983] to determine an optimal sequence {d^(i)} from the cost function given in (3). By taking
storage requirements into account, we allow for space tradeoffs to best use the
available memory.
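The dynamic program is straightforward; the following Python sketch (our own illustration) takes the cost function cost(n, b, d) as a parameter and returns a flop-minimal band difference sequence subject to the working-semibandwidth limit s. The toy cost model at the end is an assumption for demonstration only and is not formula (3).

```python
from functools import lru_cache

def optimal_sequence(n, b, s, cost):
    """Return (total_flops, [d1, d2, ...]) minimizing sum_i cost(n, b_i, d_i)
    over all band difference sequences that reduce semibandwidth b to 1
    (tridiagonal form) without ever exceeding the working semibandwidth s."""
    assert s >= b + 1
    @lru_cache(maxsize=None)
    def best(bi):
        if bi == 1:
            return 0.0, ()
        best_val, best_seq = float("inf"), ()
        for d in range(1, bi):               # eliminate d of the bi subdiagonals
            if bi + d > s:                   # would violate the storage limit
                continue
            val, seq = best(bi - d)
            val += cost(n, bi, d)
            if val < best_val:
                best_val, best_seq = val, (d,) + seq
        return best_val, best_seq
    total, seq = best(b)
    return total, list(seq)

# toy cost model for illustration only (NOT formula (3)): proportional to the
# area swept when removing d of the bi subdiagonals
toy_cost = lambda n, bi, d: 4.0 * (d + 1) * (bi + 1) * n
print(optimal_sequence(1000, 8, 9, toy_cost))    # minimum-storage case s = b + 1
```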
3.5.1 No Storage Constraints. We first consider Problem (2) with s ≥ 2b − 1. Here there are effectively no storage constraints, since the maximum working semibandwidth of Algorithm 2 is 2b − 1. However, the optimal sequence {d^(i)} is quite different from
the one-step sequence of the MHL-algorithm.
For example, for a 50,000-by-50,000 symmetric matrix with semibandwidth 300, the optimal sequence found by the dynamic program requires 3.48·10^12 floating-point operations, and its working semibandwidth is 310. In contrast, the MHL-algorithm requires considerably more flops, and its working semibandwidth is 599. We also note that constant-stride sequences come very close to this optimum, with working semibandwidths of 316 and 332, respectively. Hence a constant-stride sequence seems to be just as good a choice as the optimal one from a practical point of view, and saves the dynamic programming overhead.
3.5.2 Minimum Storage. Now consider the other extreme case of the constraint in Problem (2), namely s = b + 1. That is, we have space for at most one additional subdiagonal. Even for small b, the MHL-algorithm is not among the candidates, since its working semibandwidth is 2b − 1. A candidate d-sequence for this case should satisfy the condition (1) with d^(1) = 1. The constant sequence d^(i) = 1 (i.e., Rutishauser's algorithm) satisfies this condition. Instead, we suggest the sequence d^(i) = 2^(i−1). We call it the doubling-stride sequence, since the bandwidth reduction size doubles in each round. For the example in the preceding subsection, the RS-algorithms require more flops than the doubling-stride algorithm, which needs 3.90·10^12 flops.
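A small sketch (ours) of how such a doubling-stride sequence can be generated for an arbitrary initial semibandwidth b; the last entry is truncated so that the sequence sums to b − 1, which keeps condition (1) satisfied.

```python
def doubling_stride(b):
    """Band difference sequence 1, 2, 4, ... (last entry truncated) that
    reduces semibandwidth b to 1 using only one extra subdiagonal of storage."""
    ds, total, nxt = [], 0, 1
    while total < b - 1:
        d = min(nxt, b - 1 - total)
        assert d <= 1 + total            # condition (1) with d^(1) = 1
        ds.append(d)
        total += d
        nxt *= 2
    return ds

print(doubling_stride(300))   # [1, 2, 4, 8, 16, 32, 64, 128, 44]
```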
Figure 4. Data area visited by the MHL-algorithm (rank-1 row and column updates and symmetric rank-2 updates).
3.5.3 Understanding Optimality. Rutishauser's algorithm peels off the subdiagonals one by one and requires minimum storage, but it is not optimal among the algorithms with minimum storage. On the other hand, although the MHL-algorithm eliminates all subdiagonals in one step, it is also not optimal when b is large. In the following, we give an intuitive explanation of why the complexity of our scheme is superior to both these approaches.
First, let us compare the MHL-algorithm and Algorithm 2 with a two-step sequence {d, b − d − 1}. During the bulge chasing following the reduction in column 1, the data
area accessed by the MHL-algorithm with rank-1 row and/or column updating
and symmetric rank-2 updating is shown in Figure 4. The data area accessed by
Algorithm 2 is shown in Figure 5. In comparing different bandreduction sequences,
we take the rank-1 updating as basic unit of computation and examine the total
data area visited by each algorithm for the purpose of rank-1 updating. Since a
update applied to an m \Theta n matrix requires approximately 4mn flops, the
area involved in a rank-1 update is a good measure of complexity. The symmetric
rank-2 update of a triangular n \Theta n matrix is as expensive as a rank-1 update of a
full matrix of that size; both require about 4n 2 flops.
For the MHL-algorithm, the total area visited with rank-1 updates is
For the two successive band reductions, it is
a
Denoting 1)=b, we obtain as the ratio
aMHL
which takes its minimum ae - 11=12 at ffi Therefore, the two-step reduction
can save some 8% of the flops, as compared with the MHL-algorithm. Of course,
the same idea can then be applied recursively on the tridiagonal reduction for the
matrix with reduced semibandwidth b \Gamma d.
Figure 5. Data area visited in the two steps bandr1(n, A, b, d) and bandr1(n, A, b − d, b − d − 1) of Algorithm 2.
Figure 6. Data area visited by Rutishauser's algorithm in the "elimination and bulge chasing" rounds for the three outermost elements a_91, a_81, and a_71 of the first column (light, medium, and dark grey shading, resp.).
Let us consider now how often a row or column is repeatedly involved in a reduction
or chasing step. The data area visited by Rutishauser's algorithm for
annihilating 3 elements in the first column in 3 rounds is shown in Figure 6. We
see in particular that the total area visited in the first b × b block is almost twice as big as the area visited by Algorithm 1 with d = 3, as a result of the revisiting of the first
row and column in the last visited area.
Thus, in comparison with Rutishauser's and the MHL-algorithm, our algorithm
can be interpreted as balancing the counteracting goals of
-decreasing the number of times a column is revisited (by the use of Householder
transformations), and
-decreasing the area involved in updates (by peeling off subdiagonals in several
chunks).
3.6 Bandwidth Reduction
In some contexts it is not necessary to fully reduce the banded matrix to tridiagonal
form. For example, in the invariant subspace decomposition approach (ISDA) for
eigensystem computations [Lederman et al. 1991], the spectrum of a matrix A is
condensed into two narrow clusters by repeatedly applying a function f to the
matrix. If f is a polynomial of degree 3, each application of f roughly triples A's
bandwidth. Therefore, A will be full after a moderate number of iterations if no
countermeasures are taken. To prevent this situation, the bandwidth of A can be
periodically reduced to a "reasonable" value after a few applications of f .
This bandwidth reduction can also be done by using Algorithm 2, either in one
or multiple reduction steps.
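A small numerical illustration (ours; the cubic polynomial below is only a placeholder, not the actual ISDA iteration function) of how quickly the bandwidth grows under such an iteration:

```python
import numpy as np

def semibandwidth(A):
    rows, cols = np.nonzero(np.tril(A, -1))
    return int((rows - cols).max()) if rows.size else 0

rng = np.random.default_rng(0)
n, b = 200, 5
A = rng.standard_normal((n, n))
A = np.triu(np.tril(A + A.T, b), -b)        # symmetric, semibandwidth 5
for _ in range(3):
    A = 3 * A @ A @ A - 2 * A               # placeholder degree-3 polynomial f(A)
    print(semibandwidth(A))                 # 15, 45, 135: triples each time
```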
4. AGGREGATING THE TRANSFORMATIONS
If the eigenvectors of the matrix A are required, too, then all the transformations
from Algorithm 1 must also be applied to another matrix, say, an n × n matrix U. For banded matrices A, the update of U dominates the floating-point complexity (2n^3 flops for updating U versus 6bn^2 for the reduction of A). Therefore, we should
strive to maximize the use of BLAS 3 kernels in the update of U to decrease the
total time.
In this section we discuss several techniques for updating U with blocked Householder
transformations. Some of these methods are tailored to the case ~b = 1, where the transformations of A cannot be blocked.
4.1 On-the-Fly Update
The update can easily be incorporated in the reduction by inserting the lines
if the aggregate transformation matrix U is required
then replace the affected block of h columns of U by its product with Q
between the QR and Pre steps of Algorithm 1. That is, each transformation is applied to U as soon as it is generated and applied to A (on-the-fly update).
If the matrix is not reduced to tridiagonal form, i.e., ~b > 1, then the work
on A and U can be done with blocked Householder transformations, thus enabling
the use of the Level 3 BLAS.
4.2 Backward Accumulation
For tridiagonalization, however, ~b = n_b = 1, which means that the work
on A cannot be done in a blocked fashion.
At first glance, this fact seems also to preclude the use of Level 3 BLAS in the
update of U . Fortunately, however, the obstacle can be circumvented by decoupling
the work on U from the reduction of A.
Let us first consider the tridiagonalization of a full symmetric matrix A. As in
the LAPACK routine SYTRD, the update of U can be delayed until the reduction
of A is completed. Then, the Householder transformations are aggregated into
blocked Householder transforms with an arbitrary block size n_b^U, and these are applied to U. In addition to enabling the use of the Level 3 BLAS, the decoupling of the update from the reduction allows reducing the flop count by reversing the order of the
transformations (backward accumulation).
For complexity reasons, the backward accumulation technique should also be
used in the reduction from full to banded form, even if the on-the-fly update can
also be done with blocked Householder transforms.
4.3 Update in the Tridiagonalization of Banded Matrices
When a full matrix is reduced to either banded or tridiagonal form, the Householder
vectors can be stored conveniently in the zeroed-out portions of A and an additional vector τ. In the tridiagonalization of banded matrices, this strategy is no longer possible: eliminating one length-(b − 1) column of the band requires about n/b length-b Householder vectors, because of the bulge chasing. Therefore, it is impractical to delay the update of U until A is completely tridiagonalized. We use another technique [Bischof et al. 1994] in this case.
Let H^j_k denote the kth Householder transformation that is generated in the jth sweep of Algorithm 1. That is, H^j_1 eliminates the first column of the remaining band, and H^j_2, H^j_3, ... are generated during the bulge chasing.
During the reduction, the transformations of each sweep must be determined and applied to A in the canonical order H^j_1, H^j_2, H^j_3, ..., because each H^j_{k+1} depends on data modified in the Post step of H^j_k. Once the transformations are known, this dependence no longer exists. Since the transformations from one sweep involve disjoint sets of U's columns, they may be applied to U in any order (see Figure 7). We are, however, not entirely free to mix transformations from different sweeps: H^{j+1}_k must be preceded by H^j_k and H^j_{k+1}, since it affects columns that are modified by these two transformations in sweep j.
Figure 7. Interdependence of the Householder transformations H^j_k for the work on A (left picture) and on U (right picture). An arrow H → ~H indicates that ~H cannot be determined and applied to A (cannot be applied to U) until H has been applied to A (to U).
To make use of this additional freedom, we delay the work on U until a certain number n_b^U of reduction sweeps are completed. Then the update of U is done bottom up: the transformations are applied group by group with decreasing index k, and within each group in the order of the sweeps, so that the transformations of Figure 7 would be applied starting with H^1_6 and ending with the k = 1 group. This order preserves the inter-sweep dependence mentioned above. In addition, the n_b^U transformations H^j_k with the same index k can be aggregated into a blocked Householder transformation. In Figure 8, the transformations contributing to the same block transformation are hatched identically.
Figure 8. Columns of U affected by each transformation H^j_k of the first four reduction sweeps. In this example, each sweep comprises 6 transformations. The transformations with the same hatching pattern can be aggregated into a blocked Householder transformation.
A similar technique can also be used, for example, in the QR algorithm for computing the eigensystem of a symmetric tridiagonal matrix [Lang 1995]. It allows updating the eigenvector matrix with matrix-matrix products instead of single rotations.
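As an illustration of why the aggregation pays off in the update of U, the following Python sketch (ours; the routine names are not from the SBR toolbox) builds the WY representation of n_b Householder reflections and applies it to a block of columns of U with two matrix-matrix products, followed by a small consistency check against applying the reflections one at a time.

```python
import numpy as np

def build_wy(V, tau):
    """Aggregate the reflections H_i = I - tau[i] * v_i v_i^T (v_i = V[:, i])
    into their product Q = H_1 H_2 ... H_nb = I + W Y^T (WY representation)."""
    m, nb = V.shape
    Y = V.copy()
    W = np.empty((m, nb))
    W[:, 0] = -tau[0] * V[:, 0]
    for i in range(1, nb):
        v = V[:, i]
        qv = v + W[:, :i] @ (Y[:, :i].T @ v)   # Q_{i-1} v
        W[:, i] = -tau[i] * qv
    return W, Y

def update_U_block(U_cols, W, Y):
    """Apply Q = I + W Y^T from the right to the affected columns of U,
    using two GEMM-type products (Level 3 BLAS) instead of nb rank-1 updates."""
    return U_cols + (U_cols @ W) @ Y.T

# quick consistency check against applying the reflections one at a time
rng = np.random.default_rng(1)
m, nb = 8, 3
V = np.tril(rng.standard_normal((m, nb)))      # Householder vectors
tau = 2.0 / (V * V).sum(axis=0)                # makes each H_i orthogonal
U_cols = rng.standard_normal((5, m))
W, Y = build_wy(V, tau)
ref = U_cols.copy()
for i in range(nb):
    ref = ref - tau[i] * np.outer(ref @ V[:, i], V[:, i])
print(np.allclose(update_U_block(U_cols, W, Y), ref))   # True
```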
5. EXPERIMENTAL RESULTS
The numerical experiments were performed on single nodes of the IBM SP parallel
computer located at the High-Performance Computing Research Facility, Mathematics
and Computer Science Division, Argonne National Laboratory, and on single
nodes of the Intel Paragon located at the Zentralinstitut für Angewandte Mathematik, Forschungszentrum Jülich GmbH.
All timings are for computations in double precision. The matrices had random
entries chosen from [0; 1]; since none of the algorithms is sensitive to the actual
matrix entries, the following timings can be considered as representative.
Five codes were used in the experiments:
-DSYTRD: LAPACK routine for blocked tridiagonalization of full symmetric matrices
[Dongarra et al. 1989],
Table I. Timings (in seconds) on one node of the IBM SP for the one-step (LAPACK routine DSYTRD) and two-step reduction (routines DSYRDB and DSBRDT) of full symmetric matrices to tridiagonal form, for several matrix orders. The timings do not include the update of U. The intermediate semibandwidth b in the two-step reduction was always the same. Columns: one-step reduction (total time); two-step reduction (full → banded; banded → tridiagonal).
-DSBTRD: LAPACK routine for tridiagonalizing banded matrices using Kaufman's
modification of Schwarz's algorithm [Schwarz 1968; Kaufman 1984] (called SK-
algorithm in the following) to improve vectorization,
-DSYRDB: blocked reduction of full symmetric matrices to banded form (Algo-
rithm 1),
-DSBRDB: blocked reduction of banded matrices to narrower banded form (Algo-
rithm 1), and
-DSBRDT: tridiagonalization of banded matrices (MHL-algorithm with the technique
from Section 4.3 for updating U ).
The latter three codes are described in more detail in [Bischof et al. 1996].
All programs are written in Fortran 77. For the IBM SP node, which for our
purposes can be viewed as a 66 MHz IBM RS/6000 workstation, the codes were
compiled with xlf -O3 -qstrict and linked with -lessl for the vendor-supplied
BLAS. For the Intel Paragon node, the compilation was done with if77 -Mvect
-O4 -nx, and the BLAS were linked in with -lkmath.
The data presented in this section demonstrate that it may be advantageous to
consider multistep reductions instead of conventional direct methods. In particular,
on machines with memory hierarchies, it is not clear a priori which approach is
superior for a given problem.
5.1 Tridiagonalization of Full Matrices
We first compared the one-step tridiagonalization routine DSYTRD from LAPACK
with the two-step reduction (routines DSYRDB and DSBRDT) from Section 3.2. Table I
shows that for large matrices, the two-step reduction can make better use of the
Level 3 BLAS than can the direct tridiagonalization, where one half of the operations is confined to matrix-vector products. The reduction to banded form runs at a rate close to the peak performance of the IBM SP node. Note that the tridiagonalization of the banded matrix cannot be blocked; therefore, the block size n_b affects only the reduction to banded form. The bad performance of this reduction with n_b = 1 is due to the fact that the routine DSYRDB does not provide optimized code for the nonblocked case.
If the transformation matrix U is required, too, then direct tridiagonalization is
always superior because it has a significantly lower flops count.
Figure 9. Speedup of the MHL-algorithm (routine DSBRDT) for tridiagonalizing banded matrices over the SK-algorithm (LAPACK routine DSBTRD), without and with updating the matrix U (left and right pictures, resp.; top: SP1, bottom: Paragon; horizontal axis: matrix size). In the update, n_b^U transformations were aggregated into a block transform.
5.2 One-step Tridiagonalization of Banded Matrices
Next, we compared the SK-algorithm (LAPACK routine DSBTRD) with the MHL-
algorithm DSBRDT. Both are one-step tridiagonalization algorithms for band matri-
ces. In the MHL-algorithm, the update of U was done by using blocked Householder
transformations with the default block size n_b^U. Figure 9 shows the results for
various matrix dimensions and semibandwidths on both machines.
Lang [1993] already noted that the LAPACK implementation is not optimal
except for very small semibandwidths. The reason is that the bulk of computations
must be done in explicit Fortran loops, since no appropriate BLAS routines cover
them. Therefore, the routine runs at Fortran speed, whereas the MHL-algorithm
can rely on the Level 2 BLAS for the reduction and Level 3 BLAS for the update
of U .
On the SP node, the MHL-algorithm runs at up to 45 MFlops in the reduction and
53 MFlops in the update, while DSBTRD only reaches 20 and 30 MFlops, respectively.
(In addition, the SK-algorithm requires 50% more flops when U is updated.) This
situation may be different on some vector machines because DSBTRD features vector
operations with a higher average vector length, albeit with nonunit stride.
Table
II. Timings (in seconds) on one node of the IBM SP for tridiagonalizing banded symmetric
matrices of order 1200. The intermediate semibandwidth in the two-step reduction was
One-step reduction with DSBRDT 8.3 15.0 25.0
Two-step reduction with DSBRDB and DSBRDT 11.3 13.6 22.9
5.3 Two-step Tridiagonalization of Banded Matrices
In Section 3.5.3 we showed that peeling off the diagonals in two equal chunks requires
fewer flops than direct tridiagonalization with the MHL-algorithm. Table II
shows that the time for the two-step approach can be lower, too, if the semiband-
width is large enough.
In contrast to the theoretical results, however, the intermediate semibandwidth b/2 is not always optimal. For example, it took only 20.03 seconds to first reduce the semibandwidth to a value below b/2 and then tridiagonalize that matrix.
The explanation is that in the case ~b < b/2 more of the work is done in the bandwidth reduction, which can rely on blocked Householder transformations, whereas the final tridiagonalization cannot. Therefore, the higher performance of the Level 3 BLAS more than compensates for the slightly higher flops count as compared with ~b = b/2.
For small semibandwidths b, the lower flops count of the two-step scheme was
outweighed by the lower overhead (e.g., fewer calls to the BLAS with larger subma-
trices) of the one-step reduction. As in the reduction of full matrices, the one-step
tridiagonalization is always superior when U is required, too.
5.4 Doubling-stride Tridiagonalization of Banded Matrices
While the MHL-algorithm is clearly superior on machines where the BLAS performance
significantly exceeds that of pure Fortran code, it may not be applicable if
storage is tight. Therefore, we also compared two algorithms that need only one
additional subdiagonal as working space:
-the SK-algorithm (routine DSBTRD from LAPACK), and
-the doubling-stride sequence from Section 3.5.2: multiple calls to DSBRDB (with the doubling band differences d^(i), for the reduction of A and the update of U) and one call to DSBRDT (for the final tridiagonalization step).
The results given in Figure 10 show that the doubling-sequence tridiagonalization,
too, can well outperform the SK-algorithm on both machines.
6. CONCLUSIONS
We introduced a framework for band reduction that generalizes the ideas underlying
the Householder tridiagonalization for full matrices and Rutishauser's algorithm
and the MHL-algorithm for banded matrices. By "peeling off" subdiagonals
in chunks, we arrived at algorithms that require fewer floating-point operations and
less storage. We also provided an intuitive explanation of why our approach, which
eliminates subdiagonals in groups, has a lower computational complexity than that
Figure 10. Speedup of the doubling-sequence tridiagonalization (routines DSBRDB and DSBRDT) over the SK-algorithm (LAPACK routine DSBTRD), without and with updating the matrix U (left and right pictures, resp.; top: SP1, bottom: Paragon; horizontal axis: matrix size).
of the previous algorithms for banded matrices, which eliminated subdiagonals either
one by one or all at once. The successive band reduction (SBR) approach
improves the scope for block operations. In particular, the update of the transformation
matrix U can always be done with blocked Householder transformations.
We also presented results showing that SBR approaches can provide better performance, either by using less memory while achieving almost the same speed, or by achieving higher speed. Our experience suggests that it is hard to provide a "rule of thumb" for selecting the parameters of an optimal band reduction algorithm. While the flops count can be minimized by using the cost function (3), the actual performance of an implementation depends on the machine-dependent issues of floating-point versus memory access cost. In our experience, developing a more realistic performance model for advanced computer architectures is difficult (see, for example, [Bischof and Lacroute 1990]), even for simpler problems. Another paper [Bischof et al. 1996] describes the implementation issues of a public-domain SBR toolbox enabling computational practitioners to experiment with the SBR approach
on problems of interest.
--R
Data Structures and Algorithms.
Parallel tridiagonalization through two-step band reduction
The WY representation for products of Householder matrices.
An adaptive blocking strategy for matrix factorizations.
The SBR toolbox - software for successive band reduction
A framework for band reduction and tridiagonalization of symmetric matrices
Tridiagonalization of a symmetric matrix on a square array of mesh-connected processors
Block reduction of matrices to condensed forms for eigenvalue computations.
Matrix Eigensystem Routines - EISPACK Guide Extension
Matrix Computations (2nd
Singular value decompositions with systolic arrays.
Banded eigenvalue solvers on vector machines.
A parallel algorithm for reducing symmetric banded matrices to tridiagonal form.
Using level 3 BLAS in rotation based algorithms.
A parallelizable eigensolver for real diagonalizable matrices with real eigenvalues.
A new method for the tridiagonalization of the symmetric band matrix.
On Jacobi rotation patterns.
Bidiagonalization and symmetric tridiagonalization by systolic arrays.
Journal of VLSI Signal Processing
A storage-efficient WY representation for products of Householder transformations
Tridiagonalization of a symmetric band matrix.
Matrix Eigensystem Routines - EISPACK Guide (2nd ed.)
--TR
The WY representation for products of householder matrices
Solution of large, dense symmetric generalized eigenvalue problems using secondary storage
A storage-efficient WY representation for products of householder transformations
An adaptive blocking strategy for matrix factorizations
A parallel algorithm for reducing symmetric banded matrices to tridiagonal form
Matrix computations (3rd ed.)
Using Level 3 BLAS in Rotation-Based Algorithms
Banded Eigenvalue Solvers on Vector Machines
Band reduction algorithms revisited
Algorithm 807
Data Structures and Algorithms
Automatically Tuned Linear Algebra Software
--CTR
Daniel Kressner, Block algorithms for reordering standard and generalized Schur forms, ACM Transactions on Mathematical Software (TOMS), v.32 n.4, p.521-532, December 2006
Christian H. Bischof , Bruno Lang , Xiaobai Sun, Algorithm 807: The SBR Toolboxsoftware for successive band reduction, ACM Transactions on Mathematical Software (TOMS), v.26 n.4, p.602-616, Dec. 2000 | blocked Householder transformations;symmetric matrices;tridiagonalization |
365880 | Exact Analysis of Exact Change. | We introduce the k-payment problem: given a total budget of N units, the problem is to represent this budget as a set of coins, so that any k exact payments of total value at most N can be made using k disjoint subsets of the coins. The goal is to minimize the number of coins for any given N and k, while allowing the actual payments to be made on-line, namely without the need to know all payment requests in advance. The problem is motivated by the electronic cash model, where each coin is a long bit sequence, and typical electronic wallets have only limited storage capacity. The k-payment problem has additional applications in other resource-sharing scenarios.Our results include a complete characterization of the k-payment problem as follows. First, we prove a necessary and sufficient condition for a given set of coins to solve the problem. Using this characterization, we prove that the number of coins in any solution to the k-payment problem is at least k HN/k, where Hn denotes the nth element in the harmonic series. This condition can also be used to efficiently determine k (the maximal number of exact payments) which a given set of coins allows in the worst case. Secondly, we give an algorithm which produces, for any N and k, a solution with a minimal number of coins. In the case that all denominations are available, the algorithm finds a coin allocation with at most (k+1)HN/(k+1) coins. (Both upper and lower bounds are the best possible.) Finally, we show how to generalize the algorithm to the case where some of the denominations are not available. | Introduction
Consider the following everyday scenario. You want to withdraw N units of money from your
bank. The teller asks you "how would you like to have it?" Let us assume that you need to
have "exact change," i.e., given any payment request P - N , you should be able to choose a
subset of your "coins" whose sum is precisely P . Let us further assume that you would like
to withdraw your N units with the least possible number of coins. In this case, your answer
depends on your estimate of how many payments you are going to make. In the worst case, you
may be making N payments of 1 unit each, forcing you to take N coins of denomination 1. On
the other extreme, you may need to make only a single payment P. In this case, even if you
don't know P in advance, roughly log₂ N coins are sometimes sufficient (as we explain later). In
this article, we provide a complete analysis of the general question, which we call the k-payment
problem: what is the smallest set of coins which enables one to satisfy any k exact payment
requests of total value up to N .
Motivation. In any payment system, be it physical or electronic, some transactions require
payments of exact amounts. Forcing shops to provide change to a customer, if she does not
possess the exact change, simply shifts the problem from the customers to the shops. The
number of coins is particularly important in electronic cash (see, e.g., [2, 3]), because electronic
coins are inherently long bit sequences and their handling is computationally intensive, while the
typical "smart-card" used to store them has small memory space and computational power [7].
Another interesting application of the problem arises in the context of resource sharing. For
concreteness, consider a communication link whose total bandwidth is N . When the link is
shared by time-multiplexing, there is a fixed schedule which assigns the time-slots (say, cells in
ATM lines) to the different connections. Typical schedules have small time slots, since assigning
big slots to small requests entails under-utilization. It is important to note, however, that there
is an inherent fixed overhead associated with each time slot (e.g., the header of an ATM cell).
It would be therefore desirable to have a multiplexing schedule (with time slots of various sizes)
which can accommodate any set of requests with the least number of slots. Similarly to the
withdrawal scenario, an improvement upon the trivial N unit-slot solution can be achieved, if
we know how many connections might be running in parallel. When we know a bound k on
this number, and if we restrict ourselves to long-lived connections, the problem of designing a
schedule naturally reduces to the k-payment problem.
The k-payment problem: Definition. Formally, the problem is as follows. There are two
parameters, the budget, denoted N , and the number of payments, denoted k. The problem is
to find, for each i - 1, the number of i-coins (also called coins of denomination i), denoted c i ,
such that the following two requirements are satisfied.
Budget compliance: Σ_{i≥1} i·c_i = N.
k-partition: For any sequence of k payment requests, denoted P_1, ..., P_k, with P_1 + ··· + P_k ≤
N, there exists a way to exactly satisfy these payments using the coins. That is, there exist
non-negative integers a_{ij} (where a_{ij} represents the number of i-coins used in the j-th payment),
such that Σ_i i·a_{ij} = P_j for every j, and Σ_j a_{ij} ≤ c_i for every i.
The problem can thus be broken into two parts as follows. The coin allocation problem is to to
partition N into coins given N and k, i.e., determining the c i 's. The coin dispensing problem
is how, given the c i 's, to actually make a payment.
There are a few possible variants of the k-payment problem. First, in many systems, not all
denominations are available (for example, we do not know of any system with 3-coins). Even
in the electronic cash realm, denominations may be an expensive resource. (This is because
each denomination requires a distinct pair of secret/public keys of the central authority; see,
e.g., [1, 8].) Thus an interesting variant of the allocation problem is the restricted denominations
version, where the set of possible solutions is restricted to only those in which c_i = 0 for every i ∉ D,
for a given allowed denomination set D.
Also, one may consider the on-line coin dispensing problem, where the algorithm is required
to dispense coins after each payment request, without knowledge of future requests, or the
off-line version, where the value of all k payments is assumed to be known before the first coin
is dispensed.
Results. It turns out that the coin dispensing problem is easy: the greedy strategy works,
even in the on-line setting. Most of our results concern the coin allocation problem, and we
sometimes refer to this part of the problem as the k-payment problem. Our basic result is a
simple necessary and sufficient condition for a sequence c_1, c_2, ... of coins to solve the allocation
problem (Theorem 3.1). Using this characterization, we prove (in Theorem 3.6) a lower bound
of k·H_{⌊N/k⌋} ≈ k·ln(N/k) on the number of coins in any solution for all N and k, where H_n denotes
the nth element of the harmonic series. The lower bound is the best possible in the sense that it
is met with equality for infinitely many N 's and k's (Theorem 3.8). The characterization can be
used to efficiently determine the maximal number k for which a given collection of coins solves
the k payment problem (Corollary 3.9). Our next major result is an efficient algorithm which
finds a solution for any N and k using the least possible number of coins. We first deal with
the case where all denominations are allowed (Theorem 4.1). In this case the number of coins
is never more than (k + 1)·H_{⌈N/(k+1)⌉} (Theorem 4.7). Similarly to our lower bound, the upper
bound is the best possible in general (Theorem 4.8). Finally, using the same ideas in a slightly
more refined way, we extend the algorithm to the general case of restricted denomination set
(Theorem 5.1).
Related work. To the best of our knowledge, the current work is the first to formulate the
general k-payment problem, and hence the first to analyze it. One related classic combinatorial
problem is k-partition, where the question is in how many ways can a natural number N be
represented as a sum of k positive integers. This problem is less structured than the k-payment
problem, and can be used to derive lower bounds (however, these bounds are suboptimal: see
Section 2). The postage-stamp problem is also closely related: cast in our terms, the postage-
stamp problem is to find a set of denominations which will allow one to pay any request of value
1, 2, ..., N using at most h coins, so that N is maximized. The postage-stamp problem can
be viewed as an inverse of the 1-payment problem: there is one payment to make, the number of
coins is given, and the goal is to find a denomination set of a given size which will maximize the
budget. We remark that the postage-stamp problem is considered a difficult problem even for
a very small number of denominations. See, e.g., [10, 11, 4]. Another related question is change
making [6], which is the problem of how to represent a given budget with the least number
of coins from a given allowed denomination set. General change-making is (weakly) NP-hard.
Kozen and Zaks [5], and Verma and Xu [14], study the question of which denominations sets
allow one to use the greedy strategy for optimal change making.
Organization. In Section 2 we introduce notation, give some preliminary observations and
briefly discuss a few suboptimal results. In Section 3 we prove a characterization of the k-
payment problem and a lower bound on the number of coins in any solution. In Section 4 we
present and analyze an optimal algorithm for the unrestricted denomination case. In Section 5
we extend the algorithm for the case of restricted denominations. Finally, in Section 6 we give
a short overview of the applications of our results for electronic cash.
2 Notation and simple results
In this section we develop some intuition for the k-payment problem by presenting a few simple
upper and lower bounds on the number of coins required. The notation we shall use throughout
this article is summarized in Figure 1. The solution S to which the c_i and the related quantities refer
should be clear from the context.
For the remainder of this article, fix N and k to be arbitrary given positive integers. Note
that we may assume without loss of generality that k - N , since payment requests of value 0
can be ignored.
Let us now do some rough analysis of the k-payment problem. As already mentioned above,
the case k = N is trivial to solve: take c_1 = N; no better solution
is possible since c_1 < k would not satisfy k payments of value 1 each. The case of k = 1 is
quite simple, at least when N = 2^{j+1} − 1 for an integer j ≥ 1: we can solve it with j + 1
coins of denominations 1, 2, 4, ..., 2^j. Given any request P, we can satisfy it by using the
coins which correspond to the ones in the binary representation of P.
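As a small illustration (ours, not part of the paper), the following Python sketch builds the binary coin set for k = 1 and pays a request according to its binary representation; the function names are hypothetical.

def binary_coins(j):
    # denominations 1, 2, 4, ..., 2**j; their sum is N = 2**(j+1) - 1
    return [2 ** b for b in range(j + 1)]

def pay_binary(P, j):
    # pick exactly the coins matching the 1-bits of P (assumes 0 <= P <= 2**(j+1) - 1)
    used = [2 ** b for b in range(j + 1) if (P >> b) & 1]
    assert sum(used) == P
    return used

print(pay_binary(11, 3))   # N = 15; the request 11 = 1011 in binary is paid with [1, 2, 8]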
However, it is not immediately clear how one can generalize this solution to arbitrary N and
k. Consider an arbitrary N: the usual technique of "rounding up" to the next power of 2 does
Parameters of problem specification:
• N: the total budget.
• k: the number of payments.
• D: the set of allowed denominations. In the unrestricted case, D = {1, 2, 3, ...}.
Quantities related to solution specification S:
• c_i: the number of coins of denomination i (a.k.a. i-coins) in S. By convention, c_i = 0 for i ≤ 0.
• T_i: the budget allocated in S using coins of denomination i or less. Formally, T_i = Σ_{j≤i} j·c_j.
• m: the largest denomination of a coin in S. Formally, m = max{i : c_i > 0}. (Note that T_m = N
always.)
Quantities related to making a payment:
• c′_i, T′_i, m′: refer to the respective quantities after a payment has been made.
Standard quantities:
• H_n = Σ_{i=1}^{n} 1/i. By convention, H_0 = 0.
Figure 1: Glossary of notation.
not seem appropriate in the k-payment problem: can we ask the teller of the bank to round up
the amount we withdraw just because it is more convenient for us? But let us ignore this point
for the moment, and consider the problem of general k. If we were allowed to make the dubious
assumption that we may enlarge N , then one simple solution would be to duplicate the solution
for 1-payment k times, and let each payment use its dedicated set of coins. Specifically, this
means that we allocate k 1-coins, k 2-coins, k 4-coins and so on, up to k 2^{⌈log₂ N⌉−1}-coins.
The result is approximately k·log₂ N coins, and the guarantee we have based on this simplistic
construction is that we can pay k payments, but for all we know each of these payments must
be of value at most N . However, the total budget allocated in this solution is in fact kN , and
thus it does not seem to solve the k-payment problem as stated, where the only limit on the
value of payments is placed on their sum, rather than on individual values.
As an aside, we remark that one corollary of our work (specifically, Theorem 3.1) is that
if coin dispensing is done using the greedy strategy (see below), then the above "binary"
construction for coin allocation indeed solves the general k-payment problem. More precisely,
1 while P > 0 do
2   i ← the highest possible denomination (the largest i with c_i > 0 and i ≤ P)
3   j ← min(c_i, ⌊P/i⌋), i.e., use as many i-coins as possible
4   dispense j i-coins; c_i ← c_i − j; P ← P − j·i
Figure 2: The greedy algorithm for coin dispensing. P is the amount to be paid.
assume that N/k + 1 is a power of 2. Then the coin allocation algorithm allocates k coins of
each denomination 1, 2, 4, ..., (N/k + 1)/2. Clearly, the number of coins is k·log₂(N/k + 1)
and their sum is N. Coin dispensing is made greedily: at each point, the largest possible coin
is used. (A description of the greedy dispensing algorithm is presented in Figure 2.)
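The greedy dispensing rule of Figure 2 can be sketched as follows in Python (our rendering, with hypothetical names); it assumes the request is actually payable from the remaining coins, which Theorem 3.1 below guarantees for a valid allocation.

def greedy_dispense(coins, P):
    # coins: dict mapping denomination -> count; dispenses a payment of value P
    dispensed = {}
    while P > 0:
        i = max(d for d, cnt in coins.items() if cnt > 0 and d <= P)   # highest possible denomination
        j = min(coins[i], P // i)                                      # use as many i-coins as possible
        coins[i] -= j                                                  # dispense j i-coins
        dispensed[i] = dispensed.get(i, 0) + j
        P -= j * i
    return dispensed

print(greedy_dispense({1: 2, 2: 2, 4: 2}, 9))   # {4: 2, 1: 1}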
However, one should note that, perhaps surprisingly, our results also indicate that the binary
algorithm is not the right generalization for k > 1: the best algorithm (described in Section 4)
yields a factor of about ln 2 improvement, i.e., roughly 30% fewer coins.
Let us now re-consider the question of a general N. Once we have an algorithm for infinitely
many values of k and N, generalizing to arbitrary N and k is easy: find a solution for some N′ ≥ N and
k + 1 payments, and dispense a payment of value N′ − N. The remaining set
of coins is a coin allocation of total budget N, and it can be used for k additional payments
since the original set solved the (k + 1)-payment problem.
We close this section with a simple lower bound on the number of coins required in any
solution to the k-payment problem. The bound is based on a counting argument, and we only
sketch it here. For k = 1, the number of distinct possible payment requests is N + 1 (if an empty request is
allowed). Observe that the algorithm must dispense a different set of coins in response to each
request. It follows that the number of coins in any solution must be at least log₂(N + 1).
This argument can be extended to a general k, using the observation that the number of
distinct responses of the algorithm (disregarding order), is at least p k (N ), the number of ways
to represent N as a sum of k positive integers. Using standard bounds for partitions (see, e.g.,
[13]), and since the number of responses is exponential in the number of coins, one can conclude
that the number of coins is Ω(k·log(N/k²)).
3 Problem characterization
In this section we first prove a simple condition to be necessary and sufficient for a set of coins
to correctly solve the k-payment problem. Using this result, we obtain a sharp lower bound on
the number of coins in any solution to the k-payment problem. Finally, we outline an efficient
algorithm which, given a set of coins S, determines the maximal k for which S is a solution of
the k-payment problem. Please refer to Figure 1 for notation.
3.1 A Necessary and sufficient condition
Theorem 3.1 S solves the k-payment problem if and only if T_i ≥ ki for all 1 ≤ i ≤ m.
The theorem is proven by a series of lemmas below. The necessity proof is not hard: the intuition
is that the "hardest" cases are when all payment requests are equal. The more interesting part
is the sufficiency proof. We start by upper bounding m, the largest denomination in a solution.
Lemma 3.2 If S solves the k-payment problem, then m ≤ ⌈N/k⌉.
Proof: By contradiction. Suppose m > ⌈N/k⌉, and consider k payments of values ⌊N/k⌋ and
⌈N/k⌉ such that their total sum is N. Clearly, none of these payments can use m-coins, and
since c_m ≥ 1 by definition, the total budget available for these payments is at most N − m, a
contradiction.
The following lemma is slightly stronger than the condition in Theorem 3.1. We use this
version in the proof of Theorem 3.6.
Lemma 3.3 If S solves the k-payment problem, then T_i ≥ ki for all 1 ≤ i ≤ ⌊N/k⌋.
Proof: Let i ≤ ⌊N/k⌋. Then ki ≤ N. Consider k payments of value i each; such a
payment can be done only with coins of denomination at most i, hence T_i ≥ ki.
We now turn to sufficiency. We start by showing that if the condition of Theorem 3.1 holds
single payment of value up to N can be satisfied by S under the greedy
algorithm.
Lemma 3.4 Let P be a payment request, and suppose that in S we have that for some j,
1. T_j ≥ P;
2. T_i ≥ i for all 1 ≤ i ≤ j.
Then P can be satisfied by the greedy algorithm using only coins of denomination j or less.
Proof: We prove, by induction on j, that the claim holds for j and any P . The base case is
and hence there are at least P 1-coins, which can be used to pay
any amount up to their total sum. For the inductive step, assume that the claim holds for j
and all P , and consider j 1. Let a be the number of (j 1)-coins dispensed by the greedy
algorithm, namely,
Let R denote the remainder of the payment after the algorithm dispenses the (j
1). Note that (2) trivially holds after dispensing the (j + 1)-coins; we need to
show that (1) holds as well. We consider two cases. If a = c j+1 , then using (1) we get
and we are done for this case.
If a ! c j+1 , then since the algorithm is greedy, it must be the case that R 1. On the
other hand, by (2) we have that T j - j, and hence T j - R and we are done in this case too.
The following lemma is the key invariant preserved by the greedy algorithm. It is interesting
to note that while the algorithm proceeds from larger coins to smaller ones, the inductive proof
goes in the opposite direction. Recall that "primed" quantities refer to the value after a payment
is done.
Lemma 3.5 If T i - ki holds for all after the greedy algorithm dispenses any
amount up to Tm , T 0
holds for all
Proof: By induction on i. For the claim is trivial. Assume that the claim holds for all
is the amount dispensed using coins of denomination
at most i).
namely j is the smallest remaining denomination which
is larger than i. Note that j is well defined since namely i is not the largest remaining
coin. Next, note that since all the coins of denomination (whose sum is
used by the algorithm, we have that
Now, observe that since the algorithm is greedy, and since at least one j-coin was not used by
the algorithm, it must be the case that the total amount dispensed using coins of denomination
smaller than j is less than j, i.e., Using Eq. (1), we get that T i -
Finally, using the assumption applied to
We now complete the proof of the characterization.
Proof (of Theorem 3.1): The necessity of the condition follows directly from Lemmas 3.2
and 3.3. For the sufficiency, assume that T i - ik for all m, and consider a sequence
of up to k requests of total value at most N . After the l-th request is served by the greedy
algorithm, we have, by inductive application of Lemma 3.5, that T 0
Moreover, by Lemma 3.4, any amount up to the total remainder can be paid from S, so long
as which completes the proof.
3.2 A lower bound on the number of coins
Using Theorem 3.1, we derive a lower bound on the number of coins in any solution to the
k-payment problem.
Theorem 3.6 The number of coins in any solution to the k-payment problem is at least
We first prove a little lemma we use again in Theorem 4.7.
Lemma 3.7 For any number j,
Proof:
Proof (of Theorem 3.6): By Lemma 3.7 and Lemma 3.3:
bN=kc
ki
The lower bound of Theorem 3.6 is the best possible in general, as shown in the following
theorem.
Theorem 3.8 For any natural number k ≥ 1, there are infinitely many N > k such
that there exists a solution for the k-payment problem with budget N with exactly k·H_{⌊N/k⌋} coins.
Proof: Choose a natural number m such that m! ?
km. In this case the solution with solves the k-payment problem
by Theorem 3.1, and its total number of coins is precisely k·H_{⌊N/k⌋}.
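For concreteness, the lower bound of Theorem 3.6 can be evaluated with a few lines of Python (ours; exact arithmetic via fractions is only a convenience):

from fractions import Fraction

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n, with H_0 = 0
    return sum(Fraction(1, i) for i in range(1, n + 1))

def coin_lower_bound(N, k):
    # k * H_{floor(N/k)}, the bound of Theorem 3.6
    return k * harmonic(N // k)

print(float(coin_lower_bound(12, 2)))   # 2 * H_6 = 4.9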
3.3 Determining k for a given set
Consider the "inverse problem:" we are given a set of coins, with c i coins of denomination i
for each i, and the question is how many payments can we make using these coins, i.e., find k.
Note that k is well defined: we say that k is 0 if there is a payment request which cannot be
satisfied by the set, and k is never more than the total budget. Theorem 3.1 can be directly
applied to answer such a question efficiently, as implied by the following simple corollary.
Corollary 3.9 Let S be a given coin allocation. Then S solves the k-payment problem if and
only if k ≤ min{⌊T_i/i⌋ : 1 ≤ i ≤ m}.
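Corollary 3.9 translates directly into a procedure for this inverse problem; the following Python sketch (ours) returns the largest k for which a given coin multiset solves the k-payment problem.

def max_payments(c):
    # c: dict mapping denomination -> count, with at least one coin
    m = max(d for d, cnt in c.items() if cnt > 0)     # largest denomination
    best, T = None, 0
    for i in range(1, m + 1):
        T += i * c.get(i, 0)                          # T_i = sum of j*c_j over j <= i
        q = T // i
        best = q if best is None else min(best, q)
    return best

print(max_payments({1: 1, 2: 1, 4: 1, 8: 1}))   # 1: the binary set for N = 15
print(max_payments({1: 2, 2: 2, 4: 2}))         # 2: two copies of the binary set for N = 7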
4 An optimal algorithm for unrestricted denominations
We now present a coin allocation algorithm which finds a minimal solution to the k-payment for
arbitrary N and k, assuming that all denominations are available. We first prove the optimality
of the algorithm, and then give an upper bound on the number of coins it allocates. In Section
5 we generalize the algorithm to handle a restricted set of denominations.
The algorithm. Given arbitrary integers N ≥ k > 0, the algorithm presented in Figure
3 finds an optimal coin allocation. Intuitively, the algorithm works by scanning all possible
denominations in increasing order, allocating, for each i, the least number of i-coins which suffices
to make T_i ≥ ki. The number of coins c_i is thus an approximation of the i-th harmonic element
multiplied by k. When the budget is exhausted, the remainder is added simply as a single
additional coin.
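The following Python sketch (ours) is a rough reconstruction of this idea — top up T_i to k·i for successive denominations without overflowing the budget, then add the remainder as one coin; the (partly garbled) Figure 3 should be treated as the authoritative formulation.

def allocate(N, k):
    # assumes N >= k > 0; returns a dict mapping denomination -> count
    c, t, i = {}, 0, 0                         # t = budget allocated so far
    while t < N:
        i += 1
        if t >= k * i:
            continue                           # T_i is already large enough
        need = -(-(k * i - t) // i)            # ceil((k*i - t)/i) i-coins wanted
        afford = (N - t) // i                  # but do not overflow the budget
        take = min(need, afford)
        if take:
            c[i] = take
            t += i * take
        if take < need:                        # budget exhausted before reaching k*i
            break
    if N > t:
        c[N - t] = c.get(N - t, 0) + 1         # add the remainder as a single coin
    return c

print(allocate(12, 2))   # {1: 3, 2: 1, 3: 1, 4: 1} -- six coins of total value 12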
4.1 Optimality of Allocate
Theorem 4.1 Let S denote the solution produced by Allocate. Then S solves the k-payment
problem using the least possible number of coins.
Allocate(N;
4 while denominations in order
add i-coins but don't overflow
9 if N ? t add remainder
Figure 3: Algorithm for optimal solution of the k-payment problem with unrestricted denominations.
Proof: Follows from Lemma 4.3 and Lemma 4.5 below.
The important properties of the algorithm are stated in the following loop invariant.
Lemma 4.2 Whenever Allocate executes line 4, the following assertions hold.
(ii) if
Proof: Line 4 can be reached after executing lines 1-3 or after executing the loop of lines 5-8.
In the former case, we have and the lemma holds trivially.
Suppose now that line 4 is reached after an iteration of the loop. We denote by t 0 the value
of the variable t before the last execution of the loop. If lines 7 and 8 are not executed, then (i)
holds trivially. If they are executed, then we have that
and hence (i) holds after the loop is executed. Also note that t -
l ki\Gammat 0
1), and therefore (iii) holds true after the execution of the loop.
Finally, we prove that (ii) holds after executing the loop. If lines 7-8 are not executed, (ii)
holds trivially. Suppose that lines 7-8 are executed, and that i t. In this case, by line
which means that
Therefore, by line 7, c
l ki\Gammat
, and hence
l ki\Gammat 0
and we are
done.
Using Lemma 4.2 and Theorem 3.1, correctness is easily proven. We introduce some notation
to facilitate separate handling of the remainder:
i , for all i, is the value of the c i variable when line 9 is executed.
.
Lemma 4.3 S solves the k-payment problem.
Proof: By (i) of Lemma 4.2, in conjunction with lines 9-10 of the code, we have that upon
completion of the algorithm,
By (ii) of Lemma 4.2, and since (by lines 4 and 10)
the largest denomination is m -
m , we have that T i - T
denominations
therefore, by Theorem 3.1, S solves the k-payment problem.
Proving optimality takes more work. We first deal with the the coins allocated before line 9
is reached, and then show that even if the remainder was non-zero (and an additinal coin was
allocated in line 10), the solution S produced by Allocate is still optimal. To this end, we fix
an arbitrary solution T for the k-payment problem, and use the following additional notation:
ffl d i is the number of i-coins in T .
ffl U i is the budget allocated in T using coins of denomination i or less: U
ffl n is the largest denomination of a coin in T : Formally,
Note that by the definitions, the following holds for all i - 0:
c
The lemma shows that Allocate is optimal-ignoring the remainder.
Lemma 4.4
c
Proof: By contradiction. Suppose
that l is the
smallest such index, i.e., using Eq. (2), l. Hence, by Eq. (3),
we have that d l ! c
which implies, by integrality, that d l - c
since T
by (iii) of Lemma 4.2, we have
ld l
ic
i.e., U l ! kl, contradiction to Theorem 3.1, since l ! n.
We now prove that S is optimal even when considering the remainder.
Lemma 4.5
d i .
Proof: We consider two cases. If n ? m then using Lemma 4.4, and the fact that d n - 1 by
definition of n, we get
c
and we are done for this case.
So suppose n - m. Let
We prove the lemma by showing that
First, note that since by Lemma 4.4 we have using Eq. (3),
we get
On the other hand, since U
Therefore, it follows from Eq. (2) that
We now consider two subcases. If T
reduces to
nb
and we are done for this subcase.
Otherwise,
and the proof of Lemma 4.5 is complete.
4.2 The number of coins allocated by Allocate
Theorem 4.1 proved that the number of coins in the solution produced by Allocate is optimal.
We now give a tight bound on that number in terms of N and k. There is a nice interpretation of
the bound: it says that the worst penalty for having N and k which are not "nice" is equivalent
to requiring an extra payment (i.e., solving the (k + 1)-payment problem) for "nice" N and k.
We remark that we do not know of any direct reduction which proves this result.
First, we prove an upper bound on m, which is slightly sharper than the general bound of
Lemma 3.2.
Lemma 4.6 The largest denomination of a coin generated by Allocate satisfies m ≤ ⌈N/(k + 1)⌉.
Proof: By (iii) of Lemma 4.2 we have T
1)i, and in particular
By (ii) of Lemma 4.2 we have that T
have that T
m. Using also Eq. (5), we get that
1, and by integrality, m -
l N
Theorem 4.7 The number of coins in the solution produced by Allocate is at most (k + 1)·H_{⌈N/(k+1)⌉}.
Proof: By (iii) of Lemma 4.2, and by integrality, T
in conjunction with Lemmas 3.7 and 4.6, implies that
c
and therefore,
The upper bound given by Theorem 4.7 is the best possible in general, as proven in the
following theorem.
Theorem 4.8 For any natural number k ≥ 1, there are infinitely many N > k such
that any solution for the k-payment problem for budget N requires at least (k + 1)·H_{⌈N/(k+1)⌉}
coins.
Proof: Choose a natural number m such that
and divisible by 2; 3;
these N and k, we have that the algorithm produces largest denomination
and c It follows that the number of coins in
this case is precisely
5 Generalization to a restricted set of denominations
We now turn our attention to the restricted denomination case. In this setting, we are given a
set D of natural numbers, and the requirement is that in any solution, c_i = 0 for every i ∉
D.
To get an optimal algorithm for an arbitrary set of denominations which allows for a
solution, 1 we follow the idea of algorithm Allocate: ensure, with the least number of coins,
that T_i ≥ ki for all 1 ≤ i ≤ m. The treatment of the remainder is more complicated here, but
has the same motivation: add the remainder with the least possible number of coins. We also
have to make sure that the invariant maintained by the main loop is not broken, so we restrict
the remainder allocation to use only denominations which were already considered. In fact,
remainder allocation is precisely the change-making problem, solvable by dynamic program-
ming. (For some allowed denomination sets [5, 14], the greedy strategy works too.) The main
algorithm Allocate-Generalized is given in Figure 4. For completeness, we also include,
in
Figure
5, a description of a dynamic programming algorithm for optimal change-making. In
Allocate-Generalized, and in the analysis, we use the following additional notation.
ig, the largest allowed denomination smaller than i.
ig, the smallest denomination larger than i.
We now prove that Allocate-Generalized is correct and that the number of coins
it allocates is optimal. We remark that the general arguments are similar to those for the
unrestricted case, but are somewhat more refined here.
Theorem 5.1 Let S denote the solution produced by Allocate-Generalized. Then S
solves the k-payment problem with allowed denominations D using the least possible number
of coins.
Proof: Follows from Lemmas 5.3 and 5.8 below.
We again use a loop invariant to capture the important properties of the algorithm.
Lemma 5.2 Whenever Allocate-Generalized executes line 4, the following assertions
hold.
(ii) if
Proof: Line 4 can be reached after executing lines 1-3 or after executing the loop of lines
5-10. In the former case and the lemma holds trivially.
1 Observe that the problem is solvable if and only if 1 2 D: if there are no 1-coins then certainly we cannot
satisfy any payment request of value 1; and if 1 is allowed, then the trivial solution of N 1-coins works for all k.
Allocate-Generalized(N; k; D)
4 while
is the largest den. smaller than the next one in D
l up to j
8 then if
l kj \Gammat
check if budget is exhausted
9 then c i /
l kj \Gammat
if not, add coins
else goto 12
13 then D 0 / fd j d 2 D; d - ig add remainder without greater denominations
Figure 4: Algorithm for optimal solution of the k-payment problem with allowed denominations D.
In the latter case, let t 0 denote the value of the variable t before the last execution of the
loop. If lines 9 and 10 are not executed, then (i) holds trivially. If they are executed, then
holds after the loop is executed. Then,
whether or not lines 9 and 10 are executed t -
l kj \Gammat 0
thus (iii) is true after the execution of the loop.
Finally, we show (ii) holds after executing the loop. If line 8 is not executed, or if line 11
is executed then (ii) holds trivially. Otherwise lines 9-10 are executed, and c
l kj \Gammat
, hence
l kj \Gammat 0
The correctness of Allocate-Generalized is proven in the next lemma. Following the
conventions of Section 4, we denote by S the allocation produced by Allocate-Generalized
before line 12. We denote by c
i the values in S corresponding to c i and T i in S,
respectively.
Lemma 5.3 S solves the k-payment problem with allowed denominations D.
Proof: By (i) of Lemma 5.2, in conjunction with lines 12-14 of the code, we have that upon
completion of the algorithm,
be the highest denomination allocated by the
algorithm. Then, by (ii) of Lemma 5.2, and the fact that Make-Change does not use coins
with higher denominations (line 13), we have that for all 1
Therefore, by Theorem 3.1, S solves the k-payment problem.
Make-Change(R;D)
do if i 2 D
6 do for i / R downto 1
7 do if M
8 then for all
9 do if M [i \Gamma j]
return M [R]
Figure 5: Dynamic programming algorithm for finding the least number of coins whose sum is R units using denomination set D. M is an array of multisets.
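For reference, here is a compact change-making routine in Python (ours); it keeps coin counts and back-pointers rather than the explicit multisets M[·] of Figure 5, but solves the same problem.

def make_change(R, D):
    INF = float("inf")
    best = [0] + [INF] * R        # best[r] = least number of coins of denominations in D summing to r
    last = [0] * (R + 1)          # last[r] = one denomination used in an optimal solution for r
    for r in range(1, R + 1):
        for d in D:
            if d <= r and best[r - d] + 1 < best[r]:
                best[r], last[r] = best[r - d] + 1, d
    if best[R] == INF:
        return None               # R is not representable with denominations D
    coins, r = [], R
    while r > 0:
        coins.append(last[r])
        r -= last[r]
    return coins

print(make_change(13, [1, 4, 6]))   # [1, 6, 6]: three coins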
We now turn to prove optimality of the algorithm. The analysis proceeds similarly to that
of Section 4: We first show that the allocation of the coins up to the remainder (i.e., just
before line 12 is executed), is optimal. We then prove that the handling of the remainder is
optimal as well. The proof here is complicated by the fact that in order to ensure correctness,
Allocate-Generalized allocates the remainder by using only denominations which were
already considered in the main loop, and hence it is not immediately clear that the remainder
is allocated optimally.
For the remainder of this section, fix an arbitrary "competitor" solution T . We define for T
notions analogous to those defined for the solution S produced by Allocate-Generalized.
ffl d i is the number of i-coins in T .
ffl U i is the budget allocated in T using coins of denomination i or less: U
ffl n is the largest denomination of a coin in T : Formally,
To facilitate treatment of the remainder, we fix an arbitrary subsolution T of T which
solves the k-payment problem for budget T
. (This is possible since a solution for budget N is
also a solution for any budget smaller than N .) We use the following additional definitions.
jd
. (Analogous to T
i in S).
is the largest denomination in T .
is the largest denomination allocated up to line 12 by Allocate-Generalized.
Hence, T
Finally, we define
As before, by the definitions,
for all i - 0 we have
d
c
d
We start by proving that Allocate-Generalized produces an optimal solution-ignoring
the remainder.
Lemma 5.4 For all 1
d
c
Proof: Suppose
that l is the smallest such index, i.e., using
l. Hence, by Eq. (7), we have that d
which implies, by integrality, that d
l - c
l by (iii) of
Lemma 5.2. Therefore, since l
U
ld
l
id
ic
or U
a contradiction to Theorem 3.1, since l
Corollary 5.5 n - m .
Proof: Suppose not. Then, by Lemma 5.4, U
which
is a contradiction to the fact that U
Next, we prove that no solution can use a denomination larger than the largest one used by
S.
Lemma 5.6 n - m.
Proof: We consider two cases. Suppose first that line 12 is reached after testing the condition
line 4. Then m hence the remainder allocated by the dynamic algorithm is
m. Note that n - max(n
and therefore, n - max(m and we are done for this case.
Suppose next that line 12 is reached from line 11. In this case we have
and since T
and hence
or
Now suppose for contradiction that n ? m. Then at least one coin of denomination
allocated by T , hence N - we get from
Eq. (8) that a contradiction to Theorem 3.1 since
It follows that the remainder is allocated optimally (for any choice of a subsolution T ).
Corollary 5.7
d
c
Proof: By Lemma 5.6, the amount of
m must be allocated in T using only denominations
used in S. The statement therefore follows from the optimality of dynamic programming used
in Make-Change.
We can now prove optimality of the full solution S.
Lemma 5.8
d i .
Proof: By Corollary 5.7, it is sufficient to prove that the number of coins in T is at least as
much as in S . If n - m , then we are done by Lemma 5.4. It remains to consider the case of
. We do it similarly to Lemma 4.5: define
. With this notation, it is
sufficient to show that
By Lemma 5.4 we have using Eq. (7) we get
U
i(d
Also, since U
U
c
ic
Hence,
b. Thus, from Eq. (6) we get
(d
as required.
6 Exact payments in electronic cash
In this section we discuss the issue of exact payments in the model of electronic cash (e-cash).
Briefly, in this model we have a bank, a set of users and a set of shops. There are withdrawal
and deposit protocols involving the bank and a user or shop, and a payment protocol, involving
a shop and a user. It is required that the users are anonymous: given a coin, it should be
impossible to infer which user withdrew it, even if the shops and bank collaborate. On the
other hand, the identity of the user should be revealed if the user pays with the same coin
more than once. The model has to be implemented with "electronic wallets" called smart cards
which have a severely limited capacity for storage and computation [7].
There are a few ways to implement exact payments in e-cash. The simplest way is to use
multiple coins. In this case, the results presented in this paper explain what coins should be
withdrawn. Another approach is divisible coins [9, 8, 12]: arbitrary portions of the coin can be
spent so long as their sum does not exceed the value of the coin. As expected, the size of a
divisible coin and the computational resources (such as time and communication) required for
its manipulation are greater than those of non-divisible coins.
An intermediate approach is presented in [12]: a composite coin is divisible only to a set
of prescribed set of subcoins. Subcoins can be used for payment and deposit, but cannot be
divided further. Although the size of the composite coin grows linearly with the number of its
constituent subcoins (for C subcoins, a typical size of the composite coin is 500
the size is still smaller than the size of arbitrarily divisible coins for a sufficiently small number
of subcoins. The results of this paper are particularly useful for the composite coin approach:
if one can get a reasonable bound k on the number of exact payments, then composite coins
are better than arbitrarily divisible coins in all respects for certain ranges of N and k. With
current technology, the technique of [12] is better when k
Acknowledgments
We thank Mike Saks, Richard Stanley and Yishay Mansour for helpful discussions, Eric Bach
for bringing the postage stamp problem to our attention, Rakesh Verma for providing us with
a copy of [14], and Agnes Chan for valuable suggestions and comments.
--R
Untraceable off-line cash in wallets with observers
Blind signatures for untraceable payments.
Untraceable electronic cash.
Construction of distributed loop networks.
Optimal bounds for the change-making problem
Knapsack Problems: Algorithms and Computer Implementations.
Cryptographic smart cards.
An efficient divisible electronic cash scheme.
Universal electronic cash.
On the postage stamp problem with 3 denominations.
Associate bases in the postage stamp problem.
Efficient Electronic Notions and Techniques.
A Course in Combinatorics.
On optimal greedy change making.
--TR | coin allocation;change-making;exact change;electronic cash;k-payment problem |
365883 | Graph Searching and Interval Completion. | In the early studies on graph searching a graph was considered as a system of tunnels in which a fast and clever fugitive is hidden. The "classical" search problem is to find a search plan using the minimal number of searchers. In this paper, we consider a new criterion of optimization, namely, the search cost. First, we prove monotone properties of searching with the smallest cost. Then, making use of monotone properties, we prove that for any graph G the search cost of G is equal to the smallest number of edges of all interval supergraphs of G. Finally, we show how to compute the search cost of a cograph and the corresponding search strategy in linear time. | Introduction
Search problems on graphs attract the attention of researchers from different
fields of Mathematics and Computer Science for a variety of reasons. In the
first place, this is the resemblance of graph searching to certain pebble games
[18] that model sequential computation. The second motivation of the interest
to the graph searching arises from the VLSI theory. Exploitation of game-theoretic
approaches to some important parameters of graphs layouts such as
the cutwidth [23], the topological bandwidth [22] and the vertex separation
number [11] is very useful for the construction of efficient algorithms. Yet
another reason is connections between graph searching, the pathwidth and the
treewidth. These parameters play very important role in the theory of graph
minors developed by Robertson and Seymour (see [1, 10, 29]). Also some search
Department of Operations Research, Faculty of Mathematics and Mechanics,
St.Petersburg State University, Bibliotechnaya sq.2, St.Petersburg, 198904, Russia, e-mail:
fomin@gamma.math.spbu.ru The research of this author was partially supported by the RFBR
grant N98-01-00934
y Department of Applied Mathematics, Faculty of Mathematics, Syktyvkar State Univer-
sity, Oktyabrsky pr., 55, Syktyvkar, 167001, Russia, e-mail: golovach@ssu.edu.komi.ru. The
research of this author was partially supported by the RFBR grant N96-00-00285.
problems have applications in motion coordinations of multiple robots [30] and
in problems of privacy in distributed environments with mobile eavesdroppers
('bugs') [13]. More information on graph searching and related problems one
can find in surveys [1, 12, 25].
In the 'classical' node-search version of searching (see, e.g.[18]) at every move
of searching a searcher is placed at a vertex or is removed from a vertex. Initially,
all edges are contaminated (uncleared). A contaminated edge is cleared once
both its endpoints are occupied by searchers. A clear edge e is recontaminated
if there is a path without searchers leading from e to a contaminated edge.
The 'classical' search problem is to find a search program such that the
maximal number of searchers used at any move is minimized. In this paper we
are interested in another criterion of optimization. We are looking for node-
search programs with the minimal sum (the sum is taken over all moves of
the search program) of numbers of searchers. We call this criterion the search
cost. Loosely speaking, the cost of a search program is the total number of
'man-steps' used in this program and the search cost is the cost of an optimal
program. The reader is referred to section 2 for formal definitions of searching
and its cost.
One of the most important issues concerning searching is that of recontam-
ination. In some search problems (see [2, 20]) the recontamination does not
help to search a graph, i.e., if searchers can clear the graph then they can do it
without recontamination of previously cleared edges. We establish the monotonicity
of search programs of the smallest cost. To prove the monotonicity
result we use special constructions named clews. Clews are closely related to
crusades used by Bienstock and Seymour in [2] and the notion of clew's measure
is related to the notion of linear width introduced by Thomas in [31].
This paper is organized as follows. In Section 2 we give necessary definitions. In Section 3
we introduce clews and prove the monotonicity of graph searching. In Section 4 it is
proved that for any graph G the search cost of G is equal to the smallest number
of edges of an interval supergraph of G. In Section 5 it is shown that the problem of
computing the search cost is equivalent to the vertex separation sum problem
and the profile minimization problem. In Section 6 we obtain some estimates of the
search cost in terms of the vertex separation number and the sum bandwidth.
In Section 7 we show how to compute the search cost of a product of graphs. In Section 8 we
give a linear time algorithm determining the search cost of a cograph and the
corresponding search program.
2 Statement of the problem
We use the standard graph-theoretic terminology compatible with [6], to which
we refer the reader for basic definitions. Unless otherwise specified, G is an
undirected, simple (without loops and multiple edges) and finite graph with
the vertex set V (G) and the edge set E(G); n denotes the order of G, i.e.,
n. The degree of a vertex v in G is denoted by deg(v) and the
maximum degree of the vertices of a graph G by \Delta(G).
A search program \Pi on a graph G is the sequence of pairs
such that
I. for
II. for vertex incident with an edge in A j
an edge in E(G) \Gamma A j
i is in Z j
III. for
IV. (placing new searchers and clearing edges) for there is
such that Z 1
is the set of all incident with v edges having one end in Z 2
V. (removing searchers and possible recontamination) for
i is the set of all edges e 2 A 1
i such that every path
containing e and an edge of E(G) \Gamma A 1
i , has an internal vertex in Z 2
We call this the search axioms. It is useful to treat Z 1
i as the set of vertices
occupied by searchers right away after placing a new searcher at the ith step;
i as the set of vertices occupied by searchers immediately before making the
1)th step; and A 1
i as the sets of cleared edges.
The well known node search problem [18] is to find \Pi with the smallest max_{1≤i≤n} |Z^1_i|
(this maximum can be treated as the maximum number of
searchers used in one step). Let us suggest an alternative measure of search.
We define the cost of \Pi to be
j. One can interpret the cost of a search
program as the total number of 'man-steps' used for the search or as the total
sum that searchers earn for doing their job. The search cost of a graph G,
denoted by fl(G), is the minimum cost of a search program where minimum is
taken over all search programs on G.
A search program
if for each
(recontamination does not occur when
searchers are removed at the ith step). The monotone search cost of G,
is the cost of the minimal (over all monotone search programs) search program
on G.
Notice that search programs can be defined not only for simple graphs but
for graphs with loops and multiple edges as well. Adding loops and multiple
edges does not change the search cost.
3 Monotone programs and clews
Let G be a graph. For X ' E(G) we define V (X) to be the set of vertices
which are endpoints of X and let
We consider clews only in graphs of special structure. Let G 0 be obtained
by adding a loop at each vertex of a graph G. A clew in G 0 is the sequence
of subsets of E(G 0 ) such that
1.
2. for
3. for i the loop at v also belongs to X i .
The measure of the clew is
progressive
Notice that if a clew (X
Theorem 1 For any graph G and k - 0 the following assertions are equivalent:
(ii) Let G 0 be obtained by adding a loop at each vertex of G. There is a clew
in G 0 of measure - k.
(iii) Let G 0 be obtained by adding a loop at each vertex of G. There is a
progressive clew in G 0 of measure - k.
Proof. (i) As mentioned above,
be a search program on G 0 with the cost - k. We prove that A 2
is the clew of measure - k in G 0 . The third search axiom implies A 2
E(G). The second search axiom says that for every
m is the clew
if for every
Suppose that for some
inequality does not hold. Then there are vertices u; v such
that
Notice that loops e u at vertices u and v
belongs to A 2
. From the fifth search axiom A 1
it follows that
e
. The latter contradicts the fourth axiom.
Choose a clew (X such that
and, subject to (1),
First we prove that for i
is the clew. Using (1), we get
It is easy to check that jffij satisfies the submodular inequality
Combining (3) and (4), we obtain
the loop at v belongs to X
Therefore, V (X we obtain that
is the clew. Taking into account
(5), (1) and (2), we get jX j. Thus we have X
is the clew
contradicting (2). Hence, (X
progressive clew of measure - k in
G 0 . We define the search program on G 0 setting Z 1
ng. Suppose that at the
ith step searchers are placed at vertices of Z 1
i and all edges of X i are cleaned.
Obviously no recontamination occurs by removing all searchers from vertices of
Every edge of X is either the loop at
v, or is incident with v and with a vertex of ffi (X
. Then at the (i
step the searcher placed at v cleans all edges of X Finally,
4 Interval graphs
A graph G is an interval graph, if and only if one can associate with each
vertex v ∈ V(G) an open interval I_v on the real line, such that for
all v, w ∈ V(G), v ≠ w: (v, w) ∈ E(G) if and only if I_v ∩ I_w ≠ ∅. The set of
intervals I = {I_v}_{v∈V(G)} is called an (interval) representation for G.
It is easy to check that every interval graph has an interval representation in
which the left endpoints are distinct integers 1, 2, ..., n. Such a representation
is said to be canonical.
A graph G is a supergraph of the graph G′ if V(G) = V(G′) and E(G′) ⊆
E(G). Let G be an interval graph and let I = {I_v}_{v∈V(G)} be a canonical
representation of G. The length of G with respect to I, denoted by l(G; I), is
We define the length l(G) of an interval graph G as the minimum length over all
canonical representations of G. For any graph G we define the interval length
of G, denoted by il(G), as the smallest length over all interval supergraphs of
G.
We shall use the following property of canonical representation in the proof
of Theorem 3.
I be an interval graph of n vertices and I = fI
, be its canonical representation such that
For ng let P (i) be the set of intervals I v , containing i.
Proof. (6) implies that every interval I contains br
integers. Every ng belongs to jP (i)j intervals of I. Therefore,
For every
(the number of intervals I plus the number
of intervals I From (8) it follows that
which, combined with (7), proves Lemma 2. 2
Theorem 3 For any graph G and k > 0 the following assertions are equivalent:
(i) fl(G) ≤ k;
(ii) il(G) ≤ k;
(iii) there is an interval supergraph I of G such that |E(I)| ≤ k.
Proof. (i) then by Theorem 1 there is a monotone search
program
on G with cost - k. We choose " ! 1 and assign to each vertex v of G the
interval (l "), where a searcher is placed on v at the l v th step and this
searcher is removed from v at the r v th step, i.e., l
and r
g. After the nth step of the search program
all edges of G are cleared, hence for every edge e of G there is a step such
that both ends of e are occupied by searchers. Therefore, interval graph I with
canonical representation is the supergraph of G.
Since for sufficiently small "
(in the left hand side equality each vertex v is counted br times), we see
that
immediately from Lemma 2.
, be a canonical representation
of the minimal length of an interval supergraph I of G. It is clear that r v ! n+1,
Let us describe the following search program on G:
ng we put Z 1
is the vertex assigned
to the interval with the left endpoint
Such actions of searchers do not imply recontamination because each path in I
(and hence in G) from a vertex to a vertex u, l u
a vertex of P (i 1). For each edge e of I (and hence for each edge of G) there
is ng such that both ends of e belong to Z 1
. Then at the nth step
all edges are cleared.
The cost of the program advanced is
5 Linear layouts
A linear layout of a graph G is a one-to-one mapping f : V(G) → {1, ..., n}.
There are various interesting parameters associated with linear layouts. For
example, setting bw(G, f) = max{|f(u) − f(v)| : (u, v) ∈ E(G)}, one can define
the bandwidth of G as
bw(G) = min{bw(G, f) : f is a linear layout of G}.
The bandwidth minimization problem for arises in different applications (see
[8] for a survey).
In some applications, instead of taking the maximum difference |f(u) − f(v)|,
it is more useful to find an 'average bandwidth'. The bandwidth sum of a graph
G is
bw_sum(G) = min{ Σ_{(u,v)∈E(G)} |f(u) − f(v)| : f is a linear layout of G }.
Define the profile [8] of a symmetric n × n matrix A as the minimum
value of the sum Σ_{i=1}^{n} (i − min{j : a_{ij} ≠ 0}),
taken over all symmetric permutations of A, it being assumed that a_{ii} ≠ 0 for every i ∈
{1, ..., n}. Profile may be redefined as a graph invariant p(G) by finding a
linear layout f of G which minimizes the sum Σ_{v∈V(G)} (f(v) − min{f(u) : u ≃ v}), where ≃
stands for 'is adjacent or equal to'. Notice, that bw(G;
ug. Sum bandwidth and profile reductions are also
relevant to the speedup of matrix computations; see [8] for further references.
Also these problems arise in VLSI layouts.
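To make the bandwidth sum and the profile of a given layout concrete, the following Python sketch (ours) evaluates both for a layout f represented as a dictionary from vertices to positions 1, ..., n.

def bandwidth_sum(edges, f):
    return sum(abs(f[u] - f[v]) for u, v in edges)

def profile(vertices, edges, f):
    # f(v) minus the smallest position among v and its neighbours, summed over all v
    lowest = {v: f[v] for v in vertices}
    for u, v in edges:
        lowest[u] = min(lowest[u], f[v])
        lowest[v] = min(lowest[v], f[u])
    return sum(f[v] - lowest[v] for v in vertices)

verts, edges = ["a", "b", "c"], [("a", "b"), ("b", "c")]
f = {"a": 1, "b": 2, "c": 3}
print(bandwidth_sum(edges, f), profile(verts, edges, f))   # 2 2 for the path laid out in order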
For a linear layout f of G we define
cw_i(G, f) = |{(u, v) ∈ E(G) : f(u) ≤ i < f(v)}|
and
cw(G, f) = max_{i∈{1,...,n}} cw_i(G, f).
If we 'draw' G with vertices in a straight line in the order given by f then
cw_i(G, f) is the number of edges that cross the gap between i and i + 1. The
cutwidth of G is
cw(G) = min{cw(G, f) : f is a linear layout of G}.
For
U and there exists v 2 V (G) n U such that (u; v) 2 E(G)g:
Ellis et al. [11] (see also [21]) studied the vertex separation of a graph. Let f
be a linear layout of a graph G. Denote by S i (G; f ng, the set of
vertices ig. The vertex separation with respect to layout f
is
vs(G;
and the vertex separation of a graph G, we denote it by vs(G), is the minimum
vertex separation over all linear layouts of G. Let us define an alternative
'average norm' of vertex separation. The vertex separation sum with respect to
layout f is
vs sum (G;
and the vertex separation sum of a graph G [16] denoted by vs sum (G) is the
minimum vertex separation sum over all linear layouts of G.
Owing to Billionnet [3], for any graph G, the profile of G is equal to the
smallest number of edges over all interval supergraphs of G.
Theorem 4 For any graph G and k > 0 the following assertions are equivalent:
(i) fl(G) ≤ k; (ii) vs_sum(G) ≤ k; (iii) p(G) ≤ k.
Proof. Because (i) ⇔ (iii) follows from [3] and Theorem 3, it remains to prove
(i) ⇔ (ii).
be a monotone search program on G with cost - k. Define a layout
ng so that f(u) ! f(v) iff u accepts a searcher before v does. By the
second search axiom for each i 2 ng jZ 2
and we conclude
that vs sum (G; f) - k.
ng be a linear layout such that vs sum (G; f) -
k. We define the following subsets of V (G):
ng we put Z 1
ng Z 2
For ng and
i be the set of edges induced by [ i
k .
Note that for each edge (u; v) 2 E(G) there is 1g such that
(v)). On this basis it is straightforward
(as in Theorem 3) to prove that the sequence
is the (monotone) search program of cost - k on G. 2
5.1 Complexity remark
The problem of Interval Graph Completion:
Instance: A graph E) and an integer k.
Question: Is there an interval graph G
is NP-complete even when G is stipulated to be an edge graph (see [14], Problem
GT35). Interval Graph Completion arises in computational biology (see,
e.g., [4]) and is known to be FPT [7, 17].
From Theorem 3 it follows immediately that the problem of Search Cost:
deciding, given a graph G and an integer k, whether fl(G) ≤ k or not, is NP-complete
even for edge graphs and that finding the search cost is FPT for a
fixed k.
An O(n 1:722 ) time algorithm was given in [19] for the profile problem for
the case that G is a tree with n vertices.
6 Estimates of the search cost
A split in a graph G is a partition of V of the vertex set of
G such that jV i and the edges of G going from V 1 to V 2
induce a complete bipartite graph. Let V be a split of G and let
be the vertices of the associated complete bipartite graph.
The following proposition can be found in [25].
Proposition 5 Every monotone search program on G has a step at which all
vertices from A 1 or all vertices from A 2 simultaneously carry a searcher.
Corollary 6 Let us think that
Proof. Suppose that the jth step is the first step of the monotone program \Pi
at which all vertices of A 1 are occupied. Then at this step
are placed in A 2 and n 1 searchers in A 1 . Therefore the cost of \Pi is at least2 (j
As an illustration of the corollary we refer to a complete bipartite graph
with bipartition (X; Y j. The monotone search
program of cost 1(n
is easily be constructed. First we
place searchers on vertices of X and then on vertices of Y . Then by Corolary 6
Owing to Theorem 3 one can formulate the following proposition.
Proposition 7 Let G be a graph on n vertices and m edges. Then fl(G) = m if
and only if G is an interval graph, and fl(G) = n(n − 1)/2 if
and only if G is a complete graph.
Further estimates of the search cost are obtained by making use of the vertex
separation sum.
Proposition 8 Let G be a graph on n vertices. Then2
Moreover, if G is connected then2
Proof. Let f be a linear layout of G such that vs sum
It is readily shown that there are
(G;
for
The proof of the second inequality is similar. Note that for any i 6=
Makedon and Sudborough [23] obtain some bounds for the cutwidth in
terms of the maximal degree and the search number. By virtue of (9) it is
not surprising that there are strong connections between the bandwidth sum
and the search cost.
Proposition 9 For any linear layout f of G, bw_sum(G, f) ≤ Δ(G)·vs_sum(G, f).
Therefore, bw_sum(G) ≤ Δ(G)·fl(G).
Proof. The proof is apparent from (9), because for any i ∈ {1, ..., n}, cw_i(G, f) ≤ Δ(G)·|∂(S_i(G, f))|.
The line graph of a graph G is the graph with vertex set E(G), two vertices
being adjacent iff they are adjacent (as edges) in G.
be the line graph of G with
Then
Proof. Let f be an optimal (for the bandwidth sum) linear layout of G. Consider
a linear layout g: E(G) of L(G) such that for any edges a = (u; v),
a
For ng we define
Using
we obtain
vs sum (L(G);
e
Summing
we get
This implies that
vs sum (L(G); g) -
By (10) and Cauchy inequality,
out
Clearly, for every \Delta(G). If we combine this with (11)
and (12), we get
vs sum (L(G)) - \Delta(G)bw sum
7 Product of graphs
The disjoint union of graphs G and H is the graph G ∪ H with the vertex set
V(G) ∪ V(H) and the edge set E(G) ∪ E(H) (where ∪ denotes the disjoint union on
graphs and sets, respectively).
We use G × H to denote the following type of 'product' of G and H: G × H
is the graph with the vertex set V(G) ∪ V(H) and the edge set
E(G) ∪ E(H) ∪ {(u, v) : u ∈ V(G), v ∈ V(H)}.
The following theorem is similar to results on edge-search and node-search
numbers [15, 25] and on the pathwidth and the treewidth [5] of a graph.
Theorem 11 Let G
Proof.
(I) is trivial.
(II). Let f be an optimal layout of G 1 \Theta G 2 , i.e. vs sum (G 1 \Theta G 2
k be the smallest number ensuring
clarity's sake we suppose that
be the 're-
striction' of f to V (G 2 ), i.e. for any u; v
Clearly, for any i 2
In addition, for any i 2 Consequently
i=k\Gamman 1
Finally,
For the other direction. Let f be an arbitrary layout of G 1 and let g be a
layout of G 2 such that vs sum
g)j. Define the layout h of
by the rule
ae
It follows easily that j@S i and
1g. We
conclude that
Similarly,
8 Cographs
Theorem 11 can be used to obtain linear time algorithms for the search cost and
corresponding search strategy on cographs. Recall that a graph G is a cograph
if and only if one of the following conditions is fulfilled:
1. G consists of a single vertex;
2. There are cographs G_1 and G_2 such that G = G_1 ∪ G_2;
3. There are cographs G_1 and G_2 such that G = G_1 × G_2.
A similar algorithm for the treewidth and pathwidth of cographs was described
in [5]. The main idea of the algorithm is to construct a sequence of operations
'∪' and '×' producing the cograph G. With each cograph G one can associate
a binary labeled tree, called the cotree TG . TG has the following properties:
1. Each internal vertex v of T_G has a label label(v) ∈ {0, 1}.
2. There is a bijection π between the set of leaves of T_G and V(G).
3. To each vertex v ∈ V(T_G) we assign the subgraph G_v of G as follows:
(a) If v is a leaf then G_v consists of the single vertex π(v).
(b) If v is an internal vertex and label(v) = 0 then G_v = G_u ∪ G_w, where
u, w are the sons of v.
(c) If v is an internal vertex and label(v) = 1 then G_v = G_u × G_w, where
u, w are the sons of v.
Notice that if r is the root of T_G then G_r = G. Corneil et al. [9] gave an
O(n + e) algorithm for determining whether a given graph G is a
cograph and, if so, for constructing the corresponding cotree.
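A cotree is easy to represent explicitly; the following Python sketch (ours, with hypothetical names) shows one possible node type together with the size computation used by the procedures below.

class CotreeNode:
    def __init__(self, label=None, vertex=None, left=None, right=None):
        self.label = label            # None for a leaf, 0 (disjoint union) or 1 (product) otherwise
        self.vertex = vertex          # the graph vertex assigned to a leaf
        self.left, self.right = left, right
        self.size = None              # |V(G_v)|, filled in by compute_size

def compute_size(v):
    if v.label is None:               # leaf
        v.size = 1
    else:
        v.size = compute_size(v.left) + compute_size(v.right)
    return v.size

# K_2 is the product of two single-vertex cographs:
root = CotreeNode(label=1, left=CotreeNode(vertex="a"), right=CotreeNode(vertex="b"))
print(compute_size(root))   # 2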
Theorem 12 The search cost of a cograph given with a corresponding cotree
can be computed in O(n) time.
Proof. Let r be the root of TG . First we call COMPUTE-SIZE(r). This
recursive procedure computes jV (G v )j for each
procedure COMPUTE-SIZE(v: vertex);
begin
if v is a leaf of TG
then
else
begin COMPUTE-SIZE(left son of v);
COMPUTE-SIZE(right son of v);
size(v):=size(left son of v)+size(right son of v);
Then we call COMPUTE-GAMMA(r).
procedure COMPUTE-GAMMA(v: vertex);
begin
if v is a leaf of TG
then fl(v):=0
else
begin COMPUTE-GAMMA(left son of v);
COMPUTE-GAMMA(right son of v);
if label(v)=0
then fl(v):=maxffl(left son of v), fl(right son of v)g
else
fl(v):= min{size(left son of v)·(size(left son of v)−1)/2 + fl(right son of v),
size(right son of v)·(size(right son of v)−1)/2 + fl(left son of v)}
+ size(left son of v)·size(right son of v)
By Theorem 11, COMPUTE-GAMMA(v) computes fl(G_v) for every vertex v of TG. Both procedures
are called once for each vertex of TG , so the time complexity of these procedures
is O(n). 2
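The combination step for the product case, as we read the label = 1 branch of COMPUTE-GAMMA above, can be written as a small Python helper (ours); n1, n2 are the orders of the two factors and g1, g2 their search costs.

def gamma_product(n1, g1, n2, g2):
    # the recurrence from the label = 1 branch of COMPUTE-GAMMA
    return min(n1 * (n1 - 1) // 2 + g2, n2 * (n2 - 1) // 2 + g1) + n1 * n2

# A complete bipartite graph is the product of two edgeless graphs (search cost 0 each):
print(gamma_product(2, 0, 3, 0))   # K_{2,3}: min(1, 3) + 6 = 7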
Theorem 13 Let G be a cograph of n vertices and e edges. The optimal search
program on G can be computed in O(n + e) time.
Proof. Let TG be the cotree of G (this tree can be computed in O(n + e)
time [9]). Let r be the root of TG . First call COMPUTE-GAMMA(r) and
COMPUTE-SIZE(r). Then call SEARCH(r).
procedure SEARCH(v: vertex);
begin
if v is the root of TG
then begin
if v is a leaf of TG
then begin
else if label(v)=0
then begin
first(left son of v):= first(v);
last(left son of v):= first(v)+ size(left son of v)\Gamma1;
first(right son of v):= first(v)+ size(left son of v);
last(right son of v):= last(v);
SEARCH(left son of v);
SEARCH(right son of v)
else ( label(v)=1)
if size(left son of v) (size(left son of v)\Gamma1)/2 +fl(right son of v)
? size(right son of v) (size(right son of v)\Gamma1)/2 +fl(left son of v)
then begin k:=first(v);
for each vertex w 2 V (G) that is a leaf descendants of the right son of v
begin place(w):=k; remove(w):=last(v); k:=k+1 end
first(left son of v):= first(v)+ size(right son of v);
last(left son of v):= last(v);
SEARCH(left son of v)
else begin k:=first(v);
for each vertex w 2 V (G) that is a leaf descendants of the left son of v
begin place(w):=k; remove(w):=last(v); k:=k+1 end
first(right son of v):= first(v)+ size(left son of v);
last(right son of v):= last(v);
SEARCH(right son of v)
For each leaf v, SEARCH computes the numbers place(v) and
remove(v). Clearly, the procedure is linear in size of the cotree. For the vertex
of G at the place(v)th step a searcher is placed at - \Gamma1 (v) and at the
remove(v)th step the searcher is removed from - \Gamma1 (v). Determining the sets
Determining, for every step j, the set of vertices placed at step j and the set of vertices removed
at step j can be done with bucket sort in O(n) time. Define Z_1 to be the set of vertices placed
at the first step; since each Z_{j+1} is obtained from Z_j by adding the vertices placed at step j + 1
and deleting the vertices removed at step j + 1, computing the sets Z_j can be done in linear time.
Finally, the edge sets A_j can be constructed in
O(n + e) time as follows. Put A_1 = ∅. When a vertex v is placed we scan
its neighbors, and using the characteristic vector of ∪_i Z_i the addition of the edges incident to v
can be done in O(deg(v)) steps.
We have proved that the sequence of pairs (Z_j, A_j)
can be constructed in O(n + e) time. By Theorem 11 this sequence is the search
program and the cost of this program is γ(G). □
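In the same illustrative Python setting (nested-tuple cotrees, not the paper's data structures), the SEARCH procedure can be sketched as follows; it fills in place and remove numbers for the vertices of G and assumes size_and_gamma from the previous sketch.

def leaves(v):
    # Vertices of G appearing in the subtree rooted at v (leaves of the cotree).
    if not isinstance(v, tuple):
        return [v]
    _, left, right = v
    return leaves(left) + leaves(right)

def search_program(root):
    place, remove = {}, {}

    def rec(v, first, last):
        if not isinstance(v, tuple):                 # leaf of the cotree
            place[v], remove[v] = first, last
            return
        label, left, right = v
        nl, gl = size_and_gamma(left)
        nr, gr = size_and_gamma(right)
        if label == 0:                               # union: the sons get disjoint step intervals
            rec(left, first, first + nl - 1)
            rec(right, first + nl, last)
        elif nl * (nl - 1) // 2 + gr > nr * (nr - 1) // 2 + gl:
            for i, w in enumerate(leaves(right)):    # join: right son kept guarded until the end
                place[w], remove[w] = first + i, last
            rec(left, first + nr, last)
        else:
            for i, w in enumerate(leaves(left)):     # join: left son kept guarded until the end
                place[w], remove[w] = first + i, last
            rec(right, first + nl, last)

    n, _ = size_and_gamma(root)
    rec(root, 1, n)
    return place, remove

Calling size_and_gamma inside the recursion repeats work; a real implementation would precompute size(v) and γ(v) for every node, as the paper does, to stay within the stated time bound for this part of the construction.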
9 Concluding remarks
In this paper, we introduced a game-theoretic approach to the problem of interval
completion with the smallest number of edges. There are similar approaches
to the pathwidth and treewidth parameters. The interesting problem is whether
there is a graph-searching 'interpretation' of the fill-in problem.
--R
Graph searching
Monotonicity in graph searching
Basic graph theory: Paths and circuits
A linear recognition algorithm for cographs
The vertex separation and search number of a graph
A graph-theoretic approach to privacy in distributed systems
Computers and Intractability
Tractability of parameterized completion problems on chordal and interval graphs: Minimum fill-in and physical mapping
Searching and pebbling
The profile minimization problem in trees
Recontamination does not help to search a graph
On minimizing width in linear layouts
The complexity of searching a graph
Some extremal search problems on graphs
Graph minors - a survey
Optimal algorithms for a pursuit-evasion problem in grids
Tree decompositions of graphs.
--TR
--CTR
Yung-Ling Lai , Gerard J. Chang, On the profile of the corona of two graphs, Information Processing Letters, v.89 n.6, p.287-292, 31 March 2004
Paolo Detti , Carlo Meloni, A linear algorithm for the Hamiltonian completion number of the line graph of a cactus, Discrete Applied Mathematics, v.136 n.2-3, p.197-215, 15 February 2004
Fedor V. Fomin , Dimitrios M. Thilikos, On the monotonicity of games generated by symmetric submodular functions, Discrete Applied Mathematics, v.131 n.2, p.323-335, 12 September
Sheng-Lung Peng , Chi-Kang Chen, On the interval completion of chordal graphs, Discrete Applied Mathematics, v.154 n.6, p.1003-1010, 15 April 2006
Barrière , Paola Flocchini , Pierre Fraigniaud , Nicola Santoro, Capture of an intruder by mobile agents, Proceedings of the fourteenth annual ACM symposium on Parallel algorithms and architectures, August 10-13, 2002, Winnipeg, Manitoba, Canada
Fedor V. Fomin , Dieter Kratsch , Haiko Müller, On the domination search number, Discrete Applied Mathematics, v.127 n.3, p.565-580, 01 May | interval graph completion;graph searching;linear layout;search cost;profile;cograph;vertex separation
365895 | Probabilistic Models of Appearance for 3-D Object Recognition. | We describe how to model the appearance of a 3-D object using multiple views, learn such a model from training images, and use the model for object recognition. The model uses probability distributions to describe the range of possible variation in the object's appearance. These distributions are organized on two levels. Large variations are handled by partitioning training images into clusters corresponding to distinctly different views of the object. Within each cluster, smaller variations are represented by distributions characterizing uncertainty in the presence, position, and measurements of various discrete features of appearance. Many types of features are used, ranging in abstraction from edge segments to perceptual groupings and regions. A matching procedure uses the feature uncertainty information to guide the search for a match between model and image. Hypothesized feature pairings are used to estimate a viewpoint transformation taking account of feature uncertainty. These methods have been implemented in an object recognition system, OLIVER. Experiments show that OLIVER is capable of learning to recognize complex objects in cluttered images, while acquiring models that represent those objects using relatively few views. | Introduction
Object recognition requires a model of appearance that can be matched to new images. In
this paper, a new model representation will be described that can be derived automatically
from a sample of images of the object. The representation models an object by a probability
distribution that describes the range of possible variation in the object's appearance. Large
and complex variations are handled by dividing the range of appearance into a conjunction
of simpler probability distributions. This approach is general enough to model almost any
range of appearance, whether arising from different views of a 3-D object or from different
instances of a generic object class.
The probability distributions of individual features can help guide the matching process
that underlies recognition. Features whose presence is most strongly correlated with that
of the object can be given priority during matching. Features with the best localization can
contribute most to an estimate of the object's position, while features whose positions vary
most can be sought over the largest image neighborhoods. We hypothesize initial pairings
between model and image features, use them to estimate an aligning transformation, use the
transformation to evaluate and choose additional pairings, and so on, pairing as many features
as possible. The transformation estimate includes an estimate of its uncertainty derived
from the uncertainties of the paired model and image features. Potential feature pairings are
evaluated using the transformation, its uncertainty, and topological relations among features
so that the least ambiguous pairings are adopted earliest, constraining later pairings. The
method is called probabilistic alignment to emphasize its use of uncertainty information.
Two processes are involved in learning a multiple-view model from training images
(Fig. 1). First, the training images must be clustered into groups that correspond to distinct
views of the object. Second, each group's members must be generalized to form a model
view characterizing the most representative features of that group's images. Our method
couples these two processes in such a way that clustering decisions consider how well the
resulting groups can be generalized, and how well those generalizations describe the training
images. The multiple-view model produced thus achieves a balance between the number
of views it contains, and the descriptive accuracy of those views.
Related research
In recent years, there has been growing interest in modeling 3-D objects with information
derived from a set of 2-D views (Breuel, 1992; Murase and Nayar, 1995). For an object that
is even moderately complex, however, many qualitatively distinct views may be needed.
Thus, a multiple-view representation may require considerably more space and complexity
in matching than a 3-D one. Space requirements can be reduced somewhat by allowing
views to share common structures (Burns and Riseman, 1992) and by merging similar views
after discarding features too fine to be reliably discerned (Petitjean, Ponce, and Kriegman,
1992). In this paper, we develop a representation that combines nearby views over a wider
range of appearance by representing the probability distribution of features over a range of
viewpoints.
Figure 1: Learning a multiple-view model from training images requires a clustering of the
training images and a generalization of each cluster's contents.
One method for improving the space/accuracy trade-off of a multiple-view representation
is to interpolate among views. Ullman and Basri (1991) have shown that with three
views of a rigid object whose contours are defined by surface tangent discontinuities, one
can interpolate among the three views with a linear operation to produce other views under
orthographic projection. If the object has smooth contours instead, six views allow for
accurate interpolation. However, for non-rigid or generic models, a more direct form of
sampling and linear interpolation can be more general while giving adequate accuracy, as
described in this paper.
There has also been recent development of methods using dense collections of local fea-
tures, with rotational invariants computed at corner points (Schmid and Mohr, 1997). This
approach has proved very successful with textured objects, but is less suited to geometrically
defined shapes, particularly under differing illumination. The approach described in
this paper can be extended to incorporate any type of local feature into the model represen-
tation, so a future direction for improvement would be to add local image-based features.
Initially, the system has been demonstrated with edge-based features that are less sensitive
to illumination change.
Other approaches to view-based recognition include color histograms (Swain and Bal-
lard, 1991), eigenspace matching (Murase and Nayar, 1995), and receptive field histograms
(Schiele and Crowley, 1996). These approaches have all been demonstrated successfully on
isolated or pre-segmented images, but due to their more global features it has been difficult
to extend them to cluttered and partially occluded images, particularly for objects lacking
distinctive feature statistics.
2.1 Matching with uncertainty
One general strategy for object recognition hypothesizes specific viewpoint transformations
and tests each hypothesis by finding feature correspondences that are consistent with it. This
strategy was used in the first object recognition system (Roberts, 1965), and it has been used
in many other systems since then (Brooks, 1981; Bolles and Cain, 1982; Lowe, 1985; Grimson
and Lozano-Pérez, 1987; Huttenlocher and Ullman, 1990; Nelson and Selinger, 1998).
An example of this approach is the iterative matching in the SCERPO system (Lowe,
1987). A viewpoint transformation is first estimated from a small set of feature pairings.
This transformation is used to predict the visibility and image location of each remaining
model feature. For each of these projected model features, potential pairings with nearby
image features are identified and evaluated according to their expected reliability. The best
ranked pairings are adopted, all pairings are used to produce a refined estimate of the trans-
formation, and the process is repeated until acceptable pairings have been found for as many
of the model features as possible. This paper describes an enhanced version of iterative
matching that incorporates feature uncertainty information.
In a related approach, Wells (1997) has shown how transformation space search can be
cast as an iterative estimation problem solved by the EM algorithm. Using Bayesian theory
and a Gaussian error model, he defines the posterior probability of a particular set of
pairings and a transformation, given some input image. In more recent work, Burl, Weber,
and Perona (1998) provide a probabilistic model giving deformable geometry for local image
patches. The current paper differs from these other approaches by deriving a clustered
view-based representation that accounts for more general models of appearance, incorporating
different individual estimates of feature uncertainty, and making use of a broader range
of features and groupings.
2.2 Use of uncertainty information in matching
Iterative alignment has been used with a Kalman filter to estimate transformations from feature
pairings in both 2D-2D matching (Ayache and Faugeras, 1986) and 2D-3D matching
(Hel-Or and Werman, 1995). Besides being efficient, this allows feature position uncertainty
to determine transformation uncertainty, which in turn is useful in predicting feature
positions in order to rate additional feature pairings (Hel-Or and Werman, 1995). However,
this (partial) least-squares approach can only represent uncertainty in either image or model
features, not both; total least squares can represent both, but may not be accurate in predicting
feature positions from the estimated transformation (Van Huffel and Vandewalle,
1991). Most have chosen to represent image feature uncertainty; we have chosen to emphasize
model feature uncertainty, which in our case carries the most useful information.
3 Model representation
An object model is organized on two levels so that it can describe the object's range of appearance
both fully and accurately. At one level, large variations in appearance are handled
by subdividing the entire range of variation into discrete subranges corresponding to
distinctly different views of the object; this is a multiple-view representation. At a second
level, within each of the independent views, smaller variations are described by probability
distributions that characterize the position, attributes, and probability of detection for individual
features.
The only form of appearance variation not represented by the model is that due to varying
location, orientation, and scale of the object within the image plane. Two mechanisms
accommodate this variation. One is the viewpoint transformation, which aligns a model
view with an appropriate region of the image; we shall describe it in section 4. The other is
the use of position-invariant representations for attributes, which allow feature attributes to
be compared regardless of the feature positions.
3.1 Simplifying approximation of feature independence
Our method lets each model view describe a range of possible appearances by having it
define a joint probability distribution over image graphs. However, because the space of
image graphs is enormous, it is not practical to represent or learn this distribution in its most
general form. So instead, the joint distribution is approximated by treating its component
features as though they were independent. This approximation allows the joint distribution
to be decomposed into a product of marginal distributions, thereby greatly simplifying the
representation, matching, and learning of models.
One consequence of this simplification is that statistical dependence (association or covariance)
among model features cannot be accurately represented within a single model
view. Consider, for example, an object whose features are divided among two groups, only
one of which appears in any instance. With its strongly covariant features, this object would
be poorly represented by a single view. However, where one view cannot capture an important
statistical dependence, multiple views can. In this example, two model views, each
containing one of the two subsets of features, could represent perfectly the statistical dependence
among them.
By using a large enough set of views, we can model any object as accurately as we wish.
For economy, however, we would prefer to use relatively few views and let each represent
a moderate range of possible appearances. The model learning procedure described in section
6 gives a method for balancing the competing aims of accuracy and economy.
3.2 Model view representation
A single model view is represented by a model graph. A model graph has nodes that represent
features, and arcs that represent composition and abstraction relations among features.
Each node records the information needed to estimate three probability distributions characterizing
its feature:
1. The probability of observing this feature in an image depicting the modeled view of
the object. This is estimated from a record of the number of times the model feature
has been identified in training images by being matched to a similar image feature.
2. Given that this feature is observed, the probability of it having a particular position.
This is characterized by a probability distribution over feature positions. We approximate
this distribution as Gaussian to allow use of an efficient matching procedure
based on least squares estimation. The parameters of the distribution are estimated
from sample feature positions acquired from training images.
3. Given that this feature is observed, the probability of it having particular attribute
values. This is characterized by a probability distribution over vectors of attribute
values. Little can be assumed about the form of this distribution because it may depend
on many factors: the type of feature, how its attributes are measured, possible
deformations of the object, and various sources of measurement error. Thus we use a
non-parametric density estimator that makes relatively few assumptions. To support
this estimator, the model graph node records sample attribute vectors acquired from
training images.
3.3 Model notation
An object's appearance is modeled by a set of model graphs {G_i}. A model graph G_i is a
tuple ⟨F, R, m⟩, where F is a set of model features, R is a relation over elements of F, and
m is the number of training images used to produce G_i.
A model feature j ∈ F is represented by a tuple of the form ⟨t_j, m_j, Ā_j, b̄_j⟩.
Feature j's type is represented by t_j, whose value is one of a set of symbols denoting different
types of features. The element m_j specifies in how many of the m training images feature
j was found. The series Ā_j contains the attribute vectors of those training image features
that matched j. The dimension and interpretation of these vectors depend on j's type. The
series b̄_j contains the mean positions of the training image features that matched j. These
positions, although drawn from separate training images, are expressed in a single, common
coordinate system, which is described in the following section.
From j's type t_j, one can determine whether j is a feature that represents a grouping or
abstraction of other features. If so then R will contain a single element, ⟨j; l_1, . . . , l_n⟩, specifying
j's parts as being l_1 through l_n. The number of parts n may depend on j. Moreover,
any l_i may be the special symbol ⊥, which indicates that the part is not defined, and perhaps
not represented in the model graph.
4 Coordinate systems and viewpoint transformations
A feature's position is specified by a 2-D location, orientation, and scale. Image features
are located in an image coordinate system of pixel rows and columns. Model features are
located in a model coordinate system shared by all features within a model graph.
Two different schemes are used to describe a feature's position in either coordinate system:
xyθs: The feature's location is specified by [x y], its orientation by θ, and its scale by s.
xyuv: The feature's location is specified by [x y], and its orientation and scale are represented
by the direction and length of the 2-D vector [u v].
We shall use the xyθs scheme for measuring feature positions, and the xyuv scheme to provide
a linear approach for aligning features in the course of matching a model with an image.
The two schemes are related by u = s cos θ and v = s sin θ. Where it is not otherwise clear we
shall indicate which scheme we are using with the superscripts xyθs and xyuv.
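As a small illustration (not code from the paper), the two position encodings can be converted back and forth as follows:

import math

def xyts_to_xyuv(x, y, theta, s):
    # u and v encode orientation and scale jointly: u = s*cos(theta), v = s*sin(theta)
    return x, y, s * math.cos(theta), s * math.sin(theta)

def xyuv_to_xyts(x, y, u, v):
    # the inverse mapping: scale is the length of [u v], orientation its direction
    return x, y, math.atan2(v, u), math.hypot(u, v)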
The task of matching a model with an image includes that of determining a viewpoint
transformation that closely aligns image features with model features. The viewpoint trans-
formation, T, is a mapping from 2-D image coordinates to 2-D model coordinates: it transforms
the position of an image feature to that of a model feature.
4.1 Similarity transformations
A 2-D similarity transformation can account for translation, rotation, and scaling of an ob-
ject's projected image. It does not account for effects of rotation in depth, nor changes in
perspective as an object moves towards or away from the camera.
A 2-D similarity transformation decomposed into a rotation by θ_t, a scaling by s_t, and
a translation by [x t y t ], in that order, can be expressed as a linear operation using the xyuv
scheme, as Ayache and Faugeras (1986), among others, have done. The linear operation has
two formulations in terms of matrices. We shall present both formulations here, and have
occasion to use both in section 5.
We shall develop the formulations by first considering the transformation of a point location
from [x_k y_k] to [x'_k y'_k]. We can write it as

  [x'_k; y'_k] = s_t [cos θ_t, −sin θ_t; sin θ_t, cos θ_t] [x_k; y_k] + [x_t; y_t].   (1)

Defining u_t = s_t cos θ_t and v_t = s_t sin θ_t allows us to rewrite this as either

  [x'_k; y'_k] = [u_t, −v_t; v_t, u_t] [x_k; y_k] + [x_t; y_t]   (2)

or

  [x'_k; y'_k] = [1, 0, x_k, −y_k; 0, 1, y_k, x_k] [x_t; y_t; u_t; v_t].   (3)

Now consider a vector [u_k v_k] whose direction represents an orientation and whose magnitude
represents a length. When mapped by the same transformation, this vector must be
rotated by θ_t and scaled by s_t to preserve its meaning. Continuing to use u_t = s_t cos θ_t
and v_t = s_t sin θ_t, we can write the transformation of [u_k v_k] as either

  [u'_k; v'_k] = [u_t, −v_t; v_t, u_t] [u_k; v_k]   (4)

or

  [u'_k; v'_k] = [0, 0, u_k, −v_k; 0, 0, v_k, u_k] [x_t; y_t; u_t; v_t].   (5)

Equations 3 and 5 together give us one complete formulation of the transformation. We
can write it with a matrix A_k representing the position b_k being transformed,
and a vector t representing the transformation:

  b'_k = A_k t, where A_k = [1, 0, x_k, −y_k; 0, 1, y_k, x_k; 0, 0, u_k, −v_k; 0, 0, v_k, u_k]
  and t = [x_t; y_t; u_t; v_t].   (6)

Equations 2 and 4 together give us another complete formulation. We can write it with
a matrix A_t representing the rotation and scaling components of the transformation, and a
vector x_t representing the translation components:

  b'_k = A_t b_k + x_t, where A_t = [u_t, −v_t, 0, 0; v_t, u_t, 0, 0; 0, 0, u_t, −v_t; 0, 0, v_t, u_t]
  and x_t = [x_t; y_t; 0; 0].   (7)
Because it can be expressed as a linear operation, the viewpoint transformation can be
estimated easily from a set of feature pairings. Given a model feature at b j and an image
feature at b k , the transformation aligning the two features can be obtained as the solution to
the system of linear equations b_j = A_k t. With additional feature pairings, the problem of
estimating the transformation becomes over-constrained; then the solution that is optimal in
the least squares sense can be found by least squares estimation. We shall describe a solution
method in section 5.3.
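The following Python sketch (using NumPy, and the equation numbering reconstructed above) shows how a transformation can be estimated from pairings by ordinary least squares; the paper's own estimator additionally weights each pairing by the model feature's position covariance and solves the system recursively (Sect. 5.4).

import numpy as np

def a_matrix(x, y, u, v):
    # The matrix A_k of equation (6): maps t = [x_t, y_t, u_t, v_t] to the
    # transformed xyuv position of a feature located at (x, y, u, v).
    return np.array([[1.0, 0.0,   x,  -y],
                     [0.0, 1.0,   y,   x],
                     [0.0, 0.0,   u,  -v],
                     [0.0, 0.0,   v,   u]])

def estimate_transform(pairs):
    # pairs: list of (b_image, b_model), each an xyuv 4-vector.
    # Stacks the equations b_model = A_k t for all pairings and solves for t.
    A = np.vstack([a_matrix(*b_img) for b_img, _ in pairs])
    b = np.concatenate([np.asarray(b_mod, dtype=float) for _, b_mod in pairs])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t    # [x_t, y_t, u_t, v_t]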
5 Matching and recognition methods
Recognition requires finding a consistent set of pairings between some model features and
some image features, plus a viewpoint transformation that brings the paired features into
close correspondence. Identifying good matches requires searching among many possible
combinations of pairings and transformations. Although the positions, attributes, and relations
of features provide constraints for narrowing this search, a complete search is still
impractical. Instead the goal is to order the search so that it is likely to find good matches
sooner rather than later, stopping when an adequate match has been found or when many of
the most likely candidates have been examined. Information about feature uncertainty can
help by determining which model features to search for first, over what size image neighbourhoods
to search for them, and how much to allow each to influence an estimate of the
viewpoint transformation.
5.1 Match quality measure
A match is a consistent set of pairings between some model and image features, plus a transformation
closely aligning paired features. We seek a match that maximizes both the number
of features paired and the similarity of paired features.
Pairings are represented by E = {e_j}, where e_j = k if model feature j is paired with
image feature k, and e_j = ⊥ if it matches nothing. H denotes the hypothesis that the modeled
view of the object is present in the image. Match quality is associated with the probability
of H given a set of pairings E and a viewpoint transformation T, which Bayes' theorem lets
us write as

  P(H | E, T) = P(E | H, T) P(H | T) / P(E | T).

There is no practical way to represent the high-dimensional, joint probability functions P(E | H, T)
and P(E | T), so we approximate them by adopting simplifying assumptions of feature
independence. The joint probabilities are decomposed into products of low-dimensional,
marginal probability functions, one per feature:

  P(E | H, T) = Π_j P(e_j | H, T),    P(E | T) = Π_j P(e_j | T).

The measure is defined using log-probabilities to simplify calculations. Moreover, all positions
of a modeled view within an image are assumed equally likely, so P(T | H) = P(T).
With these simplifications the measure becomes

  log P(H) + Σ_j [ log P(e_j | H, T) − log P(e_j | T) ].
P(H), the prior probability that the object as modeled is present in the image, can be estimated
from the proportion of training images used to construct the model. The remaining
terms are described using the following notation for random events: ẽ_j = k, the event that
model feature j matches image feature k; ẽ_j = ⊥, the event that it matches nothing; ã_j = a,
the event that it matches a feature whose attributes are a; and b̃_j = b, the event that it
matches a feature whose position, in model coordinates, is b.
There are two cases to consider in estimating the conditional probability, P(e_j | H, T),
for a model feature j.
1. When j is unmatched, this probability is estimated by considering how often j was
found during training. We use a Bayesian estimator, a uniform prior, and the m and
m_j recorded by the model:

  P(ẽ_j = ⊥ | H) ≈ 1 − (m_j + 1)/(m + 2).   (10)

2. When j is matched to image feature k, this probability is estimated by considering
how often j matched an image feature during training, and how the attributes and position
of k compare with those of previously matching features:

  P(ẽ_j = k | H, T) ≈ P(ẽ_j ≠ ⊥ | H) P(ã_j = a_k | H) P(b̃_j = b_k | H, T).   (11)
Figure 2: Comparison of image and model feature positions. An image feature's position
is transformed from image coordinates (left) to model coordinates (right) according to an
estimate of the viewpoint transformation. Uncertainty in the positions and the transformation
are characterized by Gaussian distributions that are compared in the model coordinate
space.
P(ẽ_j ≠ ⊥ | H) is estimated as in (10). P(ã_j = a_k | H) is estimated using the series of attribute
vectors Ā_j recorded with model feature j, and a non-parametric density estimator described
in (Pope, 1995). Estimation of P(b̃_j = b_k | H, T), the probability that model feature j will
match an image feature at position b_k with transformation T, is described in Sect. 5.2.
Estimates of the prior probabilities are based, in part, on measurements from a collection
of images typical of those in which the object will be sought. From this collection we obtain
prior probabilities of encountering various types of features with various attribute values.
Prior distributions for feature positions assume a uniform distribution throughout a bounded
region of model coordinate space.
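A schematic Python rendering of the measure (with the per-feature probability estimates supplied by the caller, and using the reconstructed form of the measure given above rather than a formula quoted verbatim from the paper) might look like this:

import math

def match_quality(pairings, prior_h, p_match, p_unmatch, p_prior):
    # pairings maps each model feature j to an image feature k or to None.
    # p_match[j]   estimates P(e_j = k | H, T) for the adopted pairing,
    # p_unmatch[j] estimates P(e_j = unmatched | H), and
    # p_prior[j]   estimates P(e_j | T), the corresponding prior probability.
    g = math.log(prior_h)
    for j, k in pairings.items():
        p = p_unmatch[j] if k is None else p_match[j]
        g += math.log(p) - math.log(p_prior[j])
    return g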
5.2 Estimating feature match probability
The probability that a model and image feature match depends, in part, on their positions and
on the aligning transformation. This dependency is represented by the P(b̃_j = b_k | H, T)
term in (11). To estimate it, we transform the image feature's position into model coor-
dinates, and then compare it with the model feature's position (Fig. 2). This comparison
considers the uncertainties of the positions and transformation, which are characterized by
Gaussian PDFs.
Image feature k's position is reported by its feature detector as a Gaussian PDF in xyθs
image coordinates with mean b_k^xyθs and covariance matrix C_k^xyθs. To allow its transformation
into model coordinates, this PDF is re-expressed in xyuv image coordinates using an
approximation adequate for small θ and s variances. The approximating PDF has a mean,
b_k^xyuv, at the same position as b_k^xyθs, and a covariance matrix C_k^xyuv that aligns the Gaussian
envelope radially, away from the [u v] origin:

  C_k^xyuv ≈ [σ_l^2 I_2, 0; 0, R(θ_k) diag(σ_s^2, s_k^2 σ_θ^2) R(θ_k)^T],

where R(θ_k) is the 2×2 rotation by the feature's orientation, so that the [u v] block has radial
variance σ_s^2 and tangential variance s_k^2 σ_θ^2, and σ_l^2, σ_s^2 and σ_θ^2
are the variances in image feature position, scale and orientation estimates.
T is characterized by a Gaussian PDF over [x_t y_t u_t v_t] vectors, with mean t and covariance
C_t estimated from feature pairings as described in Sect. 5.4. Using it to transform the
image feature position from xyuv image to model coordinates again requires an approxima-
tion. If we were to disregard the uncertainty in T, we would obtain a Gaussian PDF in model
coordinates with mean A_k t and covariance A_t C_k A_t^T. Alternatively, disregarding the uncertainty
in k's position gives a Gaussian PDF in model coordinates with mean A_k t and
covariance A_k C_t A_k^T. With Gaussian PDFs for both feature position and transformation,
however, the transformed position's PDF is not of Gaussian form. At best we can approximate
it as such, which we do with a mean and covariance given in xyuv coordinates by

  b_kt = A_k t,    C_kt = A_t C_k A_t^T + A_k C_t A_k^T.

Model feature j's position is also described by a Gaussian PDF in xyuv model coordinates.
Its mean b_j and covariance C_j are estimated from the series of position vectors b̄_j recorded
by the model.
The desired probability (that j matches k according to their positions and the transforma-
tion) is estimated by integrating, over all xyuv model coordinate positions r, the probability
that both the transformed image feature is at r and the model feature matches something at
r:
  P(b̃_j = b_k | H, T) = ∫_r P(r̃_kt = r) P(r̃_j = r) dr.

Here r̃_j and r̃_kt are random variables drawn from the Gaussian distributions N(b_j, C_j) and
N(b_kt, C_kt). It would be costly to evaluate this integral by sampling it at various r, but
fortunately the integral can be rewritten as a Gaussian since it is essentially one component
in a convolution of two Gaussians:

  P(b̃_j = b_k | H, T) = G(b_j − b_kt; C_j + C_kt),

where G(x; C) is a Gaussian with zero mean and covariance C. In this form, the desired
probability is easily computed.
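The closed form above is straightforward to evaluate; the sketch below (NumPy, with variable names chosen here rather than taken from the paper, and using the combined mean and covariance as reconstructed above) computes the transformed image feature position and then the Gaussian value.

import numpy as np

def gaussian(x, C):
    # G(x; C): zero-mean Gaussian with covariance C, evaluated at x.
    x = np.asarray(x, dtype=float)
    k = x.size
    norm = 1.0 / np.sqrt(((2.0 * np.pi) ** k) * np.linalg.det(C))
    return norm * np.exp(-0.5 * x @ np.linalg.solve(C, x))

def position_match_probability(b_j, C_j, b_k, C_k, t, C_t, A_k, A_t):
    # b_kt = A_k t and C_kt = A_t C_k A_t^T + A_k C_t A_k^T (as reconstructed above);
    # the probability is then G(b_j - b_kt; C_j + C_kt).
    b_kt = A_k @ np.asarray(t, dtype=float)
    C_kt = A_t @ C_k @ A_t.T + A_k @ C_t @ A_k.T
    return gaussian(np.asarray(b_j, dtype=float) - b_kt, C_j + C_kt)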
5.3 Matching procedure
Recognition and learning require the ability to find a match between a model graph and an
image graph that maximizes the match quality measure. It does not seem possible to find an
optimal match through anything less than exhaustive search. Nevertheless, good matches
can usually be found quickly by a procedure that combines qualities of both graph matching
and iterative alignment.
5.3.1 Probabilistic alignment
To choose the initial pairings, possible pairings of high level features are rated according to
the contribution each would make to the match quality measure. The pairing ⟨j, k⟩ receives
the rating

  max_T log P(ẽ_j = k | H, T).   (12)

This rating favors pairings in which j has a high likelihood of matching, j and k have similar
attribute values, and the transformation estimate obtained by aligning j and k has low
variance. The maximum over T is easily computed because P(ẽ_j = k | H, T) is a Gaussian function
of T.
Alignments are attempted from these initial pairings in order of decreasing rank. Each
alignment begins by estimating a transformation from the initial pairing, and then proceeds
by repeatedly identifying additional consistent pairings, adopting the best, and updating the
transformation estimate with them until the match quality measure cannot be improved fur-
ther. At this stage, pairings are selected according to how each might improve the match
quality measure; thus ⟨j, k⟩ receives the rating

  log P(ẽ_j = k | H, T),

evaluated with the current estimate of T. This favors the same qualities as equation 12 while also favoring pairings that are aligned
closely by the estimated transformation. In order for ⟨j, k⟩ to be adopted, it must rate at least
as well as the alternative of leaving j unmatched, which receives the rating

  log P(ẽ_j = ⊥ | H).
Significant computation is involved in rating and ranking the pairings needed to extend
an alignment. Consequently, pairings are adopted in batches so that this computation need
only be done infrequently. Moreover, in the course of an alignment, batch size is increased
as the transformation estimate is further refined so that each batch can be made as large as
possible. A schedule that seems to work well is to start an alignment with a small batch of
pairings (we use five), and to double the batch size with each batch adopted.
5.4 Estimating the aligning transformation
From a series of feature pairings, an aligning transformation is estimated by finding the
least-squares solution to a system of linear equations. Each pairing ⟨j, k⟩ contributes to the
system the equations

  U_j^{-1} A_k t = U_j^{-1} b_j + ẽ,

where A_k is the matrix representation of image feature k's mean position, t is
the transformation estimate, and b_j is model feature j's mean position. U_j is the upper
triangular square root of j's position covariance (i.e., C_j = U_j U_j^T); multiplying by U_j^{-1} weights both sides
of the equation so that the residual error ẽ has unit variance.
A recursive estimator solves the system, efficiently updating the transformation estimate
as pairings are adopted. We use the square root information filter (SRIF) (Bierman, 1977)
form of the Kalman filter for its numerical stability, and its efficiency with batched measure-
ments. The SRIF works by updating the square root of the information matrix, which is the
inverse of the estimate's covariance matrix. The initial square root, R_1, and state vector, z_1,
are obtained from the first pairing ⟨j, k⟩ by

  R_1 = U_j^{-1} A_k,    z_1 = U_j^{-1} b_j.

With each subsequent pairing ⟨j, k⟩, the estimate is updated by triangularizing a matrix composed
of the previous estimate and data from the new pairing:

  Q_i [ R_i, z_i ; U_j^{-1} A_k, U_j^{-1} b_j ] = [ R_{i+1}, z_{i+1} ; 0, e ],

where Q_i is an orthogonal matrix chosen to make R_{i+1} upper triangular. When needed, the
transformation and its covariance are obtained from the triangular R_i by back substitution:

  t = R_i^{-1} z_i,    C_t = R_i^{-1} R_i^{-T}.
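A compact NumPy sketch of this recursive estimator is given below. It initializes and updates the triangular factor with QR factorizations, which is one standard way to realize an SRIF; the whitened inputs A_w = U_j^{-1} A_k and b_w = U_j^{-1} b_j are assumed to be supplied by the caller.

import numpy as np

def srif_init(A_w, b_w):
    # Triangularize the first pairing's whitened equations to obtain R_1 and z_1.
    q, r = np.linalg.qr(np.hstack([A_w, b_w[:, None]]))
    return r[:4, :4], r[:4, 4]

def srif_update(R, z, A_w, b_w):
    # Stack the current factor [R | z] over the new whitened equations and
    # re-triangularize; the discarded bottom row carries the residual.
    stacked = np.vstack([np.hstack([R, z[:, None]]),
                         np.hstack([A_w, b_w[:, None]])])
    q, r = np.linalg.qr(stacked)
    return r[:4, :4], r[:4, 4]

def srif_solve(R, z):
    # Back substitution: t = R^{-1} z and C_t = R^{-1} R^{-T}.
    t = np.linalg.solve(R, z)
    R_inv = np.linalg.inv(R)
    return t, R_inv @ R_inv.T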
5.5 Verification
Once a match has been found between a model graph and an image graph, it must be decided
whether the match represents an actual instance of the modeled object in the image.
A general approach to this problem would use decision theory to weigh prior expectations,
evidence derived from the match, and the consequences of an incorrect decision. However,
we will use a simpler approach that only considers the number and type of matching features
and the accuracy with which they match.
The match quality measure used to guide matching provides one indication of a match's
significance. A simple way to accept or reject matches, then, might be to require that this
measure exceeds some threshold. However, the measure is unsuitable for this use because
its range differs widely among objects according to what high-level features they have. High-level
features that represent groupings of low-level ones violate the feature independence
assumption; consequently, the match quality measure is biased by an amount that depends
on what high-level features are present in the model. Whereas this bias seems to have no
adverse effect on the outcome of matching any one model graph, it makes it difficult to establish
a single threshold for testing the match quality measure of any model graph. Thus the
verification method we present here considers only the lowest-level features of the model
graph-those that do not group any other model graph features.
When counting paired model features, we weight each one according to its likelihood of
being paired, thereby assigning greatest importance to the features that contribute most to
the likelihood that the object is present. For model feature j, the likelihood of being paired,
P(ẽ_j ≠ ⊥ | H), is estimated using statistics recorded for feature j.
The count of each model feature is also weighted according to how well it is fit by image
features. When j is a curve segment, this weighting component is based on the fraction
of j matched by nearby image curve segments. The fraction is estimated using a simple
approximation: The lengths of image curve segments matching j are totaled, the total length
is transformed into model coordinates, and the transformed value is divided by the length of
j. With s t denoting the scaling component of the viewpoint transformation T , and s j and s k
denoting the lengths of j and k in model and image coordinates, respectively, the fraction
of j covered by image curve segments is defined as (s_t Σ_k s_k) / s_j, where the sum ranges over
the image curve segments k matched to j; for a model feature that is not a curve segment, this
weighting component is taken to be 1.
If we were to accept matches that paired a fixed number of model features regardless of
model complexity, then with greater model complexity we would have an increased likelihood
of accepting incorrect matches. For example, requiring that ten model features be
paired may make sense for a model of twenty features, but for a model of a thousand fea-
tures, any incorrect match is likely to contain at least that many "accidental" pairings.
Thus we have chosen instead to require that some minimum fraction of the model's
lowest-level features be paired. We define this fraction as

  Support(E, T) = ( Σ_{j : e_j ≠ ⊥} P(ẽ_j ≠ ⊥ | H) coverage(j) ) / ( Σ_j P(ẽ_j ≠ ⊥ | H) ),

where both sums range over the lowest-level model features and coverage(j) denotes the
fitted-fraction weighting described above.
A match ⟨E, T⟩ is accepted if Support(E, T) achieves a certain threshold τ. To validate
this verification method and to determine a suitable value for τ, we have measured
the distribution of Support(E, T) for correct and incorrect matches between various model
graphs and their respective training image graphs. The distributions are well separated, with
most correct matches achieving considerably higher Support values than incorrect matches.
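Under the reading of the Support measure reconstructed above (an interpretation of a partially garbled passage, not a verbatim formula from the paper), the computation is simple:

def support(low_level_features, pairings, coverage):
    # low_level_features maps feature id j to its pairing likelihood P(e_j matched | H);
    # pairings maps j to an image feature or None; coverage maps j to the fitted
    # fraction for curve segments (taken as 1.0 for other feature types).
    num = sum(w * coverage.get(j, 1.0)
              for j, w in low_level_features.items()
              if pairings.get(j) is not None)
    den = sum(low_level_features.values())
    return num / den if den > 0 else 0.0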
6 Model learning procedure
The learning procedure assembles one or more model graphs from a series of training images
showing various views of an object. To do this, it clusters the training images into
groups and constructs model graphs generalizing the contents of each group. We shall describe
first the clustering procedure, and then the generalization procedure, which the clustering
procedure invokes repeatedly.
We use X to denote the series of training images for one object. During learning, the
object's model M consists of a series of clusters X_i ⊆ X, each with an associated model
graph Ḡ_i. Once learning is complete, only the model graphs must be retained to support
recognition.
6.1 Clustering training images
An incremental conceptual clustering algorithm is used to create clusters among the training
images. Clustering is incremental in that, as each training image is acquired, it is assigned
to an existing cluster or used to form a new one. Like other conceptual clustering
algorithms, such as COBWEB (Fisher, 1987), the algorithm uses a global measure of over-all
clustering quality to guide clustering decisions. This measure is chosen to promote and
balance two somewhat-conflicting qualities. On one hand, it favors clusterings that result in
simple, concise, and efficient models, while on the other hand, it favors clusterings whose
resulting model graphs accurately characterize the training images.
The minimum description length principle (Rissanen, 1983) is used to quantify and balance
these two qualities. The principle suggests that the learning procedure choose a model
that minimizes the number of symbols needed to encode first the model and then the training
images. It favors simple models as they can be encoded concisely, and it favors accurate
models as they allow the training images to be encoded concisely once the model has been
provided. The clustering quality measure to be minimized is defined as L(M)+L(X j M),
where L(M) is the number of bits needed to encode the model M, and L(X j M) is the
number of bits needed to encode the training images X when M is known.
To define L(M) we specify a coding scheme for models that concisely enumerates each
of a model's graphs along with its nodes, arcs, attribute vectors and position vectors (see
(Pope, 1995) for full details of the coding scheme). Then L(M) is simply the number of
bits needed to encode M according to this scheme.
To define L(X | M) we draw on the fact that given any probability distribution P(x),
there exists a coding scheme, the most efficient possible, that achieves essentially a code
length of −log_2 P(x). Recall that the match quality measure is based on an estimate of the probability
that a match represents a true occurrence of the modeled object in the image. We use this
probability to estimate P(X | Ḡ_i), the probability that the appearance represented by image
graph X may occur according to the appearance distribution represented by model graph Ḡ_i.
This probability can be computed for any given image graph X and model graph Ḡ_i, using
the matching procedure (Sect. 5.3) to maximize P(H | E, T). It is then
used to estimate the length of an encoding of X given Ḡ_i:

  L(X | Ḡ_i) = −log_2 P(X | Ḡ_i) + L_u(X, E).

The L_u(X, E) term is the length of an encoding of unmatched features of X, which we
define using a simple coding scheme comparable to that used for model graphs. Finally, we
define L(X | M) by assuming that for any X ∈ X_i ⊆ X, the best match between X and
any Ḡ_j in M will be that between X and Ḡ_i (the model graph obtained by generalizing the
group containing X). Then the length of the encoding of each X ∈ X in terms of the set of
model graphs M is the sum of the lengths of the encodings of each in terms of its respective
model graph:

  L(X | M) = Σ_i Σ_{X ∈ X_i} L(X | Ḡ_i).
As each training image is acquired it is assigned to an existing cluster or used to form a
new one. Choices among clustering alternatives are made to minimize the resulting L(M) +
L(X | M). When evaluating an alternative, each cluster's subset of training images X_i is
first generalized to form a model graph Ḡ_i as described below.
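The clustering decision itself can be sketched at a high level as follows (Python; the description_length callback, which regeneralizes the affected clusters and returns L(M) + L(X | M) in bits, is an assumed helper rather than something defined in the paper):

def assign_training_image(clusters, new_image, description_length):
    # Candidate clusterings: add the new image to each existing cluster, or
    # start a new cluster containing just this image.
    candidates = [[list(c) for c in clusters] for _ in range(len(clusters) + 1)]
    for i in range(len(clusters)):
        candidates[i][i].append(new_image)
    candidates[-1].append([new_image])
    # Keep the alternative with the smallest total description length.
    return min(candidates, key=description_length)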
6.2 Generalizing training images
Within each cluster, training images are merged to form a single model graph that represents
a generalization of those images. An initial model graph is formed from the first training
image's graph. That model graph is then matched with each subsequent training image's
graph and revised after each match according to the match result. A model feature j that
matches an image feature k receives an additional attribute vector a k and position b k for
its series -
A j and -
Unmatched image features are used to extend the model graph, while
model features that remain largely unmatched are eventually pruned. After several training
images have been processed in this way the model graph nears an equilibrium, containing
the most consistent features with representative populations of sample attribute vectors and
positions for each.
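The per-cluster generalization step can be sketched like this (Python; the dictionary layout, the pruning threshold, and the minimum number of images before pruning are illustrative assumptions, not values taken from the paper):

def update_model_graph(model, image_features, matching, prune_below=0.2, min_images=4):
    # model: {'m': images seen so far, 'features': {j: {'m_j', 'A_j', 'b_j'}}}
    #        (feature ids j are assumed to be integers here)
    # image_features: {k: {'attributes': ..., 'position': ...}} for one training image
    # matching: {j: k or None}, the result of probabilistic alignment
    model['m'] += 1
    matched_ks = {k for k in matching.values() if k is not None}
    for j in list(model['features']):
        f = model['features'][j]
        k = matching.get(j)
        if k is not None:                        # accumulate samples for matched features
            f['m_j'] += 1
            f['A_j'].append(image_features[k]['attributes'])
            f['b_j'].append(image_features[k]['position'])
        elif model['m'] >= min_images and f['m_j'] / model['m'] < prune_below:
            del model['features'][j]             # prune features that are rarely found
    next_id = max(model['features'], default=-1) + 1
    for k, feat in image_features.items():
        if k not in matched_ks:                  # unmatched image features extend the model
            model['features'][next_id] = {'m_j': 1,
                                          'A_j': [feat['attributes']],
                                          'b_j': [feat['position']]}
            next_id += 1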
7 Experimental results
In this section we describe several experiments involving a system implemented to test our
recognition learning method. This system, called OLIVER, learns to recognize 3-D objects
in 2-D intensity images.
OLIVER has been implemented within the framework of Vista, a versatile and extensible
software environment designed to support computer vision research (Pope and Lowe, 1994).
Both OLIVER and Vista are written in C to run on UNIX workstations. The execution times
reported here were measured on a Sun SPARCstation 10/51 processor.
The focus of this research has been on the model learning and representation, which
is independent of the particular features used for matching. To test the approach, we have
chosen to use a basic repertoire of edge-based features. While some recent approaches to
recognition have been based on image pixel intensities or image derivative magnitudes, the
locations of intensity discontinuities may be more robust to illumination and imaging vari-
ations. For example, the silhouette boundaries of an object on a cluttered background will
have image derivatives of unknown sign and magnitude. The same is true for edges separating
surfaces of different orientation under differing directions of illumination.
Figure 3: Bunny training images. Images were acquired at 5° intervals over camera elevations
of 0° to 25° and azimuths of 0° to 90°. Shown here are three of the 112 images.
In the following experiments, the lowest-level features are straight, circular and elliptical
edge segments. An edge curve of any shape can be represented by approximating it
with a series of primitive segments. Additional higher-level features represent groupings of
these, such as junctions, groups of adjacent junctions, pairs of parallel segments, and convex
regions. For full details on the derivation of these features, see (Pope, 1995). In the future,
this set could be augmented with other features, such as those based on image derivatives,
color, or texture, but the current features are suited for a wide range of objects.
7.1 Illustrative experiment
The experiment described in this section demonstrates typical performance of the system in
learning to recognize a complex object. The test object is a toy bunny shown in figure 3.
Training images of the bunny were acquired at 5° increments of camera elevation and azimuth
over 25° of elevation and 90° of azimuth.
Feature detection, including edge detection, curve segmentation, and grouping, required
about seconds of CPU time per training image (we believe that much faster grouping
processes are possible, but this was not the focus of the research). Figure 4 depicts some of
the features found in one image, which include 4475 edgels, 81 straight lines and 67 circular
arcs.
During the first phase of clustering, the system divided the training images among 19
clusters. In the second phase, it reassigned two training images that remained the sole members
of their clusters, leaving the 17 clusters shown in figure 5. Because this object's appearance
varies smoothly with changes in viewpoint across much of the viewing range, it is not
surprising that the clusters generally occupy contiguous regions of the viewsphere.
When training images were presented to the system in other sequences, the system produced
different clusterings than that shown in figure 5. However, although cluster boundaries
varied, qualities such as the number, extent, and cohesiveness of the clusters remained
largely unaffected.
As this is a one-time batch operation, little effort was devoted to optimizing efficiency.
Altogether, 19.4 hours of CPU time were required to cluster the training images and to in-
Figure 4: Features of a bunny training image. Shown here are selected features found in
one of the training images (the right image in figure 3). (a) Edgels. (b) Curve
features. (c) L-junction features. (d) Parallel-curve features (depicted by parallel lines), Region
features (depicted by rectangles), and Ellipse features (depicted by circles).
duce a model graph generalization of each cluster.
Figure 5 shows features of the model graph representing cluster C. Ellipses are drawn
for certain features to show two standard deviations of location uncertainty. To reduce clutter
and to give some indication of feature significance, they are drawn only for those features
that were found in a majority of training images. Considerable variation in location
uncertainty is evident. Some L-junction features have particularly large uncertainty and,
consequently, they will be afforded little importance during matching.
Figure 7 reports the results of matching the image graph for test image 1 with each of the
bunny model graphs. For this test, match searches were allowed to examine all alignment
hypotheses. Typically, there were 10-20 hypotheses examined for each pair of model and
image graphs, and about five seconds of CPU time were needed to extend and evaluate each
one. The matches reported here are those achieving the highest match quality measure.
The model graph generalizing cluster D provides the best match with the image graph
(as judged by each match's support measure). This is to be expected as the test image was
acquired from a viewpoint surrounded by that cluster's training images. Moreover, other
model graphs that match the image graph (although not as well) are all from neighbouring
[Figure 5, left: a grid of cluster labels (A through Q) indexed by camera azimuth and elevation.]
Figure 5: On the left are the training image clusters. Seventeen clusters, designated A
through Q, were formed from the 112 training images. Contours delineate the approximate
scope of the view classes associated with some of the clusters. On the right are selected features
of the model graph obtained by generalizing the training images assigned to cluster C.
Each feature is drawn at its mean location. (a) Curve features. (b) L-junction features. (c)
Connected edge features. (d) Parallel curve, Region and Ellipse features.
Figure 6: Bunny test image 1. Left: Image. Right: Match of bunny model graph D with test
image.
MODEL     MATCH WITH TEST IMAGE 1               MATCH WITH TEST IMAGE 2
GRAPH     Correct  Quality  Pairings  Support   Correct  Quality  Pairings  Support
A                  998      164       0.35               862      191       0.34
I                  356      91        0.19               879      186       0.35
O                  151      72        0.14               187      90        0.23
Figure 7: Each row documents the results of matching a model graph with the two image
graphs, those describing bunny test images 1 and 2. Reported for each match are the fol-
lowing. Correct: whether the match correctly identified most features of the object visible
in the image, as judged by the experimenter. Quality: the match's match quality measure,
Pairings: the number of image features paired. Support: the match's support measure.
regions of the viewsphere. Image features included in the best match, that with model graph
D, are shown in figure 6.
For test image 2, shown in figure 8, additional clutter was present. Figure 7 reports the
result of matching the image graph of test image 2 with each of the bunny model graphs.
This time each match search was limited to 20 alignment hypotheses and about six seconds
of CPU time were needed to extend and evaluate each hypothesis. Due to the additional
clutter in the image, only model graph D correctly matched the image.
This section has demonstrated the system's typical performance in learning models of
complex objects, and in using those models to accomplish recognition. Figure 9 shows
recognition of an even more complex object with significant occlusion.
7.2 Additional Experiments
In this section, some additional experiments are briefly described in order to illustrate certain
noteworthy aspects of the system's behaviour.
Figure 8: Left: Bunny test image 2 with clutter and occlusion. Right: Match of bunny model
graph D with test image 2.
Figure 9: Example showing recognition of a complex object with substantial occlusion.
Left: One of several training images. Right: Image curve features included in the match.
7.2.1 Effects of feature distribution
When some regions of an object are much richer in stable features than others, those regions
can dominate the matching processes that underlie learning and recognition. For example,
most features of the shoe shown in figure 10 are concentrated near its centre. Moreover, as
the shoe rotates about its vertical axis, features near the shoe's centre shift by small amounts
while those near its heel and toe undergo much larger changes. Thus, when training images
of the shoe are clustered during model learning, the many stable features near the shoe's
centre are used to match training images over a large range of rotation, while the few variable
features defining the heel and toe are dropped as being unreliable. The result is a model
graph, like that shown in figure 11 (left), with relatively few features defining the shoe's
extremities.
If the dropped features are deemed important, we can encourage the system to retain
them in the models it produces by setting a higher standard for acceptable matches. For
example, requiring a higher Support measure ensures that matches will include more of
Figure 10: Shoe training images. Images were acquired at 6° intervals over camera elevations
of 0° to 12° and azimuths of 0° to 60°. Shown here are three of the 33 images.
Figure 11: Left: Curve features for a model that generalizes 14 training images (support
threshold of 0.5). Right: Model that generalizes 7 training images (support threshold of
0.6).
an object's features. Thus, fewer of those features will be judged unreliable and more will
be retained by the model. Figure 11 (right) shows a model graph that was produced with
a Support threshold of 0.6 rather than the usual value of 0.5; it provides somewhat more
accurate representation of the shoe's heel and toe. Figure 12 shows this model graph being
used for recognition.
7.2.2 Articulate objects
Just as the system will use multiple views to model an object's appearance over a range of
viewpoints, it will use additional views to model a flexible or articulate object's appearance
over a range of configurations. In general, the number of views needed increases exponen-
Figure 12: Shoe recognition example. Left: Test image. Right: Image curve features included
in a match to shoe model.
Figure 13: Boat training images. Images were acquired over ranges of camera elevation,
camera azimuth, and sail angle. Shown here are three of the 120 images.
Figure 14: Boat model graphs. Shown here are curve features of model graphs that have
been generalized from two clusters of boat training images.
tially with the number of dimensions along which the object's appearance may vary. This
could presumably be addressed by a part-based modeling and clustering approach that separated
the independent model parts.
The toy boat shown in figure 13 has a sail that rotates about the boat's vertical axis.
Training images were acquired at camera elevations of 0
of angles of 0 . The system's learning procedure clustered
these 120 images to produce 64 model views. Features of two of the model graphs are
shown in figure 14. In comparison, only 13 views were needed to cover the same range of
viewpoints when the sail angle was kept fixed at 0 ffi .
8 Conclusion
We have presented a method of modeling the appearance of objects, of automatically acquiring
such models from training images, and of using the models to accomplish recognition.
This method can handle complex, real-world objects. In principle, it can be used to recognize
any object by its appearance, provided it is given a sufficient range of training images,
sufficient storage for model views, and an appropriate repertoire of feature types.
The main features of the method are as follows:
(a) Objects are modeled in terms of their appearance, rather than shape, to avoid any need
to model the image formation process. This allows unusually complex objects to be
modeled and recognized efficiently and reliably.
(b) Appearance is described using discrete features of various types, ranging widely in
scale, complexity, and specificity. This repertoire can be extended considerably, still
within the framework of the approach, to accommodate a large variety of objects.
(c) An object model represents a probability distribution over possible appearances of the
object, assigning high probability to the object's most likely manifestations. Thus,
learning an object model from training images amounts to estimating a distribution
from a representative sampling of that distribution.
(d) A match quality measure provides a principled means of evaluating a match between
a model and an image. It combines probabilities that are estimated using distributions
recorded by the model. The measure leads naturally to an efficient matching proce-
dure, probabilistic alignment, used to accomplish both learning and recognition.
(e) The model learning procedure has two components. One component identifies clusters
of training images that ought to correspond to distinct model views. It does so by
maximizing a measure that, by application of the minimum description length prin-
ciple, combines the qualities of model simplicity and accuracy. The second component
induces probabilistic generalizations of the images within each cluster. Working
together, the two components construct a model by clustering training images, and,
within each cluster, generalizing the images to form a model view.
8.1 Topics for further research
Modeling a multifarious or highly flexible object with this approach may require an impractically
large number of model views. For these objects, a more effective strategy may be first
to recognize parts, and then to recognize the whole object as a configuration of those parts.
The present method could perhaps be extended to employ this strategy by assigning parts
the role of high level features.
Speed in both learning and recognition tasks could be greatly improved by the addition
of an indexing component, which would examine image features and suggest likely model
views for the matching procedure to consider. Existing indexing methods (Beis and Lowe,
1999) could be used, with the attribute vectors of high-level features serving as index keys.
Of course, more efficient methods for feature detection would also be important.
Extending the feature repertoire would allow the method to work more effectively with
a broader class of objects. It would be useful to have features representing additional groupings
of intensity edges, such as symmetric arrangements and repeated patterns, and features
representing local image regions with color or texture properties.
Some challenging issues remain regarding how to organize a large collection of acquired
models for greater efficiency. Savings in both storage and recognition time could be achieved
by identifying parts or patterns common to several objects, factoring those parts out of their
respective models, and recognizing the parts individually prior to recognizing their aggre-
gates. Associating new feature types with some of the common parts and patterns would
provide a means of automatically extending the feature repertoire and adapting it to the objects
encountered during training. Furthermore, the same techniques of identifying and abstracting
parts could be used to decompose flexible objects into simpler components, allowing
those objects to be modeled with fewer views.
Acknowledgments
The authors would like to thank Jim Little, Bob Woodham, and Alan Mackworth for their
ongoing comments on this research. This research was sponsored by the Natural Sciences
and Engineering Research Council of Canada (NSERC) and through the Institute for Robotics
and Intelligent Systems (IRIS) Network of Centres of Excellence.
--R
HYPER: A new approach for the recognition and positioning of two-dimensional objects
Indexing without invariants in 3D object recognition.
Factorization Methods for Discrete Sequential Estimation.
Recognizing and locating partially visible objects: The local- feature-focus method
Geometric Aspects of Visual Object Recognition.
Symbolic reasoning among 3-D models and 2-D images
A probabilistic approach to object recognition using local photometryand global geometry.
Knowledge acquisition via incremental conceptual clustering.
Pose estimation by fusing noisy data of different dimensions.
IEEE Trans.
Recognizing solid objects by alignment with an image.
International Journal of Computer Vision 5(2)
Perceptual Organization and Visual Recognition.
Artificial Intelligence
Visual learning and recognition of 3D objects from appearance
A cubist approach to object recognition.
Computing exact aspect graphs of curved objects: Algebraic surfaces
Learning to Recognize Objects in Images: Acquiring and Using Probabilistic Models of Appearance.
Vista: A software environment for computer vision research.
A universal prior for integers and estimation by minimum description length.
Annals of Statistics 11(2)
Machine perception of three-dimensional solids
Object recognition using multidimensional receptive field histograms
Local grayvalue invariants for image retrieval.
Color indexing.
Recognition by linear combination of models.
The Total Least Squares Problem: Computational Aspects and Analysis
Statistical approaches to feature-based object recognition
--TR
HYPER: a new approach for the recognition and positioning of two-dimensional objects
Three-dimensional object recognition from single two-dimensional images
Localizing overlapping parts by searching the interpretation tree
Recognizing solid objects by alignment with an image
Recognition by Linear Combinations of Models
Color indexing
Geometric aspects of visual object recognition
Computing exact aspect graphs of curved objects
Pose Estimation by Fusing Noisy Data of Different Dimensions
Visual learning and recognition of 3-D objects from appearance
Statistical Approaches to Feature-Based Object Recognition
Local Grayvalue Invariants for Image Retrieval
Indexing without Invariants in 3D Object Recognition
Perceptual Organization and Visual Recognition
Knowledge Acquisition Via Incremental Conceptual Clustering
A Probabilistic Approach to Object Recognition Using Local Photometry and Global Geometry
Object Recognition Using Multidimensional Receptive Field Histograms
Learning to recognize objects in images
A Cubist Approach to Object Recognition
--CTR
Rui Nian , Guangrong Ji , Wencang Zhao , Chen Feng, Probabilistic 3D object recognition from 2D invariant view sequence based on similarity, Neurocomputing, v.70 n.4-6, p.785-793, January, 2007
Wei Zhang , Jana Kosecka, Hierarchical building recognition, Image and Vision Computing, v.25 n.5, p.704-716, May, 2007
David G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, v.60 n.2, p.91-110, November 2004
Manuele Bicego , Umberto Castellani , Vittorio Murino, A hidden Markov model approach for appearance-based 3D object recognition, Pattern Recognition Letters, v.26 n.16, p.2588-2599, December 2005
Christian Eckes , Jochen Triesch , Christoph von der Malsburg, Analysis of cluttered scenes using an elastic matching approach for stereo images, Neural Computation, v.18 n.6, p.1441-1471, June 2006
Fred Rothganger , Svetlana Lazebnik , Cordelia Schmid , Jean Ponce, 3D Object Modeling and Recognition Using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints, International Journal of Computer Vision, v.66 n.3, p.231-259, March 2006
Marcus A. Maloof , Ryszard S. Michalski, Incremental learning with partial instance memory, Artificial Intelligence, v.154 n.1-2, p.95-126, April 2004
M. A. Maloof , P. Langley , T. O. Binford , R. Nevatia , S. Sage, Improved Rooftop Detection in Aerial Images with Machine Learning, Machine Learning, v.53 n.1-2, p.157-191, October-November | object recognition;model indexing;appearance representation;visual learning;model-based vision;clustering |
365936 | On the Fourier Properties of Discontinuous Motion. | Retinal image motion and optical flow as its approximation are fundamental concepts in the field of vision, perceptual and computational. However, the computation of optical flow remains a challenging problem as image motion includes discontinuities and multiple values mostly due to scene geometry, surface translucency and various photometric effects such as reflectance. In this contribution, we analyze image motion in the frequency space with respect to motion discontinuities and translucence. We derive the frequency structure of motion discontinuities due to occlusion and we demonstrate its various geometrical properties. The aperture problem is investigated and we show that the information content of an occlusion almost always disambiguates the velocity of an occluding signal suffering from the aperture problem. In addition, the theoretical framework can describe the exact frequency structure of Non-Fourier motion and bridges the gap between Non-Fourier visual phenomena and their understanding in the frequency domain. | Introduction
A fundamental problem in processing sequences of
images is the computation of optical flow, an approximation to image motion defined as the projection
of velocities of 3D surface points onto the
imaging plane of a visual sensor. The importance
of motion in visual processing cannot be under-
stated: in particular, approximations to image
motion may be used to estimate 3D scene properties
and motion parameters from a moving visual
sensor [21, 30, 31, 42, 51, 50, 1, 5, 38, 22, 54, 56,
34, 20, 16, 23], to perform motion segmentation
[7, 40, 45, 36, 47, 14, 25, 8, 2, 46, 15], to compute
the focus of expansion and time-to-collision [44,
41, 48, 24, 49, 9], to perform motion-compensated
image encoding [10, 13, 35, 37, 39, 55], to compute
stereo disparity [3, 12, 26, 28], to measure blood flow and heart-wall motion in medical imagery
[43], and, recently, to measure minute amounts
of growth in corn seedlings [6, 29].
1.1. Organization of Paper
This contribution addresses the problem of multiple
image motions arising from occlusion and
translucency phenomena. We present a theoretical
framework for discontinuous optical flow in the Fourier domain. The concept of image velocity as a geometric function is described in Section 1.
Section 2 is an analysis of occlusion in Fourier
space with a constant model of velocity. Our
approach focuses on the frequency structure of
occluding surfaces and the theoretical results are
constructed incrementally. For instance, a simple
model of velocity is used to develop the structure
of occlusion with sinusoidal signals which are
then generalized to arbitrary signals. These theoretical
results demonstrate that occlusion may be
differentiated from translucency and the motions
associated with both the occluding and occluded
surfaces can be discriminated.
Section 3 is an investigation of the aperture
problem and degenerate 1 signals, as they appear
in the theoretical framework. For example, it is
shown that the full velocity of a degenerate signal
is almost always computable at the occlusion.
Section 4 is a study of related issues such
as translucency phenomena, Non-Fourier motion,
generalized occlusion boundaries and phase shifts.
Numerical experiments supporting the framework
are presented. Results obtained with sets of sinusoidal
signals created synthetically are compared
with their corresponding theoretical predictions.
Section 5 summarizes our results.
1.2. Contribution
The motivation for the theoretical framework emanates
from the observation that occlusion and
translucency in the context of computing optical flow constitute difficult challenges and threaten its precise computation. The theoretical results
cast light on the exact structure of occlusion and
translucency in the frequency domain.
The results are essentially theoretical and stated
in the form of Theorems and Corollaries. Relevant
numerical experiments which support the
theoretical results are presented. In addition, this
contribution bridges what is seen as an important
gap between Non-Fourier models of visual stimuli
and optical flow methods in Computer Vision.
In fact, Non-Fourier visual stimuli, to which belong
translucency and occlusion effects, have been
studied mainly with respect to the motion percept
these stimuli elicit among human subjects
[11, 52, 53]. However, more recently, it has been
conjectured that a viable computational analysis
of Non-Fourier motion could be carried out with
Fourier analysis, since many Non-Fourier stimuli
turn out to have simple frequency characterizations
[19]. The results presented herein extend the
concept of Non-Fourier stimuli such as occlusion
and translucency from being not at all explained
by its Fourier characteristics to the establishment
of exact frequency models of visual stimuli exhibiting
occlusions and translucencies.
As a first attempt to understand occlusion, the
simplest set of controllable parameters were used,
such as the structure of occlusion boundaries and
the number of distinct frequencies for representing
the occluding and occluded surfaces. A constant
model of velocity was also used and no signal deformations
(such as those created by perspective
projection) were permitted. These preliminary results
are extended to image signals composed of an
arbitrary number of discrete frequencies. Dirichlet
conditions are hypothesized for each signal, thus allowing them to be expanded as complex exponential series.
The potential use of the information-content
of an occlusion boundary is outlined. Occluding
boundaries contain a wealth of information that is
not exploited by conventional optical flow frameworks, due to a theoretical void. It is shown that
a degenerate occluding signal exhibiting a linear
spectrum is supplemented by the linear orientation
of its occluding boundary. These two spectra
almost always yield the full velocity of an occluding
signal suffering from the aperture problem.
The structure of occlusion when both signals are
degenerate is also shown. It is demonstrated that
this particular case collapses to a one-dimensional
structure.
The Corollaries show that additive translucency
phenomena may be understood as a special case of
the theoretical framework. In addition, the velocities
associated with both the occluding and occluded
signals may be identified as such, without
the need of scenic information such as depth.
1.3. Image Motion
Image motion is expressed in terms of the 3D
motion parameters of the visual sensor and the
3D environmental points of the scene: let P = (X, Y, Z)^T be an environmental point, Ω = (ω_x, ω_y, ω_z)^T and T = (T_x, T_y, T_z)^T be the visual
Fig. 1. The geometry of the visual sensor. Ω and T are the instantaneous rotation and translation of the visual sensor; p(t) is the perspective projection of P(t) onto the imaging surface.
sensor's respective instantaneous rates of change in rotation and translation, and p = P/(P^T ẑ) the perspective projection of P onto the imaging surface (the focal length of the sensor is assumed to be 1), where ẑ is a normalized vector along the line-of-sight axis Z. The setup is shown in Figure 1. The instantaneous 3D velocity of P is given by
V = dP/dt = -(Ω × P + T).   (1)
The relationship between the 3D motion parameters and 2D velocity that results from the projection of V onto the image plane can be obtained by temporally differentiating p:
v = ( X'/Z - X Z'/Z^2 , Y'/Z - Y Z'/Z^2 )^T,   (2)
where primes denote temporal derivatives.
Using X' = -(T_x + ω_y Z - ω_z Y), Y' = -(T_y + ω_z X - ω_x Z) and Z' = -(T_z + ω_x Y - ω_y X) for substitution in (2), one obtains the image velocity equation [31]:
v_x = (x T_z - T_x)/Z + ω_x xy - ω_y (1 + x^2) + ω_z y,
v_y = (y T_z - T_y)/Z + ω_x (1 + y^2) - ω_y xy - ω_z x.   (3)
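As a quick numerical cross-check of the reconstructed form of (3) above (an added illustration; the motion parameters, the point P and the step size are arbitrary choices), the following Python/NumPy sketch compares the closed-form image velocity against a finite-difference estimate obtained by projecting P(t) as it evolves under (1).

```python
import numpy as np

# Assumed 3D motion parameters and an environmental point (illustrative values).
Omega = np.array([0.01, -0.02, 0.03])   # rotational rates
T     = np.array([0.1, 0.05, -0.2])     # translational rates
P     = np.array([1.0, -0.5, 4.0])      # environmental point, Z = 4

def project(P):
    return P[:2] / P[2]                 # focal length 1

# Image velocity from the closed form (3).
x, y = project(P)
Z = P[2]
A = np.array([[-1.0, 0.0, x],
              [0.0, -1.0, y]])
B = np.array([[x*y, -(1 + x*x), y],
              [1 + y*y, -x*y, -x]])
v_closed = A @ T / Z + B @ Omega

# Image velocity by numerically differentiating the projection of P(t),
# with dP/dt = -(Omega x P + T) as in (1).
dt = 1e-6
P_next = P + dt * (-(np.cross(Omega, P) + T))
v_numeric = (project(P_next) - project(P)) / dt

print(v_closed, v_numeric)   # the two estimates agree to first order
```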
Hence, image motion is a purely geometric quantity and, consequently, for optical flow to be exactly image motion, a number of conditions have to be satisfied. These are: a) uniform illumination, b) Lambertian surface reflectance and c) pure translation parallel to the image plane. Realistically, these conditions are never entirely satisfied in scenery. Instead, it is assumed that these conditions hold locally in the scene and therefore locally on the image plane. The degree to which these conditions are satisfied partly determines the accuracy with which optical flow approximates image motion.
1.4. Multiple Motions
Given an arbitrary environment and a moving visual
sensor, the motion field generated onto the
imaging plane by a 3D scene within the visual
eld is represented as function (3) of the motion
parameters of the visual sensor. Discontinuities
in image motion are then introduced in (3) whenever
the depth Z is other than single-valued and
differentiable 2. The occurrence of occlusion causes
the depth function to exhibit a discontinuity,
whereas translucency leads to a multiple-valued
depth function.
1.5. Models of Optical Flow
Generally, the optical flow function may be expressed as a polynomial in some local coordinate system of the image space of the visual sensor. It is assumed that the center of the neighborhood coincides with the origin of the local coordinate system. In this case, we may write the Taylor series expansion of the i-th velocity about the origin as:
v_i(x, t) = v_i(0, 0) + (x^T, t) ∇v_i |_{x=0, t=0} + higher-order terms.
However, we simply adopt in what follows Fleet and Jepson's [18] constant model of optical flow, denoted as v_i(x, t) = a_i, where a_i is now the velocity vector. Hence, a 2D intensity profile I_0 translating with velocity a_i yields the following spatiotemporal image intensity:
I(x, t) = I_0(x - a_i t).   (5)
We use a negative translational rate in (5) without loss of generality and for mere mathematical convenience.
1.6. Signal Translation in the Frequency Domain
Consider a signal I_i(x) translating at a constant velocity a_i, I_i(x - a_i t). For this signal, the Fourier transform of the optical flow constraint equation is obtained with the differentiation property as:
F{∇I_i(x)^T a_i + I_it} = i Î_i(k) (k^T a_i + ω) δ(k^T a_i + ω) = 0,   (7)
where i is the imaginary number, Î_i(k) is the Fourier transform of I_i(x) and δ(k^T a_i + ω) is a Dirac delta function. Expression (7) yields a constraint on velocity, k^T a_i + ω = 0. Similarly, the Fourier transform of a translating image signal I_i(x, t) is obtained with the shift property as:
Î_i(k, ω) = ∫∫ I_i(x - a_i t) e^{-i(k^T x + ωt)} dx dt = Î_i(k) δ(k^T a_i + ω),   (8)
which also yields the constraint k^T a_i + ω = 0. Hence, (7) and (8) demonstrate that the frequency analysis of image motion is in accordance with the motion constraint equation [18]. It is also observed that k^T a_i + ω = 0 describes, in the frequency domain, an oriented plane passing through the origin, with normal vector a_i descriptive of full velocity, onto which the Fourier spectrum of I_i(x) lies.
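As an illustration of this constraint (an example added here, not one of the paper's experiments; the signal, its frequency and its speed are arbitrary choices), the following Python/NumPy sketch builds a 1D signal translating at speed a, takes its space-time FFT and verifies that the dominant frequency components satisfy k·a + ω ≈ 0.

```python
import numpy as np

# Spatiotemporal sampling of a translating 1D signal I(x, t) = I0(x - a*t).
N, T = 128, 128          # spatial and temporal samples
a = 1.0                  # assumed speed (pixels per frame)
x = np.arange(N)
t = np.arange(T)
X, Tm = np.meshgrid(x, t, indexing='ij')   # X: space, Tm: time

I0 = lambda s: np.cos(2 * np.pi * 8 * s / N)     # single spatial frequency
I = I0(X - a * Tm)                               # translating profile

# 2D FFT over (x, t); axes correspond to (k, omega) in cycles/sample.
F = np.fft.fftshift(np.fft.fft2(I))
k = np.fft.fftshift(np.fft.fftfreq(N))
w = np.fft.fftshift(np.fft.fftfreq(T))
K, W = np.meshgrid(k, w, indexing='ij')

# The power should concentrate where k*a + omega = 0.
peaks = np.abs(F) > 0.5 * np.abs(F).max()
print("max |k*a + omega| at spectral peaks:",
      np.abs(K[peaks] * a + W[peaks]).max())
```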
1.7. Related Literature
Traditionally, motion perception has been equated
with orientation of power in the frequency domain.
Many optical flow methods use what Chubb
and Sperling term the Motion-From-Fourier-
Components (MFFC) principle [11] in which the
orientation of the plane or line through the origin
of the frequency space that contains most of the
spectral power gives the rate of image translation.
The MFFC principle states that for a moving
stimulus, its Fourier transform has substantial
power over some regions of the frequency domain
whose points spatiotemporally correspond to sinusoidal
gratings with drift direction consonant with
the perceived motion [11]. In addition, current
models of human perception involve some frequency
analysis of the imagery, such as band-pass l-
tering and similar processes. However, some classes
of moving stimuli which elicit a strong percept
in subjects fail to show a coherent spatiotemporal
frequency distribution of their power and cannot
be understood in terms of the MFFC principle.
Examples include drift-balanced visual stimuli
[11], Fourier and Non-Fourier plaid superpositions
[52], amplitude envelopes, sinusoidal beats
and various multiplicative phenomena [19]. By
drift-balanced it is meant that a visual stimulus
with two (leftward and rightward, for example) or
more dierent motions shows identical contents of
Fourier power for each motion and therefore, according
to the MFFC principle, should not elicit
a coherent motion percept. However, some classes
of drift-balanced stimuli dened by Chubb and
Sperling do elicit strong coherent motion percepts,
contrary to the predictions of the usual MFFC
model.
Sources of Non-Fourier motion also include the
motion of texture boundaries and the motion of
motion boundaries. For instance, transparency as
considered by Fleet and Langley [19] is an example
of Non-Fourier motion, as transparency causes
the relative scattering of Fourier components
away from the spectrum of the moving stimuli. In
addition, occlusion, modeled as in (13), is another
example of Non-Fourier motion which is closely
related to the Theta motion stimuli of Zanker [53],
where the occlusion window moves independently
from both the foreground and the background,
thus involving three independent velocities.
It has been observed by Fleet and Langley that
many Non-Fourier motion stimuli have simple
characterizations in the frequency domain, namely
power distributions located along lines or planes
which do not contain the origin of the frequency
space, as required by the MFFC idealization
[19]. Occlusion and translucency being among
those Non-Fourier visual stimuli, we develop their
exact frequency representations, state their properties
with respect to image motion (or optical flow), consider the aperture problem and include
additive translucency phenomena within the theoretical
framework.
1.8. Methodology
To analyze the frequency structure of image signals
while preserving representations that are as
general as possible, an effort is made to only pose
those hypotheses that would preserve the generality
of the analysis to follow. We describe the
assumptions and the proof techniques with which
the theoretical results were obtained.
Image Signals The geometry of visual scenes
under perspective projection generally yields
complex image signals. Conceptually, assumptions
concerning scene structure should
not be made, as they constrain the geometry
of observable scenes. In addition, any measured
physical signal, such as image intensities, satisfies Dirichlet conditions. Such signals admit a finite number of finite discontinuities, are absolutely integrable and may
be expanded into complex exponential series.
Dirichlet conditions constitute the sum of assumptions
made on image signals.
Velocity On a local basis, constant models of
signal translation may be adequate to describe
velocity. However, linear models admit an increased
number of deformations, such as signal
dilation. Hence, the extent used for signal
analysis may be larger with linear models. We
considered a constant model of velocity, leaving
deformations of higher order for further
analysis.
Occluding Boundaries Object frontiers and
their projection onto the imaging plane are
typically unconstrained in shape and are difficult
to model on a large spatial scale. Simpler,
local models appear to be more appropriate.
The framework includes occlusion boundaries
as locally straight edges, represented with step
functions. This hypothesis only approximates
reality and limits the analysis to local image
regions. However, we outline in which way
this hypothesis can be relaxed to include occlusion
boundaries of any shape.
Proof Techniques The Theorems and their
Corollaries established in this analysis emanate
from a general approach to modeling
visual scenes exhibiting occlusion discontinuities
or translucency. An equation which describes
the spatio-temporal pattern of the superposition
of a background and an occluding
signal is established [17], in which a characteristic
function describing the position of an
occluding signal within the imaging space of
the visual sensor is defined:
U(x) = 1 if x is within the occluding signal, and U(x) = 0 otherwise,   (9)
and two image signals I_1(x) and I_2(x), corresponding to the occluding and occluded signals respectively, are defined to form the overall signal pattern:
I(x, t) = U(x - a_1 t) I_1(x - a_1 t) + [1 - U(x - a_1 t)] I_2(x - a_2 t),   (10)
where a_1 and a_2 denote the occluding and occluded velocities, respectively. Note that the characteristic function describing the object has the same velocity as its corresponding
intensity pattern I 1 (x). In (10) are inserted
the hypotheses made on its various components
and the structure of occlusion in the frequency
domain is developed. That is to say,
signal structures are expanded into complex exponential series, such as:
I_i(x) = Σ_n c_in e^{i x^T N k_i},   (11)
where I_i(x) is the i-th intensity pattern, c_in are complex coefficients, k_i are fundamental frequencies, n = (n_1, n_2)^T are integers and N = diag(n). Occlusion boundaries become locally straight edges, represented with step functions such as:
U(x) = 1 if x^T n_1 >= 0 and U(x) = 0 otherwise,   (12)
where n_1 is a vector normal to the tangent of
the occluding boundary. In addition, degenerate
image signals under occlusion are investigated, thus describing the aperture problem
in the context of the framework. Whenever
technically possible, the theoretical results
were compared with numerical experiments
using Fast Fourier Transforms operating on
synthetically generated image sequences.
Relevance of Fourier Analysis Many algorithms
operating in the Fourier domain for
which a claim of multiple motions capability is
made have been developed [27]. However, this
is performed without a complete knowledge
of the frequency structure of occlusion phe-
nomena. In addition, Non-Fourier spectra,
including occlusion and translucency effects
have been conjectured to have mathematically
simple characterizations in Fourier space [19].
Consequently, the use of Fourier analysis as
a local tool is justified as long as one realizes
that it constitutes a global idealization of local
phenomena. In that sense, Fourier analysis is
used as a local tool whenever Gabor filters,
wavelets or local Discrete Fourier Transforms
are employed for signal analysis.
Experimental Technique Given the theoretical
nature of this contribution, the purpose
of the numerical experiments is to verify the
validity of the theoretical results. In order to
accomplish this, the frequency content of the
image signals used in the experiments must
be entirely known to the experimenter, thus
forbidding the use of natural image sequences.
In addition, image signals with single frequency
components are used in order to facilitate
the interpretation of experiments involving
3D Fast Fourier transforms. The use of
more complex signals impedes a careful examination
of the numerical results and does not
extend the understanding of the phenomena
under study in any particular way.
2. Spectral Structure of Occlusion
The analysis begins with the consideration of a
simple case of occlusion consisting of two translating
sinusoidal signals. These preliminary results
are then generalized to arbitrary signals and the
aperture problem is examined.
2.1. Sinusoidal Image Signals
The case in which two sinusoidals play the role of the object and the background is first considered. Let I_i(x) be an image signal translating with velocity v_i(x, t) = a_i. Its Fourier transform is Î_i(k, ω) = Î_i(k) δ(k^T a_i + ω). Let I_1(x) be occluding another image signal I_2(x), with respective velocities v_1(x, t) and v_2(x, t). The resulting occlusion scene can then be expressed as:
I(x, t) = U(x - a_1 t) I_1(x - a_1 t) + [1 - U(x - a_1 t)] I_2(x - a_2 t),   (13)
where U(x) is (12). The Fourier transform of (13) is obtained by convolving Û(k), the Fourier transform of the step function U(x), with the spectra of both signals along their respective constraint planes.
THEOREM 1. Let I_1(x) and I_2(x) be cosine functions with respective angular frequencies k_1 and k_2, translating with velocities a_1 and a_2, and let I_1(x - a_1 t) occlude I_2(x - a_2 t) behind a straight boundary moving with the occluding signal. The frequency spectrum of the occlusion consists of: a) Dirac delta functions located at the spatiotemporal frequencies of the occluding and occluded signals, within their respective constraint planes k^T a_i + ω = 0; and b) a distortion term in which the spectrum of the occluding boundary is convolved with each of those frequencies, forming lines parallel to the constraint plane of the occluding signal, where a = a_1 - a_2, n_1 is a normal vector perpendicular to the occluding boundary and n_1^⊥ is its negative reciprocal (n_1^⊥ = (n_1y, -n_1x)^T).
Theorem 1 is derived to examine occlusion with
the simplest set of parameters, such as the form of
occlusion boundaries, the number of distinct frequencies
required to represent both the occluding
and occluded image signals, and a constant model
of velocity. Even with this constrained domain of
derivation, a number of fundamental observations
are made, such as: the occlusion in frequency space
is formed of the Fourier transform of a step
function convolved with every existing frequency
of both the occluding and occluded sinusoidal signals
and, the power content of the distortion term
is entirely imaginary, forming lines of decreasing
power which do not contain the origin, around
the frequencies of both the occluding and occluded
signals. Their orientation is parallel to the spectrum
of the occluding signal, and the detection of
their orientation allows the occluding velocity to be identified, leaving the occluded velocity to be interpreted as such.
We performed a series of experiments to graphically
demonstrate the composition of a simple occlusion
scene. To simplify the interpretation of
the experiments, we used 1D sinusoidal signals
composed of single frequencies. In addition, the
signals are Gaussian-windowed in order to avoid
the Gibbs phenomenon when computing their Fast
Fourier Transforms (FFTs). Figure 2a), b) and c)
show the components of a simple occlusion scene,
pictured in 2d). Figure 2a) is the occluding signal
with spatial frequency 2
and velocity 1:0, such
that
I 1 (x;
and in 2b) is the occluded signal with spatial frequency8 and velocity 1:0, yielding
I 2 (x;
The occluding boundary in Figure 2c) is a 1D step function, written as U(x) = 1 for x >= 0 and U(x) = 0 otherwise,   (19)
and translates with a velocity identical to that of
I 1 .
The resulting occlusion scene in Figure 2d) is
constructed with the following 1D version of (13):
I(x, t) = U(x - v_1 t) I_1(x - v_1 t) + [1 - U(x - v_1 t)] I_2(x - v_2 t),   (20)
where I_1 is (17), I_2 is (18) and U is (19). Figures
2e) through h) show the amplitude spectra
of figures 2a) through d) respectively, where it is easily observed that the spectrum of the step function
(19) is convolved with each frequency of both
sinusoidals. Further, Theorem 1 predicts Fourier
spectra such as 2h) in their entirety as is demonstrated
by the experiments in section 2.3.
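To make the construction above concrete, the following NumPy sketch (an added illustration, not one of the original experiments; the frequencies, speeds and window width are arbitrary choices) builds a Gaussian-windowed 1D occlusion scene of the form (20) and computes its space-time FFT, whose amplitude spectrum shows the peaks of both sinusoids together with the boundary spectrum convolved around them.

```python
import numpy as np

# Occlusion scene I(x,t) = U(x - v1*t)*I1(x - v1*t) + (1 - U(x - v1*t))*I2(x - v2*t)
N, T = 256, 64                 # spatial and temporal samples (assumed values)
v1, v2 = -1.0, 1.0             # occluding and occluded speeds (assumed)
k1, k2 = 2*np.pi/8, 2*np.pi/16 # angular frequencies of the two sinusoids (assumed)

x = np.arange(N) - N // 2
t = np.arange(T)
X, Tm = np.meshgrid(x, t, indexing='ij')

U  = (X - v1 * Tm >= 0).astype(float)      # translating step function (occluding boundary)
I1 = np.cos(k1 * (X - v1 * Tm))            # occluding sinusoid
I2 = np.cos(k2 * (X - v2 * Tm))            # occluded sinusoid
scene = U * I1 + (1.0 - U) * I2

# Gaussian window in x to limit Gibbs ringing in the FFT, as in the experiments.
window = np.exp(-(X / (N / 6.0)) ** 2)
F = np.fft.fftshift(np.fft.fft2(scene * window))

print("amplitude spectrum shape:", F.shape)
print("peak magnitude:", np.abs(F).max())
```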
2.2. Generalized Image Signals
For this analysis to gain generality, we need to
find a suitable set of mathematical functions to
represent physical quantities such as image signals
that lend themselves to the analysis to follow and
which do not impose unnecessary hypotheses on
the structure of those signals.
Fig. 2. (top): The composition of a simple 1D occlusion scene. a) The occluding sinusoidal signal. b) The occluded sinusoidal signal. c) The translating step function used to create the occlusion scene. d) The occlusion as a combination of a), b) and c). (center): Image plots of amplitude spectra and (bottom): amplitude spectra as 3D graphs.
For this purpose we hypothesize that image signals satisfy Dirichlet conditions in the sense that for any interval x_1 <= x <= x_2, the function f(x) representing the signal must be single-valued, have a finite number of maxima and minima and a finite number of finite discontinuities. Finally, f(x) should be absolutely integrable in such a way that, within the interval, we obtain
∫ from x_1 to x_2 of |f(x)| dx < ∞.
In addition, any function representing a physical quantity satisfies Dirichlet conditions. Hence, those conditions can be assumed for visual signals without loss of generality and, in this context, the complex exponential series expansion, or Fourier series, converges uniformly to f(x).
Theorem 2 generalizes Theorem 1 from sinusoidal
to arbitrary signals. Theorems 1 and 2
introduce the approximation of occluding boundaries
with step functions and, as surfaces of any
shape may be imaged, the forms of their boundaries
are typically unconstrained. On a local basis,
however, as long as the spatial extent of analy-
THEOREM 2. Let I_1(x) and I_2(x) be 2D functions satisfying Dirichlet conditions such that they may be expressed as complex exponential series expansions:
I_1(x) = Σ_n c_1n e^{i x^T N k_1}  and  I_2(x) = Σ_n c_2n e^{i x^T N k_2},
where n = (n_1, n_2)^T are integers, x are spatial coordinates, k_1 and k_2 are fundamental frequencies and c_1n and c_2n are complex coefficients. Let I_1(x - a_1 t) occlude I_2(x - a_2 t) and the occluding boundary be represented by U(x - a_1 t), where n_1 is a vector normal to the occluding boundary. Then the frequency spectrum of the occlusion is composed of the spectra of both signals, located within their respective constraint planes k^T a_i + ω = 0, and of a distortion term in which the spectrum of the occluding boundary is convolved with every discrete frequency of both signals, along lines parallel to the constraint plane of the occluding signal, where a = a_1 - a_2.
sis remains sufficiently small, the approximation
of occluding boundaries as straight-edged lines is
sufficient and greatly simplifies the derivation of
the results. Also for simplicity, a constant model
of velocity is adopted, which is thought of as a
valid local approximation of reality [4, 32]. How-
ever, the constraint on the shape of the occluding
boundary may be removed while preserving the
validity of most of the theoretical results, as we
later demonstrate. As expected, the sum of properties
identified in Theorem 1 hold for Theorem
2. For instance, it is found that the Fourier spectrum
of the occluding boundary is convolved with
every existing frequency of both the occluding and
occluded signals in a manner consonant with its
velocity. That is to say, its spectral orientation is
descriptive of the motion of the occluding signal.
Hence we state the following corollary:
COROLLARY 1. Under an occlusion phe-
nomenon, the velocities of the occluding and occluded
signals can always be identified as such.
Under occlusion, the spectral orientation of the
occluding boundary is parallel to the plane descriptive
of the occluding velocity and detecting
the spectral orientation of the boundary amounts to identifying the occluding velocity, leaving the occluded
velocity to be considered as such.
Figure 3 demonstrates the composition of a simple
2D occlusion scene and the Fourier spectra of
its components. Figure 3a) is the occluding signal
with spatial frequency ( 2
1:0; 1:0) such that
I 1 (x;
and Figure 3b) is the occluded signal with spatial
frequency
8 ) and velocity (1:0; 1:0), yielding
I 2 (x;
Fig. 3. (top): The composition of a simple 2D occlusion scene. a) The occluding sinusoidal signal. b) The occluded sinusoidal signal. c) The step function used to create the occlusion scene. d) The occlusion as a combination of a), b) and c). (bottom) e) through h): Image plots of corresponding amplitude spectra.
The occluding boundary in Figure 3c) is a 2D
step function identical to (12) and translates with
a velocity which equals that of I 1 , the occluding
signal. The resulting occlusion scene in Figure
3d) is constructed with (13). Figures 3e) through h) show the 3D amplitude spectra of Figures 3a)
through d), respectively.
In the experiments with 2D signals depicted in Figure 4, the spatial frequencies of the occluding
and occluded signals are k T
Only the velocities
and the orientation of the occlusion boundary
vary. The velocities of the occluding and occluded
signals and the occlusion boundary normal
vectors, from left to right in Figure 3, are
a) a T
and n T
p2 ); c) a T
and n T
As per Theorem 1, the spectral extrema located
at
the spatiotemporal frequencies of both signals
and fit the constraint planes k^T a_i + ω = 0.
The oblique spectra intersecting
the peaks are the convolutions of the spectrum of
the step function with the frequencies of both signals
and fit lines described by the intersection of planes.
These spectra are parallel to the constraint plane
of the occluding signal and are consonant with its
velocity.
Theorem 2 is the generalization of Theorem 1
from sinusoidal to arbitrary signals and its geometric
interpretation is similar. For instance,
frequencies fit the constraint planes of the occluding and occluded signals, defined as k^T a_i + ω = 0.
In the distortion term, the Dirac
- function with arguments (k
1 and
k T a 1 +! a T Nk 2 represent a set of lines parallel
to the constraint plane of the occluding signal
and, for every discrete frequency
exhibited by both signals, there is
a frequency spectrum fitting lines parallel to the constraint plane of the occluding signal.
Fig. 4. Four cases of predicted and computed Fourier spectra of occlusion scenes. In all cases the frequencies of the occluding and the occluded signals are fixed; only the velocities and the orientation of the occlusion boundary vary. (top) a) through d): the four occlusion scenes. e) through h): Computed FFTs of corresponding occlusion scenes. (bottom) i) through l): Fourier spectra predicted by theoretical results.
The magnitudes of these spectra are determined by their corresponding scaling functions c_1n and c_2n. Theorem 2 reveals useful constraint planes, as the power spectra of both signals peak within the planes k^T a_i + ω = 0, and the constraint planes arising from the distortion are parallel to the spectrum of the occluding signal I_1(x, t).
3. The Aperture Problem: Degenerate
Cases
In the Fourier domain, the power spectrum of a
degenerate signal is concentrated along a linear
rather than a planar structure. To see this, consider
a 1D signal moving with a constant model
of velocity in a 2D space, in the direction of the
gradient normal n_i and with speed s_i.
The Fourier transform of this signal is concentrated where both k^T n_i s_i + ω = 0 and k^T n_i^⊥ = 0 hold, where n_i^⊥ is the negative reciprocal of n_i; their intersection forms a linear constraint onto which the spectrum of the degenerate signal resides. Therefore, the planar orientation describing
full velocity is undetermined. However, the presence
of an occlusion boundary disambiguates the
measurement of a degenerate occluding signal in
most cases as a straight-edged occlusion boundary
provides one constraint on normal velocity and so
does its corresponding degenerate occluding sig-
nal. Since these structures have an identical full
velocity, these constraints should be consistent
with it, allowing a system of equations to be formed to obtain full velocity. For instance, consider
the Fourier transform of a translating occluding degenerate signal expressed as its complex exponential series expansion:
Î_1(k, ω) = Σ_n c_1n δ(k - n k_1 n_1) δ(k^T n_1 s_1 + ω),   (28)
where n_1 is the normal of the signal, s_1 is its speed and k_1 is the fundamental frequency. Additionally, consider the Fourier transform of the occluding boundary with normal vector n_2 and speed s_2:
Û(k, ω) = Û(k) δ(k^T n_2 s_2 + ω).   (29)
The convolution of (28) and (29) yields the following spectrum:
Î(k, ω) = Σ_n c_1n Û(k - n k_1 n_1) δ((k - n k_1 n_1)^T n_2 s_2 + ω + n k_1 s_1).   (30)
Expression (30) allows two directional vectors to be derived, fitting the spectra of the degenerate occluding signal and boundary respectively, which are d_1^T = (n_1^T, -s_1) and d_2^T = (n_2^T, -s_2). Their cross product yields a vector a_1 normal to the planar structure containing both spectra, which is the full velocity of the degenerate occluding signal. The constraints on normal velocities form the following system of equations:
a_1^T n_1 = s_1,  a_1^T n_2 = s_2,   (31)
and its solution, obtained by dividing d_1 × d_2 by its third component, is
a_1 = ((d_1 × d_2)_1, (d_1 × d_2)_2) / (d_1 × d_2)_3,   (32)
which is full velocity when a constant model is used. This system has a unique solution if and only if n_1 is different from n_2; otherwise, s_1 = s_2 and (31) has no unique solution. Thus, we state
the following Theorem:
THEOREM 3. The full velocity of a degenerate
occluding signal is obtainable from the structure
of the Fourier spectrum if and only if its normal
is different from the normal of the occlusion
boundary.
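To illustrate the recovery of full velocity described above (a worked example with assumed normals and speeds, not taken from the paper), the following NumPy sketch forms the directional vectors d_1 and d_2, recovers the full velocity from their cross product as in (32), and checks the normal-velocity constraints of (31).

```python
import numpy as np

# Assumed normal directions and normal speeds for the degenerate occluding
# signal (n1, s1) and for the occluding boundary (n2, s2).
n1 = np.array([1.0, 0.0]);  s1 = 1.0
n2 = np.array([0.0, 1.0]);  s2 = -1.0

# Directional vectors fitting the two linear spectra in (k, omega) space.
d1 = np.array([n1[0], n1[1], -s1])
d2 = np.array([n2[0], n2[1], -s2])

# The cross product is normal to the plane containing both spectra; dividing
# by its third component yields the full velocity (defined when n1 and n2
# are not parallel).
m = np.cross(d1, d2)
a = m[:2] / m[2]
print("full velocity:", a)
print("check n1.a = s1:", np.dot(n1, a), " n2.a = s2:", np.dot(n2, a))
```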
We performed experiments with degenerate signals
as shown in Figure 5. An occluding degenerate
sinusoidal pattern with a fixed spatial orientation, translating with its normal velocity, is depicted in Figure 5a). The
pattern was generated according to
I 1 (x;
As can be seen from its Fourier transform 5e), the
frequency content is composed of two delta functions
from which only a normal velocity estimate can be
obtained by computing the orientation of the line
passing through the spectral peaks and the origin
of the frequency space.
Figure 5b) shows the occluding signal and the
occlusion edge combination. The normal vector
to the edge is 1:0). The Fourier transform
is shown in 5f), where the spectrum of the
edge is convolved with the peaks of the signal.
In this case, the full velocity of the degenerate
signal is obtained by computing the normal vector
to the plane containing the entire spectrum
Fig. 5. Cases of degenerate occluding signals. (top): a) Occluding signal. b) Occluding signal and boundary. c) Occluded signal. d) Complete occlusion scene. (bottom) e) through h): Corresponding frequency spectra.
and the origin of the frequency space. Figure 5c)
shows the occluded signal with spatial orientation
translating with normal velocity
This pattern was generated according
to
I 2 (x;
and its frequency content appears in 5g).
The complete occlusion scene is shown in 5d)
and the corresponding frequency content is depicted
in 5h). To disambiguate the normal velocity of
the occluding signal, it is first necessary to identify
the occluding velocity. This is accomplished by
finding a line that is parallel to the spectral orientation
of the Fourier transform of the occluding
edge and that also contains the frequency content
of one signal. In this case, this signal is said to be
occluding, and the normal to the plane containing
its frequency spectrum, including the spectrum of
the occluding edge convolved with its discrete fre-
quencies, yields a full velocity measurement.
4. Related Considerations
In this section we consider the relationship between
additive translucency and the theoretical
framework, the effects of occluding edges away from the origin of the spatiotemporal domain, occluding boundaries of various shapes and the relevance
of the theoretical model with respect to
Non-Fourier motions such as Zanker's Theta motions
[53].
4.1. Translucency
Transmission of light through translucent material
may cause multiple motions to arise within an
image region. Generally, this effect is depicted on the image plane as
I(x, t) = f(ξ_1) I_1(v_1(x, t)) + [1 - f(ξ_1)] I_2(v_2(x, t)),   (35)
where f(ξ_1) is a function of the density ξ_1 of the translucent material [17]. Under the local assumption of spatially constant f(ξ_1) with translucency factor φ, (35) is reformulated as a weighted super-
Fig. 6. The composition of an additive transparency scene. a): First sinusoidal signal. b): Second sinusoidal signal. c): Transparency created with the superposition of the first and second sinusoidal signals. d): Frequency spectrum of the transparency.
position of intensity profiles, written as
I(x, t) = φ I_1(v_1(x, t)) + (1 - φ) I_2(v_2(x, t)),   (36)
where I_1(v_1(x, t)) is the intensity profile of the translucent material and I_2(v_2(x, t)) is the intensity profile of the background. With I_1(v_1(x, t)) and I_2(v_2(x, t)) satisfying Dirichlet conditions, the frequency spectrum of (36) is written as:
Î(k, ω) = φ Î_1(k) δ(k^T a_1 + ω) + (1 - φ) Î_2(k) δ(k^T a_2 + ω).   (37)
Hence, with respect to its frequency structure,
translucency may be reduced to a special case of
occlusion for which the distortion terms vanish.
Figure 6 shows the Fourier transform of an additive
translucency composed of two sinusoidals.
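As a small check of this reduction (an added illustration; frequencies, speeds and the weight are arbitrary choices), the sketch below forms an additive superposition of two translating sinusoids and verifies that, unlike an occlusion scene, its space-time FFT contains only the spectral peaks of the two signals, with no distortion spectra.

```python
import numpy as np

N, T = 256, 64
v1, v2 = -1.0, 1.0                  # assumed signal speeds
k1, k2 = 2*np.pi/8, 2*np.pi/16      # assumed angular frequencies
phi = 0.5                           # assumed translucency factor

x = np.arange(N) - N // 2
t = np.arange(T)
X, Tm = np.meshgrid(x, t, indexing='ij')

I = phi * np.cos(k1 * (X - v1*Tm)) + (1 - phi) * np.cos(k2 * (X - v2*Tm))
F = np.abs(np.fft.fftshift(np.fft.fft2(I)))

# Count significant spectral bins: an additive transparency yields only the
# delta-function peaks of the two sinusoids (four bins), no distortion lines.
print("significant bins:", np.sum(F > 0.01 * F.max()))
```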
4.2. Phase Shifts
For reasons of simplicity and clarity, in each Theorem
and numerical result, the occluding boundary
contained the origin of the coordinate system. We
generalize this by describing the occlusion boundary
as
where y 0 is the y-axis intercept. The Fourier spectrum
of such a boundary includes a phase shift and
is written as:
Equation (39) can be further simplified as:
The Fourier spectrum of the boundary is to be
convolved with the complex exponential series expansions
of the occluding and occluded signals and
subsequently with the Fourier transform of the
Gaussian window. In the case of the occluding
signal, the convolution with the shifted occlusion
boundary can be written as:
and, similarly for the occluded signal:
These convolutions are combined together as before
to obtain the Fourier spectrum of occlusion
Fig. 7. Phase shifts from the occluding edge. (top): a) through d) Occlusion scenes with different values of y_0. (bottom) e) through h): Corresponding frequency spectra. The relative magnitudes of the occluding and occluded signals depend on their respective visible areas under the Gaussian envelope. For instance, the frequencies of the occluding signal dominate over those of the occluded signal in e), and vice versa in h).
with an occluding boundary not containing the origin
of the space.
We conducted experiments with 1D image signals
and shifted the occlusion point with different values of y_0 in (38). As observed in Figure 7,
these phase shifts do not alter the structure of occlusion
in frequency space. The variations in the
amplitude spectra are due to the Gaussian windowing
of the occlusion scene. For instance, the
frequency peaks of the occluding signal in Figure
7e) show more power than those of the occluded
signal, owing to the fact that the signal is dominant
within the Gaussian window. The contrary is
observed when the occluded signal occupies most
of the window, as shown in Figure 7h).
4.3. Generalized Occluding Boundaries
Typically, occlusion boundaries are unconstrained
in shape, yielding a variety of occluding situations.
Under the hypothesis that the motion of the occluding
boundary is rigid on the image plane, we
can derive the frequency structure of such occlusion
events. For instance, consider a generalized
occlusion boundary represented by the characteristic
function defined in the coordinates of the image
plane and the Fourier transforms of the complex
exponential series expansions of both the occluding
and occluded signals I 1 and I 2 . Substituting
these terms into (13) yields the following Fourier
spectrum
from which it is easily observed that the spectrum
of the occluding boundary is repeated at every
non-zero frequency of both signals. The spectrum
Fig. 8. Generalized occluding boundaries. a), b) and c): Images from a sequence in which the occluding pattern moves with a constant velocity. d): The frequency spectrum of the sequence, where the plane contains the spectrum of the boundary convolved with the frequency of the sinusoidal texture within the circular boundary.
occupies a plane descriptive of full velocity and
can be used to perform such measurements.
Figure 8 shows an experiment where the occluding
signal is within a circular occlusion boundary.
The signal and boundary are moving at a constant velocity, and the occluded
signal is a background of constant intensity. Figures
8a) through c) show the motion of the occluding
region while Figure 8d) is the frequency spectrum
of the sequence, from which we observe the
spectrum of the circular boundary and the peaks
representing the frequencies of the occluding sinusoidal
texture are confined to a planar region fully
descriptive of the image motion.
4.4. Non-Fourier Motion
Non-Fourier motion is characterized by its inability
to be explained by the MFFC principle. In other
terms, such motions generate power distributions
that are inconsistent with translational mo-
tion. Sources of Non-Fourier motion include such
phenomena as translucency and occlusion and, in
particular, Zanker's Theta motion stimuli involving
occlusion [53]. This category of motion is described
by an occlusion window that translates
with a velocity that is uncorrelated with the velocities
of the occluding and occluded signals. For
1D image signals, such an occlusion scene can be expressed as:
I(x, t) = R(x - v_3 t) I_1(x - v_1 t) + [1 - R(x - v_3 t)] I_2(x - v_2 t),   (44)
where v_3 is the velocity of the occlusion window.
Following Zanker and Fleet [53, 19], we model the occlusion window with a rectangle function in the spatial coordinate as
R(x) = 1 if |x - x_0| <= L/2, and R(x) = 0 otherwise,   (45)
where x_0 is the center of the window and L its width. Such a function has a non-zero value in the interval [x_0 - L/2, x_0 + L/2] and zero otherwise. We then
write the Fourier transform of the occlusion scene as the sum of the spectra of both signals and of the spectrum of the rectangle window, a sinc function carrying the phase shift from x_0 in (45), convolved with every discrete frequency of both signals. The spectra associated with the window are consonant with the motion of the occluding window and represent a
Fig. 9. Examples of Theta motion. a): An occlusion scene in which the occlusion window, the occluding and the occluded signals translate with three distinct velocities. b): Frequency spectrum of a). c): A second scene with different window and signal velocities. d): Frequency spectrum of c).
case of Non-Fourier motion, as they do not contain
the origin.
We performed two experiments with Theta motions
as pictured in Figure 9. It is easily observed
that the spectrum of the sinc function is convolved
with each frequency of both signals and that its
orientation is descriptive of the velocity of the window. As expected, the visible peaks represent the
motions of both signals in the MFFC sense.
5. Conclusion
Retinal image motion and optical flow as its approximation are fundamental concepts in the field of vision. The computation of optical flow is a challenging problem as image motion includes discontinuities and multiple values mostly due to scene geometry, surface translucency and various photometric effects such as surface reflectance. In
ectance. In
this contribution, we analyzed image motion in
frequency space with respect to motion discontinuities
and surface translucence. The motivation
for such a study emanated from the observation
that the frequency structure of occlusion, translucency
and Non-Fourier motion in frequency space
was not known. The results cast light on the exact
structure of occlusion, translucency, Theta mo-
tion, the aperture problem and signal degeneracy
for a constant model of image motion in the frequency
domain, with related geometrical properties.
Appendix
Proof Method of Theorem 2
The Fourier transform of the complex exponential series expansion of a 2D signal is:
Î_i(k) = ∫ Σ_n c_in e^{i x^T N k_i} e^{-i k^T x} dx = Σ_n c_in δ(k - N k_i),   (A1)
and the Fourier transform of a 2D step function under constant velocity is:
Û(k, ω) = Û(k) δ(k^T a_i + ω),   (A2)
where n_i is a vector normal to the occlusion
boundary. Introducing (A1) and (A2) into the
Fourier transform of (13) under constant velocity
and solving the convolutions leads to Theorem 2.
Notes
1. Signals that are termed as degenerate have a spatially
constant intensity gradient or, in other words, a unique
texture orientation. This phenomenon is generally referred
to as the aperture problem which arises when
the Fourier spectrum of I i (x) is concentrated on a line
rather than on a plane [18, 33]. Spatiotemporally, this
depicts the situation in which I i (x; t) exhibits a single
orientation. In this case, one only obtains the speed and
direction of motion normal to the orientation, noted as v_⊥i(x, t). If many normal velocities are found in a single neighborhood, their respective spectra fit the plane k^T a_i + ω = 0, from which full velocity may be obtained.
2. This assertion assumes differentiable sensor motion.
--R
Determining three-dimensional motion and structure from optical flow generated by several moving objects
A fast obstacle detection method based on optical flow.
Disparity analysis of images.
Performance of optical flow techniques.
The feasibility of motion and structure from noisy time-varying image velocity information
Optic flow to measure minute increments in plant growth.
A model for the detection of motion over time.
Motion segmentation and qualitative dynamic scene analysis from an image sequence.
A split-merge parallel block-matching algorithm for video displacement estimation
Stereo correspondence from optical flow.
The sampling and reconstruction of time-varying imagery with application in video systems
On the detection of motion and the computation of optical flow.
Obstacle detection by evaluation of optical flow
Measurement of Image Velocity.
Computation of component image velocity from local phase information.
Computational analysis of non-fourier motion
The use of optical flow for autonomous navigation.
Optical motions and space perception: An extension of gibson's analysis.
Subspace methods for recovering rigid motion 2: Algorithm and implemen- tation
Recovery of ego-motion using image stabilization
Direct computation of the focus of expan- sion
Segmentation of frame sequences obtained by a moving observer.
Mixture models for optical flow computation.
Vertical and horizontal disparities from phase.
A computer algorithm for reconstructing a scene from two projections.
The interpretation of a moving retinal image.
An iterative image-registration technique with an application to stereo vision
Directional selectivity and its use in early visual processing.
The accuracy of the computation of optical flow and of the recovery of motion parameters.
A video encoding system using conditional picture-element replenishment
Scene segmentation from visual motion using global optimization.
Advances in picture coding.
Motion recovery from image sequences using only
Motion compensated television coding: Part 1.
Motion and structure from optical flow.
Motion estimation from tagged mr image sequences.
How do we avoid confounding the direction we are looking and the direction we are moving.
Movement detectors of the correlation type provide sufficient information for local computation of 2d velocity
multiple motions from optical flow.
Edge detection and motion detection.
Bounds on time-to-collision and rotational component from first-order derivatives of image flow
A fast method to estimate sensor translation.
Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces
Estimating three-dimensional motion parameters of a rigid planar patch 2: Singular value decomposition
Coherence and transparency of moving plaids composed of fourier and non-fourier gratings
Theta motion: A paradoxical stimulus to explore higher-order motion extraction
An error-weighted regularization algorithm for image motion-field estimation
Automatic feature point extraction and tracking in image sequences for unknown image motion.
--TR
Edge detection and motion detection
Scene segmentation from visual motion using global optimization
Bounds on time-to-collision and rotational component from first-order derivatives of image flow
Obstacle detection by evaluation of optical flow fields
Stereo correspondence from optic flow
Computation of component image velocity from local phase information
The feasibility of motion and structure from noisy time-varying image velocity information
Techniques for disparity measurement
On the Detection of Motion and the Computation of Optical Flow
Three-dimensional motion computation and object segmentation in a long sequence of stereo frames
Subspace methods for recovering rigid motion I
Motion recovery from image sequences using only first order optical flow information
Motion segmentation and qualitative dynamic scene analysis from an image sequence
Performance of optical flow techniques
The use of optical flow for the autonomous navigation
Measurement of Image Velocity
The Accuracy of the Computation of Optical Flow and of the Recovery of Motion Parameters
A Fast Method to Estimate Sensor Translation
multiple motions from optical flow
A Fast Obstacle Detection Method based on Optical Flow
--CTR
Weichuan Yu , Gerald Sommer , Steven Beauchemin , Kostas Daniilidis, Oriented Structure of the Occlusion Distortion: Is It Reliable?, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.9, p.1286-1290, September 2002
Abhijit S. Ogale , Yiannis Aloimonos, A Roadmap to the Integration of Early Visual Modules, International Journal of Computer Vision, v.72 n.1, p.9-25, April 2007
Weichuan Yu , Gerald Sommer , Kostas Daniilidis, Multiple motion analysis: in spatial or in spectral domain?, Computer Vision and Image Understanding, v.90 n.2, p.129-152, May | image motion;aperture problem;occlusion;optical flow;non-Fourier motion |
365972 | Logic Based Abstractions of Real-Time Systems. | When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence, called , n-equivalence, where is a set of actions and n is a natural number. It is proved that two transition systems are , n-equivalent iff they give the same truth value to all qu-mu-calculus formulae such that the actions occurring in the modal operators are contained in , and with time constraints whose values are less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set of actions and a natural number n, produces a reduced transition system , n-equivalent to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one, still preserving , n-equivalence. | Introduction
In this paper we address the problem of verifying systems in which time plays
a fundamental role for a correct behaviour. We refer to the Algebra of Timed
Processes (ATP) [22] as a formalism able both to model time dependent systems
and to prove their properties. ATP is an extension of traditional process algebras
which can capture discrete quantitative timing aspects with respect to a global
clock.
We express the semantics of this language in terms of labeled transition systems
where some transitions are labeled by a special action, called the time action.
Such an action represents the progress of time and can be viewed as a clock tick.
One widely used method for verification of properties is model checking [8, 7,
Model checking is a technique that proves the correctness of a system specification with respect to a desired behavior by checking whether a structure, representing the specification, satisfies a temporal logic formula describing the expected behavior. Most existing verification techniques, and in particular those defined for concurrent calculi, like CCS [21], are based on a representation of
the system by means of a labeled transition system. In this case, model checking
consists in checking whether a labeled transition system is a model for a formula.
When representing systems specifications by transition systems, state explosion is one of the most serious problems: often we have to deal with transition systems with an extremely large number of states, thus making model checking inapplicable. Moreover, when quantitative temporal information (expressed by clock ticks) is considered in system specifications, state explosion is even more serious, the reason for this being that a new state is generated for every clock tick. Fortunately, in several cases, to check the validity of a property, it is not necessary to consider the whole transition system, but only an abstraction of it that maintains the information which "influences" the property. This consideration has been used in the definition of abstraction criteria for reducing transition systems in order to prove properties efficiently. Abstraction criteria of this kind are often based on equivalence relations defined on transition systems: minimizations with respect to different notions of equivalence are in fact used in many existing verification environments (see, for instance, [10, 13, 16]).
In this paper we present a notion of abstraction of transition systems, where
the abstraction is driven by the formulae of a quantitative temporal logic. This
logic, which we call qu-mu-calculus, is similar to the mu-calculus [19], in particular
to a variant of it [4], in which the modal operators are redefined to include the definition of time constraints. Many logics have been defined to deal with time aspects, see, for example [1-3, 14, 15, 20]. A fundamental feature of qu-mu-
calculus is that its formulae can be used to drive the abstraction: in particular,
given the actions and the time constraints occurring in the modal operators of a
formula of the qu-mu-calculus, we use them in defining an abstract (reduced)
transition system on which the truth value of is equivalent to its value on the
standard one. The abstraction is based on a notion of bisimulation equivalence
between transition systems, called h; ni-equivalence, where is a set of actions
(different from the time action) and n is a natural number: informally, two transition systems are h; ni-equivalent iff, by observing only the actions in
and the paths composed of time actions shorter than or equal to n, they exhibit
the same behaviour. Some interesting properties of such an equivalence are
presented.
We prove that two transition systems are h; ni-equivalent if and only if they
give the same truth value to all formulae such that the actions occurring in
the modal operators are contained in , and with time constraints whose values
are less than or equal to n. Thus, given a formula , with actions in and
maximum time constraint n, we can abstract the transition system to a smaller
one (possibly the minimum) h; ni-equivalent to it, on which can be checked.
In the paper we present a non-standard (abstract) semantics for the ASTP [22]
language, defining abstract transition systems. ASTP is the sequential subset of
ATP; actually, this is not a limitation: our abstract semantics is easily applicable
to the concurrent operators and its ability in reducing the transition system can
be suitably investigated also on the concurrent part. The abstract semantics
can be usefully exploited as a guide in implementing an algorithm to build the
reduced system.
We also present a set of syntactic rewriting rules which can transform a process
into a smaller one, while preserving h; ni-equivalence. This syntactic reduction
can be used as a rst step of the reduction process, before applying the abstract
semantics.
After the preliminaries of Section 2, we introduce our logic in Section 3 and the abstract semantics in Section 4. Section 5 describes the syntactic transformations and Section 6 concludes the paper.
2 Preliminaries
2.1 The Algebra of Timed Processes
Let us now quickly recall the main concepts of the Algebra of Timed Processes [22], which is used in the specification of real-time concurrent and distributed systems. For simplicity, we consider here only the subset of ATP, called ASTP (Algebra of Sequential Timed Processes), not containing parallel operators.
The syntax of sequential process terms (processes or terms for short) is the following:
p ::= 0 | α.p | p ⊕ p | ⌊p⌋(p) | x
where α ranges over a finite set of asynchronous actions A = {α, β, ...}. We denote by A_χ the set A ∪ {χ}, ranged over by μ. The action χ (time action) is not user-definable and represents the progress of time. x ranges over a set of constant names: each constant x is defined by a constant definition x def= p. We denote the set of process terms by P.
The standard operational semantics [22] is given by a relation → ⊆ P × A_χ × P, where P is the set of all processes; → is the least relation defined by the rules in Table 1.
Rule Act manages the prefixing operator: α.p evolves to p by a transition labeled by α. The ⊕ operator behaves as a standard nondeterministic choice for processes with asynchronous initial actions (rule Sum1 and the symmetric one, which is not shown). Moreover, if p and q can perform a χ action reaching respectively p′ and q′, then p ⊕ q can perform a χ action, reaching p′ ⊕ q′ (rule Sum2). The process ⌊p⌋(q) can perform the same asynchronous initial actions as p (rule Delay1). Moreover ⌊p⌋(q) can perform a χ action, reaching the process q (rule Delay2). Finally, rule Con says that a constant x behaves as p if x def= p is its definition. Note that there is no rule for the process 0, which thus cannot perform any move.
In the following we use χ.p to denote the term ⌊0⌋(p); this process can perform only the χ action and then becomes the process p. Moreover, we define χ^n.p (n > 1) as χ^n.p = χ.(χ^{n-1}.p).
Act: α.p -α-> p
Sum1: if p -α-> p′ then p ⊕ q -α-> p′ (symmetric rule omitted)
Sum2: if p -χ-> p′ and q -χ-> q′ then p ⊕ q -χ-> p′ ⊕ q′
Delay1: if p -α-> p′ then ⌊p⌋(q) -α-> p′
Delay2: ⌊p⌋(q) -χ-> q
Con: if p -μ-> p′ and x def= p then x -μ-> p′
Table 1. Standard operational semantics of ASTP (α ∈ A, μ ∈ A_χ)
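To make the operational rules concrete, the following Python sketch encodes ASTP terms and derives, for a given term, the set of transitions licensed by the rules Act, Sum1, Sum2, Delay1, Delay2 and Con as described above. The representation and all names are ours and purely illustrative, not taken from the paper.

```python
# A minimal sketch of ASTP terms and of the standard operational semantics,
# following the rules Act, Sum1, Sum2, Delay1, Delay2 and Con described above.
from dataclasses import dataclass

CHI = "chi"  # the time action

@dataclass(frozen=True)
class Nil: pass                      # 0
@dataclass(frozen=True)
class Prefix:                        # alpha.p
    action: str
    cont: "Term"
@dataclass(frozen=True)
class Sum:                           # p (+) q
    left: "Term"
    right: "Term"
@dataclass(frozen=True)
class Delay:                         # |_p_|(q)
    body: "Term"
    timeout: "Term"
@dataclass(frozen=True)
class Const:                         # constant x, defined in DEFS
    name: str

Term = Nil | Prefix | Sum | Delay | Const
DEFS: dict[str, Term] = {}           # constant definitions x def= p

def step(p: Term) -> set[tuple[str, Term]]:
    """All transitions (label, successor) of p under the standard semantics."""
    if isinstance(p, Prefix):                          # Act
        return {(p.action, p.cont)}
    if isinstance(p, Sum):
        moves = set()
        for side in (p.left, p.right):                 # Sum1 (and its symmetric rule)
            moves |= {(a, q) for a, q in step(side) if a != CHI}
        lchi = [q for a, q in step(p.left) if a == CHI]
        rchi = [q for a, q in step(p.right) if a == CHI]
        for ql in lchi:                                # Sum2: both operands must tick
            for qr in rchi:
                moves.add((CHI, Sum(ql, qr)))
        return moves
    if isinstance(p, Delay):
        moves = {(a, q) for a, q in step(p.body) if a != CHI}   # Delay1
        moves.add((CHI, p.timeout))                             # Delay2
        return moves
    if isinstance(p, Const):                           # Con: unfold the definition
        return step(DEFS[p.name])
    return set()                                       # 0 has no moves
```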
A labeled transition system (or transition system for short) is a quadruple T = (S, A_χ, →_T, s₀), where S is a set of states, A_χ is a set of transition labels (actions), s₀ ∈ S is the initial state, and →_T ⊆ S × A_χ × S is the transition relation. If (p, μ, q) ∈ →_T we write p -μ->_T q. Given a process p, we write p -σ->_T q, with σ = μ₁...μ_k ∈ A_χ*, when states p₁, ..., p_{k-1} exist such that p -μ₁->_T p₁ -μ₂->_T ... -μ_k->_T q; p -ε->_T p, where ε is the empty sequence. Given p ∈ S, we denote the set of the states reachable from p by →_T with R_{→_T}(p) = {q | p -σ->_T q for some σ ∈ A_χ*}.
Given a process p and a set of constant definitions, the standard transition system for p is defined as S(p) = (R_→(p), A_χ, →, p). Note that, with abuse of notation, we use → for denoting both the operational semantics and the transition relation among the states of the transition system.
On ASTP processes equivalence relations can be defined [22], based on the notion of bisimulation between states of the related transition systems.
Example 1. Let us consider a vending machine with a time-dependent behavior. The machine allows a user to obtain different services: a soft drink immediately after the request; a coffee after a delay of one time unit; a cappuccino after a delay of two time units; a cappuccino with chocolate after a delay of three time units. Moreover, it is possible to recollect the inserted coin, only if requested within one time unit. The ASTP specification of the machine is:
V def= coin.⌊recollect_money.V⌋( coffee.χ.(collect_coffee.V) ⊕ cappuccino.χ².(collect_cappuccino.V) ⊕ choc_cappuccino.χ³.(collect_choc_cappuccino.V) ⊕ soft_drink.collect_soft_drink.V )
The standard transition system for the vending machine has 14 states.
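As a usage example, the vending machine can be encoded with the constructors of the previous sketch and its standard transition system explored; the helpers below (again ours, illustrative only) build χ^n prefixes and count reachable states and transitions.

```python
def chi_n(n: int, p: Term) -> Term:
    """chi^n.p, i.e. n nested unit delays |_0_|(...), as defined above."""
    for _ in range(n):
        p = Delay(Nil(), p)
    return p

def then_V(*names: str) -> Term:
    """a1.a2.....ak.V, ending in the constant V (readability helper)."""
    p: Term = Const("V")
    for name in reversed(names):
        p = Prefix(name, p)
    return p

DEFS["V"] = Prefix("coin", Delay(
    Prefix("recollect_money", Const("V")),
    Sum(Prefix("coffee", chi_n(1, then_V("collect_coffee"))),
        Sum(Prefix("cappuccino", chi_n(2, then_V("collect_cappuccino"))),
            Sum(Prefix("choc_cappuccino", chi_n(3, then_V("collect_choc_cappuccino"))),
                Prefix("soft_drink", then_V("collect_soft_drink")))))))

def explore(p: Term):
    """Exhaustive exploration of the standard transition system S(p)."""
    seen, frontier, transitions = {p}, [p], 0
    while frontier:
        q = frontier.pop()
        for _, r in step(q):
            transitions += 1
            if r not in seen:
                seen.add(r)
                frontier.append(r)
    return len(seen), transitions

print(explore(Const("V")))   # prints the number of states and transitions of S(V)
```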
3 Quantitative temporal logic and abstractions
In order to perform quantitative temporal reasoning, we define a logic, that we call qu-mu-calculus, which is an extension of the mu-calculus [19] and in particular of the selective mu-calculus [4]. The syntax is the following, where Z ranges over a set of variables:
φ ::= tt | ff | φ ∧ φ | φ ∨ φ | ⟨α⟩_{R,<n} φ | [α]_{R,<n} φ | ⟨α⟩_{R,≥n} φ | [α]_{R,≥n} φ | Z | μZ.φ | νZ.φ
where
- α ∈ A and R ⊆ A;
- n ∈ N, where N is the set of natural numbers; n is called time value. In ⟨α⟩_{R,<n} and [α]_{R,<n} it must be n > 0.
The satisfaction of a formula φ by a state p of a transition system, written p ⊨ φ, is defined as follows: any state satisfies tt and no state satisfies ff; a state satisfies φ ∧ φ′ (respectively φ ∨ φ′) if it satisfies both φ and φ′ (respectively at least one of them); ⟨α⟩_{R,<n}, [α]_{R,<n}, ⟨α⟩_{R,≥n} and [α]_{R,≥n} are the quantitative modal operators. The informal meaning of the operators is the following:
⟨α⟩_{R,<n} φ is satisfied by a state which can evolve to a state satisfying φ by executing α, not preceded by actions in R ∪ {α}, within n time units.
[α]_{R,<n} φ is satisfied by a state which, for any execution of α occurring within n time units and not preceded by actions in R ∪ {α}, evolves to a state satisfying φ.
⟨α⟩_{R,≥n} φ is satisfied by a state which can evolve to a state satisfying φ by executing α, not preceded by actions in R ∪ {α}, after at least n time units.
[α]_{R,≥n} φ is satisfied by a state which, for any execution of α occurring after at least n time units and not preceded by actions in R ∪ {α}, evolves to a state satisfying φ.
As in standard mu-calculus, a fixed point formula has the form μZ.φ (νZ.φ), where μZ (νZ) binds free occurrences of Z in φ. An occurrence of Z is free if it is not within the scope of a binder μZ (νZ). A formula is closed if it contains no free variables. μZ.φ is the least fixpoint of the recursive equation Z = φ, while νZ.φ is the greatest one. We consider only closed formulae.
The precise definition of the satisfaction of a closed formula φ by a state p of a transition system T is given in Table 2. It uses the relation ⇒_{α,k} defined below.
Definition 1 (⇒_{α,k} relation). Given a transition system T = (S, A_χ, →_T, s₀), a set of actions ρ ⊆ A and k ∈ N, we define, for each α ∈ ρ, the relation ⇒_{α,k} ⊆ S × S such that p ⇒_{α,k} q if and only if p -σ->_T p′ -α->_T q for some state p′ and some sequence σ of actions not belonging to ρ, where k is the number of χ actions occurring in σ. By p ⇒_{α,k} q we express the fact that it is possible to pass from p to q by executing a (possibly empty) sequence of actions not belonging to ρ and containing exactly k χ actions, followed by the action α in ρ.
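Assuming Definition 1 as reconstructed above, the relation ⇒_{α,k} can be computed on an explicit finite transition system by a simple search; the sketch below (ours, not the paper's algorithm) collects the observations used by ⟨ρ, n⟩-equivalence, saturating the count of χ actions at n since larger counts are not distinguished by the equivalence.

```python
# Illustrative: trans maps a state to an iterable of (label, successor) pairs.
from collections import deque

CHI = "chi"

def observations(trans, p, rho: frozenset, n: int):
    """All triples (alpha, k, q) with alpha in rho such that p =>_{alpha,k} q,
    where k is the number of chi actions on the path, saturated at n."""
    seen = {(p, 0)}
    queue = deque([(p, 0)])
    obs = set()
    while queue:
        state, k = queue.popleft()
        for label, succ in trans.get(state, ()):
            if label in rho:
                obs.add((label, k, succ))            # final alpha step, alpha in rho
            else:
                k2 = min(n, k + 1) if label == CHI else k   # count chi, skip others
                if (succ, k2) not in seen:
                    seen.add((succ, k2))
                    queue.append((succ, k2))
    return obs
```

Two states can then be compared, in the spirit of Definition 2 below, by checking that their observation sets match up to the bisimulation on the reached states.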
A transition system T satisfies a formula φ iff its initial state satisfies φ. An ASTP process p satisfies a formula φ iff S(p) satisfies φ.
Example 2. Examples of properties concerning the vending machine described in the previous section are the following:
φ₁: "it always holds that, after a coin has been inserted, a soft drink may be collected within two time units";
φ₂: "it is not possible to recollect the inserted coin after more than one time unit".
3.1 Formula driven equivalence
A formula φ of the qu-mu-calculus can be used to define a bisimulation equivalence between transition systems. The bisimulation is defined by considering only the asynchronous actions occurring in the quantitative operators belonging to the formula, and the maximum time value of the quantitative operators occurring in the formula. Thus all formulae with the same set of occurring actions and the same maximum time value define the same bisimulation.
Table 2. Satisfaction of a formula by a state. In the table, the fixpoint formulae μZ.φ and νZ.φ are interpreted, as usual, through their approximants, and the notation φ[ψ/Z] indicates the substitution of ψ for every free occurrence of the variable Z in φ.
Given a set ρ ⊆ A of actions and a time value n, a ⟨ρ, n⟩-bisimulation relates states p and q if: i) for each path starting from p, containing k < n time actions and no action in ρ and ending with α ∈ ρ, there is a path starting from q, containing exactly k time actions and no action in ρ and ending with α ∈ ρ, such that the reached states are bisimilar, and ii) for each path starting from p, containing k ≥ n time actions and no action in ρ and ending with α ∈ ρ, there is a path starting from q, containing m ≥ n (possibly m ≠ k) time actions and no action in ρ and ending with α ∈ ρ, such that the reached states are bisimilar.
Definition 2 (⟨ρ, n⟩-bisimulation, ⟨ρ, n⟩-equivalence). Let T = (S, A_χ, →_T, p) and T′ = (S′, A_χ, →_{T′}, p′) be transition systems, ρ ⊆ A and n ∈ N. A ⟨ρ, n⟩-bisimulation B is a binary relation on S × S′ such that r B q implies, for every α ∈ ρ:
- if r ⇒_{α,k} r′ in T with k < n, then q ⇒_{α,k} q′ in T′ for some q′ with r′ B q′, and symmetrically for q;
- if r ⇒_{α,k} r′ in T with k ≥ n, then q ⇒_{α,m} q′ in T′ for some m ≥ n and some q′ with r′ B q′, and symmetrically for q.
- T and T′ are ⟨ρ, n⟩-equivalent (T ∼_{ρ,n} T′) iff there exists a ⟨ρ, n⟩-bisimulation containing the pair (p, p′).
Fig. 1. Examples of ⟨ρ, n⟩-equivalence
Example 3. Consider the transition systems illustrated in Figure 1. T1 is ⟨{a}, n⟩-equivalent to T2 for n < 3, while T1 is not ⟨{a}, n⟩-equivalent to T2 for n ≥ 3. Moreover, T1 is not ⟨{a, b}, n⟩-equivalent to T2, for any n ∈ N.
The following proposition holds, relating equivalences with different ρ and n.
Proposition 1. For each ρ′ ⊆ ρ ⊆ A and n′ ≤ n, T ∼_{ρ,n} T′ implies T ∼_{ρ′,n′} T′.
Proof. See Appendix A.
In order to relate ⟨ρ, n⟩-equivalence with quantitative temporal properties, we introduce the following definition, concerning equivalences based on sets of formulae.
Definition 3 (logic-based equivalence). Let T and T′ be two transition systems, and Φ a set of closed formulae. The logic-based equivalence ∼_Φ is defined by: T ∼_Φ T′ iff, for each φ ∈ Φ, T ⊨ φ if and only if T′ ⊨ φ.
Given a formula φ of the qu-mu-calculus, we define the set of occurring actions in φ and the maximum time value of φ.
Definition 4 (O(φ), max(φ)). Given a formula φ of the qu-mu-calculus, the set O(φ) of the actions occurring in φ is defined inductively on the structure of φ, by collecting the actions occurring in the quantitative modal operators of φ. The maximum time value max(φ) of the modal operators occurring in φ is defined analogously, as the maximum of the time values n occurring in the quantitative modal operators of φ.
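The computation of O(φ) and max(φ) is a straightforward walk over the formula; the sketch below is our own illustration. Note one assumption: we let O(φ) collect, for each modal operator, the action α together with the actions of the relativization set R; the paper's exact inductive clauses are not reproduced here and may differ in this detail.

```python
# Illustrative sketch of qu-mu-calculus formulae, O(phi) and max(phi).
from dataclasses import dataclass

@dataclass(frozen=True)
class TT: pass
@dataclass(frozen=True)
class FF: pass
@dataclass(frozen=True)
class Var:
    name: str
@dataclass(frozen=True)
class BinOp:                        # conjunction or disjunction
    op: str                         # "and" | "or"
    left: "Formula"
    right: "Formula"
@dataclass(frozen=True)
class Modal:                        # <a>_{R,<n}, [a]_{R,<n}, <a>_{R,>=n}, [a]_{R,>=n}
    kind: str                       # "diamond" | "box"
    action: str
    restrict: frozenset[str]        # the set R
    bound: str                      # "<" | ">="
    n: int
    body: "Formula"
@dataclass(frozen=True)
class Fix:                          # mu Z. phi  /  nu Z. phi
    kind: str                       # "mu" | "nu"
    var: str
    body: "Formula"

Formula = TT | FF | Var | BinOp | Modal | Fix

def occurring_actions(phi: Formula) -> frozenset[str]:
    """O(phi): actions mentioned by the quantitative modal operators of phi
    (assumption: the sets R are included)."""
    if isinstance(phi, Modal):
        return frozenset({phi.action}) | phi.restrict | occurring_actions(phi.body)
    if isinstance(phi, BinOp):
        return occurring_actions(phi.left) | occurring_actions(phi.right)
    if isinstance(phi, Fix):
        return occurring_actions(phi.body)
    return frozenset()

def max_time(phi: Formula) -> int:
    """max(phi): largest time value n of a modal operator in phi (0 if none)."""
    if isinstance(phi, Modal):
        return max(phi.n, max_time(phi.body))
    if isinstance(phi, BinOp):
        return max(max_time(phi.left), max_time(phi.right))
    if isinstance(phi, Fix):
        return max_time(phi.body)
    return 0
```

Given a formula φ, the pair (occurring_actions(φ), max_time(φ)) is what drives the choice of ⟨ρ, n⟩ for the abstraction discussed below.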
The following theorem states that ⟨ρ, n⟩-equivalent transition systems satisfy the same set of formulae with occurring actions in ρ and maximum time value less than or equal to n.
Theorem 1. Let T and T′ be transition systems and let ρ ⊆ A and n ∈ N. Then T ∼_{ρ,n} T′ if and only if T ∼_Φ T′, where Φ = {φ | φ is a closed formula of the qu-mu-calculus such that O(φ) ⊆ ρ and max(φ) ≤ n}.
Proof. See Appendix A.
4 Abstract transition systems and abstract semantics
In this section, in order to reduce the number of states of a transition system for model checking, we define an abstraction of the transition system on which a formula φ can be equivalently checked. First we define the notion of time path. A time path is an acyclic path composed only of χ actions and such that each state (but the first one) has only one input transition and each state (but the last one) has only one output transition.
Definition 5 (time path). Let T = (S, A_χ, →_T, s₀) be a transition system and p₁ -χ->_T p₂ -χ->_T ... -χ->_T p_n a path of T. The path is a time path iff:
- for no i ≠ j, 1 ≤ i, j ≤ n, it holds that p_i = p_j;
- ∀i, 1 ≤ i < n, there is no transition p_i -μ->_T q with (μ, q) ≠ (χ, p_{i+1});
- ∀i, 1 < i ≤ n, there is no q ≠ p_{i-1} such that q -μ->_T p_i.
Given an ASTP process p and a pair ⟨ρ, n⟩, we define an abstract transition system for p by means of a non-standard semantics which consists of a set of inference rules that skip actions not in ρ and produce time paths not longer than n. The abstract transition system is ⟨ρ, n⟩-equivalent to the standard transition system of p.
The non-standard rules are shown in Table 3 (the symmetric rules of Sum1 and Sum2 are not shown). They use a transition relation →^m_{ρ,n} parameterized by an integer m ≤ n. The ideas on which the semantics is based are the following:
- the actions in ρ are always performed (rules Act1, Delay4 and Sum1);
- the actions not in ρ are skipped: when an action not in ρ is encountered, a "look-ahead" is performed in order to reach either an action in ρ or a time action (rules Act2, Delay3 and Sum2);
- when a time action is encountered, it is skipped only if the process we reach by this action can perform a sequence of n time units. In order to count the time units we use the superscript of →^m_{ρ,n}: a transition p -χ->^m_{ρ,n} q occurs when an action belonging to ρ can be executed after m time actions starting from p. In fact, in order to generate the transition p -χ->^m_{ρ,n} q, we first prove that q -χ->_{ρ,n} q′ for some q′ (rules Delay1 and Delay2, Sum3 and Sum4). Successive applications of Delay2 and Sum4 allow us to skip all the time actions in a sequence but the last n ones.
Note that in the premises of rules Delay3, Delay4, Sum1, Sum2, Sum3 and Sum4 the standard operational relation → is used, in order to know the first action of the process and consequently to respect the standard behavior of the operators, which is different depending on whether the first action is a time action or not.
The following proposition characterizes the transitions of the non-standard semantics.
Proposition 2. Let ρ ⊆ A and n ∈ N. For each ASTP process p,
1. p -α->^m_{ρ,n} q with α ≠ χ implies α ∈ ρ and m = 0;
2. p -χ->^m_{ρ,n} q implies m > 0.
Proof. By induction on the depth of the inference.
The proposition states that there are two kinds of transitions: the first one represents the execution of an action α ∈ ρ and is characterized by the superscript 0; the second one represents the execution of a χ action, and is characterized by a positive superscript.
The following result holds, relating the paths composed of time actions of the standard transition system with those of the non-standard one.
Proposition 3. Let ρ ⊆ A and n ∈ N. For each ASTP process p,
1. if, in the standard semantics, p reaches an action of ρ through a path containing j ≤ n time actions, then the non-standard semantics derives for p a corresponding path containing exactly j time actions;
2. if, in the standard semantics, p reaches an action of ρ through a path containing j > n time actions, then the non-standard semantics derives for p a corresponding path containing exactly n time actions.
Proof. See Appendix A.
The proposition states that, whenever there is a path in the standard semantics composed of less than or equal to n time actions, followed by an action in ρ, a path with the same number of time actions occurs in the abstract system, while every path with more than n time actions in the standard system corresponds to a path with exactly n time actions in the abstract system.
Now we formally define the notion of abstract transition system.
Definition 6 (abstract transition system). For each ASTP process p, given ρ ⊆ A and n ∈ N, the abstract transition system for p is defined as N_{ρ,n}(p) = (R_⇒(p), A_χ, ⇒, p), where q -μ-⇒ q′ if and only if there exists j such that q -μ->^j_{ρ,n} q′, and R_⇒(p) is the set of states reachable from p through ⇒.
The following theorem holds, stating that the transition system defined by the non-standard semantics is a suitable abstraction of the standard one.
Theorem 2. Let ρ ⊆ A and n ∈ N. For each ASTP process p,
1. the transitions of N_{ρ,n}(p) are labeled only either by actions in ρ or by χ;
2. the length of each time path in N_{ρ,n}(p) is less than or equal to n;
3. S(p) ∼_{ρ,n} N_{ρ,n}(p).
Proof. See Appendix A.
Note that, if n = 0, the abstract transition system N_{ρ,0}(p) for a process p does not contain transitions labeled by time actions and expresses only the precedence properties between the asynchronous actions in ρ. The following proposition relates ⟨ρ, n⟩-equivalences with different ρ and n. It says that ⟨ρ, n⟩-equivalence is preserved by keeping a larger ρ and a greater n.
Proposition 4. Let ρ ⊆ ρ′ ⊆ A and n ≤ n′. For each ASTP process p, S(p) ∼_{ρ,n} N_{ρ′,n′}(p).
Proof. By Proposition 1 and by Theorem 2, point 3.
Table 3. Non-standard operational semantics for ASTP (rules Act1, Act2, Delay1, Delay2, Delay3, Delay4, Sum1, Sum2, Sum3, Sum4 and Con; the symmetric rules of Sum1 and Sum2 are omitted).
Example 4. Recall the vending machine of Example 1. Let us suppose that we have to verify the two formulae φ₁ and φ₂ expressed in Example 2. The formula φ₁ can be checked on the abstract transition system N_{ρ₁,n₁}(V), with ρ₁ = {coin, collect_soft_drink} and n₁ = 2; this abstract transition system has 14 transitions. On the other hand, φ₂ can be checked on N_{ρ₂,n₂}(V), with ρ₂ = {coin, recollect_money} and n₂ = 1, which has 13 transitions.
5 Syntactic reduction
In this section we investigate a syntactic approach to the reduction of transition systems, still based on the formula to be checked. Given a process p and a property φ, it is possible to perform syntactic transformations which reduce the size of p (in terms of number of operators), based on the actions and the time values occurring in φ. The transformations are ⟨ρ, n⟩-equivalence preserving, that is, φ can be equivalently checked on the transformed process. The syntactic reduction can be used independently from the semantic abstraction defined in the previous section.
Table 4. Transformation rules
The ⟨ρ, n⟩-equivalence preserving transformations are shown in Table 4 in the form of rewriting rules: p ↦ q means "rewrite p as q". Rule R1 allows deleting an asynchronous action not in ρ, while rules R2 and R3 cancel time actions from sequences of time actions. R2 deletes m − n time actions from a sequence of m ones (if m > n); it can only be applied if the sequence is not the operand of a summation, and this is ensured by imposing that the sequence is prefixed by an asynchronous action. When handling summations, R3 is applied, which deletes n time actions from both operands. Note that, in order to preserve ⟨ρ, n⟩-equivalence, in all cases the transformed term must be guarded by an asynchronous action. The following theorem states the correctness of the transformations.
Theorem 3. Let ρ ⊆ A, n ∈ N and q be an ASTP process. If q ↦ q′ by one of the rules in Table 4, then S(q) ∼_{ρ,n} S(q′).
Proof. See Appendix A.
Other rules could be defined, performing further reductions. However, every syntactic method, being static, cannot perform all possible simplifications, since it cannot "know" the behavior of the process at "run time". A semantic approach, like that described in the preceding section, based on an abstract semantics, can in general be more precise. On the other hand, compared with the semantic approach, the syntactic one has the advantage of being less complex in time, since it only analyzes the source code, without executing the program.
Though the semantic and syntactic reductions are independent, they can be profitably combined. Given a process p, first it can be syntactically transformed into a process q, and then an abstract transition system can be built for q using the abstract semantics.
Example 5. Recall the vending machine in Example 1. Let us suppose that we have to verify the property φ₂ of Example 2. If we apply the transformation rules to the vending machine, with ρ₂ = {coin, recollect_money} and n₂ = 1, we obtain a reduced process V2 whose summands include:
cappuccino.χ.(collect_cappuccino.V2)
choc_cappuccino.χ.(collect_choc_cappuccino.V2)
The formula φ₂ can be checked on the standard transition system for V2, which has 14 transitions. Moreover, φ₂ can be checked on the abstract transition system N_{ρ₂,n₂}(V2), obtained by applying to V2 the abstract semantics; it has 9 transitions. Note that applying first the syntactic reduction and then the abstract semantics produces a transition system smaller than the one obtained with the abstract semantics applied to the initial process V.
6 Conclusions
In this paper we have presented an approach to the problem of the reduction of the number of states of a transition system. Many abstraction criteria for system specifications not including time constraints have been defined, see for example [4, 6, 9, 11, 12]. For real-time systems, the work [17] defines abstractions for transition systems with quantitative labels, but there the abstraction is not driven by the property to be proved.
We have introduced an abstract semantics for ASTP processes in order to formally define the abstract transition system. Our abstract semantics is easily applicable to the concurrent operator: Appendix B shows the extension of the semantics to cope with this operator.
The abstract semantics can be used to design a tool for automatically building an abstract transition system. In the implementation, some care must be taken to manage infinite loops which can occur in the look-ahead process. The syntactic reductions are easily implementable.
The degree of reduction performed by the abstract semantics depends on the size of the set ρ of actions and on the bound n. In particular, the reduction can be significant either when the set ρ is a small subset of A or when the bound n is small with respect to the length of the time paths in the standard transition system. Obviously, no reduction is performed if ρ = A and n is greater than the longest time path in the standard transition system.
--R
Model Checking via Reachability Testing for Timed Automata.
Logics and Models of Real Time: A Survey.
A Really Temporal Logic.
Selective mu-calculus: New Modal Operators for Proving Properties on Reduced Transition Systems
Selective mu-calculus and Formula-Based Equivalence of Transition Systems
Property Preserving Simulations
Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications
Model Checking and Abstraction.
The NCSU Concurrency Workbench.
Generation of Reduced Models for Checking Fragments of CTL.
Abstract Interpretation of Reactive Systems.
Aboard AUTO.
CADP: A Protocol Validation and Verification Toolbox
Concept of Quantified Abstract Quotient Automaton and its Advantage
Symbolic Model Checking for real-time Systems
Results on the propositional mu-calculus
From Timed Automata to Logic - and Back
The Algebra of Timed Processes
Local Model Checking for Real-Time Systems
--TR
Automatic verification of finite-state concurrent systems using temporal logic specifications
Communication and concurrency
Verifying temporal properties of processes
A really temporal logic
Symbolic model checking for real-time systems
The algebra of timed processes, ATP
Abstract interpretation of reactive systems
Selective mu-calculus and formula-based equivalence of transition systems
From Timed Automata to Logic - and Back
Concept of Quantified Abstract Quotient Automaton and its Advantage
Model Checking via Reachability Testing for Timed Automata
Generalized Quantitative Temporal Reasoning
Property Preserving Simulations
Generation of Reduced Models for Checking Fragments of CTL
Local Model Checking for Real-Time Systems (Extended Abstract)
Validation and Verification Toolbox
The NCSU Concurrency Workbench
Logics and Models of Real Time
Real-Time and the Mu-Calculus (Preliminary Report)
Selective µ-calculus | temporal logic;ATP;state explosion
367881 | Empirical Studies of a Prediction Model for Regression Test Selection. | AbstractRegression testing is an important activity that can account for a large proportion of the cost of software maintenance. One approach to reducing the cost of regression testing is to employ a selective regression testing technique that 1) chooses a subset of a test suite that was used to test the software before the modifications, then 2) uses this subset to test the modified software. Selective regression testing techniques reduce the cost of regression testing if the cost of selecting the subset from the test suite together with the cost of running the selected subset of test cases is less than the cost of rerunning the entire test suite. Rosenblum and Weyuker recently proposed coverage-based predictors for use in predicting the effectiveness of regression test selection strategies. Using the regression testing cost model of Leung and White, Rosenblum and Weyuker demonstrated the applicability of these predictors by performing a case study involving 31 versions of the KornShell. To further investigate the applicability of the Rosenblum-Weyuker (RW) predictor, additional empirical studies have been performed. The RW predictor was applied to a number of subjects, using two different selective regression testing tools, DejaVu and TestTube. These studies support two conclusions. First, they show that there is some variability in the success with which the predictors work and second, they suggest that these results can be improved by incorporating information about the distribution of modifications. It is shown how the RW prediction model can be improved to provide such an accounting. | Introduction
Regression testing is an important activity that can account for a large proportion of the cost of software
maintenance [5, 17]. Regression testing is performed on modified software to provide confidence that the
software behaves correctly and that modifications have not adversely impacted the software's quality. One
approach to reducing the cost of regression testing is to employ a selective regression testing technique.
A selective regression testing technique chooses a subset of a test suite that was used to test the software
before modifications were made, and then uses this subset to test the modified software. 5 Selective regression
testing techniques reduce the cost of regression testing if the cost of selecting the subset from the test suite
together with the cost of running the selected subset of test cases is less than the cost of rerunning the entire
test suite.
Empirical results obtained by Rothermel and Harrold on the effectiveness of their selective regression
testing algorithms, implemented as a tool called DejaVu, suggest that test selection can sometimes be effective
in reducing the cost of regression testing by reducing the number of test cases that need to be rerun [24, 26].
However, these studies also show that there are situations in which their algorithm is not cost-effective.
Furthermore, other studies performed independently by Rosenblum and Weyuker with a different selective
1 Department of Computer and Information Science, Ohio State University.
2 Department of Information and Computer Science, University of California, Irvine.
3 Department of Computer Science, Oregon State University.
5 A variety of selective regression testing techniques have been proposed (e.g., [1, 3, 4, 6, 7, 8, 9, 10, 12, 13, 15, 16, 18, 21,
26, 27, 28, 29, 30]). For an overview and analytical comparison of these techniques, see [25].
regression testing algorithm, implemented as a tool called TestTube [8], also show that such methods are
not always cost-effective [23]. When selective regression testing is not cost-effective, the resources spent
performing the test case selection are wasted. Thus, Rosenblum and Weyuker argue in [23] that it would be
desirable to have a predictor that is inexpensive to apply but could indicate whether or not using a selective
regression testing method is likely to be worthwhile.
With this motivation, Rosenblum and Weyuker [23] propose coverage-based predictors for use in predicting
the cost-effectiveness of selective regression testing strategies. Their predictors use the average percentage of
test cases that execute covered entities-such as statements, branches, or functions-to predict the number
of test cases that will be selected when a change is made to those entities. One of these predictors is used to
predict whether a safe selective regression testing strategy (one that selects all test cases that cover affected
will be cost-effective. Using the regression testing cost model of Leung and White [19], Rosenblum
and Weyuker demonstrate the usefulness of this predictor by describing the results of a case study they
performed involving 31 versions of the KornShell [23]. In that study, the predictor reported that, on average,
it was expected that 87.3% of the test cases would be selected. Using the TestTube approach, 88.1% were
actually selected on average over the 31 versions. Since the difference between these values is very small,
the predictor was clearly extremely accurate in this case. The authors explain, however, that because of
the way their selective regression testing model employs averages, the accuracy of their predictor might vary
significantly in practice from version to version. This is particularly an issue if there is a wide variation in
the distribution of changes among entities [23]. However, because their predictor is intended to be used for
predicting the long-term behavior of a method over multiple versions, they argue that the use of averages is
acceptable.
To further investigate the applicability of the Rosenblum-Weyuker (RW) predictor for safe selective
regression testing strategies, we present in this paper the results of additional studies. We applied the RW
predictor to subjects developed by researchers at Siemens Corporate Research for use in studies to compare
the effectiveness of certain software testing strategies [14]. For the current paper we used both DejaVu and
TestTube to perform selective regression testing. In the following sections we discuss the results of our
studies.
2 Background: The Rosenblum-Weyuker Predictor
Rosenblum and Weyuker presented a formal model of regression testing to support the definition and computation
of predictors of cost-effectiveness [23]. Their model builds on work by Leung and White on modeling
the cost of employing a selective regression testing method [19]. In both models, the total cost of regression
testing incorporates two factors: the cost of executing test cases, and the cost of performing analyses to
support test selection. A number of simplifying assumptions are made in the representation of the cost in
these models:
1. The costs are constant on a per-test-case basis.
2. The costs represent a composite of the various costs that are actually incurred; for example, the cost
associated with an individual test case is a composite that includes the costs of executing the test case,
storing execution data, and validating the results.
3. The cost of the analyses needed to select test cases from the test suite has a completely negative impact
on cost-effectiveness, in the sense that analysis activities drain resources that could otherwise be used
to support the execution of additional test cases.
4. Cost-effectiveness is an inherent attribute of test selection over the complete maintenance life-cycle,
rather than an attribute of individual versions.
As in Rosenblum and Weyuker's model [23], we let P denote the system under test and let T denote the regression test suite for P, with |T| denoting the number of individual test cases in T. Let M be the selective regression testing method used to choose a subset of T for testing a modified version of P, and let E be the set of entities of the system under test that are considered by M. It is assumed that T and E are non-empty and that every syntactic element of P belongs to at least one entity in E.
The Rosenblum-Weyuker (RW) model defined covers_M(t, e) as the coverage relation induced by method M for P and defined over T × E, with covers_M(t, e) true if and only if the execution of P on test case t causes entity e to be exercised at least once. Rosenblum and Weyuker specify meanings for "exercised" for several kinds of entities of P. For example, if e is a function or module of P, e is exercised whenever it is invoked, and if e is a simple statement, statement condition, definition-use association or other kind of execution subpath of P, e is exercised whenever it is executed.
Letting E_C denote the set of covered entities, the RW model defined E_C as follows:
E_C = {e ∈ E | covers_M(t, e) for some t ∈ T},
with |E_C| denoting the number of covered entities. Furthermore, covers_M(t, e) can be represented by a 0-1 matrix C, whose rows represent elements of T and whose columns represent elements of E. Then, element C_{i,j} of C is defined to be:
C_{i,j} = 1 if covers_M(t_i, e_j), and C_{i,j} = 0 otherwise.
Finally, CC was the cumulative coverage achieved by T (i.e., the total number of ones in the 0-1 matrix):
CC = Σ_{i=1}^{|T|} Σ_{j=1}^{|E|} C_{i,j}.
As a first step in computing a predictor for safe strategies when a single entity had been changed, Rosenblum and Weyuker considered the expected number of test cases that would have to be rerun. Calling this average N_M, they defined:
N_M = CC / |E|.
Rosenblum and Weyuker emphasized that this predictor was only intended to be used when the selective regression testing strategy's goal was to rerun all affected test cases.
A slightly refined variant of N_M was defined using E_C rather than E as the universe of entities, N_M = CC / |E_C|. Then the fraction of the test suite that must be rerun was denoted π_M, the predictor for |T_M|/|T| (where T_M is the subset of T selected by M):
π_M = N_M / |T| = CC / (|E_C| · |T|).
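As a concrete illustration of these definitions (as reconstructed above), the following sketch computes CC, N_M, and π_M from a boolean coverage matrix. The function and variable names are ours, not the papers'.

```python
# Illustrative computation of the RW predictor from a 0-1 coverage matrix.
# Rows are test cases, columns are entities.
from typing import Sequence

def rw_predictor(C: Sequence[Sequence[int]]) -> float:
    """Return pi_M = CC / (|E_C| * |T|), the predicted fraction of T selected."""
    num_tests = len(C)
    num_entities = len(C[0]) if num_tests else 0
    # Covered entities E_C: columns containing at least one 1.
    covered = [j for j in range(num_entities) if any(row[j] for row in C)]
    # Cumulative coverage CC: total number of 1s in the matrix.
    cc = sum(sum(row) for row in C)
    if not covered or num_tests == 0:
        return 0.0
    n_m = cc / len(covered)      # average number of tests covering a covered entity
    return n_m / num_tests       # fraction of the suite predicted to be rerun

# Example: 4 test cases, 3 entities (one entity is never covered).
coverage = [
    [1, 0, 1],
    [1, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
]
print(rw_predictor(coverage))    # CC = 6, |E_C| = 2, |T| = 4  ->  pi_M = 0.75
```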
Rosenblum and Weyuker discussed results of a case study in which test selection and prediction results
were compared for 31 versions of the KornShell using the TestTube selective regression testing method. As
mentioned above, in this study, the test selection technique chose an average of 88.1% of the test cases in
the test suite over the 31 versions, while the predicted value was 87.3%. They concluded that, because the
difference between these values was very small, their results indicated the usefulness of their predictor as a
way of predicting cost-effectiveness.
3 Two New Empirical Studies of the Rosenblum-Weyuker Predictor
The results of the Rosenblum-Weyuker case study were encouraging for two reasons:
1. The difference between the predicted and actual values was insignificant.
2. Because a large proportion of the test set would have to be rerun for regression testing, and it could
be quite expensive to perform the analysis necessary to determine which test cases did not need to be
rerun, it would often be cost-effective to use the predictor to discover this and then simply rerun the
test suite rather than selecting a subset of the test suite.
Nevertheless, this study involved a single selective regression testing method applied to a single subject pro-
gram, albeit a large and widely-used one for which there were a substantial number of actual production
versions. In order to obtain a broader picture of the usefulness of the RW predictor, we conducted additional
studies with other subject software and other selective regression testing methods. In particular, we
performed two new studies with two methods, DejaVu and TestTube, applied to a suite of subject programs
that have been used in other studies in the testing literature.
3.1 Statement of Hypothesis
The hypothesis we tested in our new studies is the hypothesis of the Rosenblum-Weyuker study:
Hypothesis: Given a system under test P , a regression test suite T for P , and a selective
regression testing method M , it is possible to use information about the coverage relation covers M
induced by M over T and the entities of P to predict whether or not M will be cost-effective for
regression testing future versions of P .
The previous and current studies test this hypothesis under the following assumptions:
1. The prediction is based on a cost metric that is appropriate for P and T . Certain simplifying assumptions
are made about costs, as described in Section 2.
2. The prediction is performed using data from a single version of P to predict cost-effectiveness for all
future versions of P .
3. "Cost-effective" means that the cumulative cost over all future versions of P of applying M and
executing the test cases in T selected by M is less than the cumulative cost over all future versions of
P of running all test cases in T (the so-called retest-all method).
Table 1: Summary of subject programs. For each of the seven subject programs (tcas, totinfo, schedule1, schedule2, printtokens1, printtokens2, and replace), the table lists the number of lines of code, the number of functions, the number of modified (fault-seeded) versions, the test pool size, the average size of the generated test suites, and a brief description of the program's function; for example, replace, a pattern-matching and replacement program, has 516 lines of code and 21 functions.
3.2 Subject Programs
For our new studies, we used seven C programs as subjects that had been previously used in a study by
researchers at Siemens Corporate Research [14]. Because the researchers at Siemens sought to study the
fault-detecting effectiveness of different coverage criteria, they created faulty modified versions of the seven
base programs by manually seeding the programs with faults, usually by modifying a single line of code in
the base version, and never modifying more than five lines of code. Their goal was to introduce faults that
were as realistic as possible, based on their experience. Ten people performed the fault seeding, working
"mostly without knowledge of each other's work'' [14, p. 196].
For each base program, Hutchins et al. created a large test pool containing possible test cases for the
program. To populate these test pools, they first created an initial set of black-box test cases "according to
good testing practices, based on the tester's understanding of the program's functionality and knowledge of
special values and boundary points that are easily observable in the code" [14, p. 194], using the category
partition method and the Siemens Test Specification Language tool [2, 20]. They then augmented this set
with manually-created white-box test cases to ensure that each executable statement, edge, and definition-use
pair in the base program or its control flow graph was exercised by at least 30 test cases. To obtain
meaningful results with the seeded versions of the programs, the researchers retained only faults that were
"neither too easy nor too hard to detect" [14, p. 196], which they defined as being detectable by at least
three and at most 350 test cases in the test pool associated with each program.
Table 1 presents information about these subjects. For each program, the table lists its name, the number
of lines of code in the program, the number of functions in the program, the number of modified (i.e., fault
seeded) versions of the program, the size of the test pool, the average number of test cases in each of the 1000
coverage-based test suites we generated for our studies, and a brief description of the program's function.
We describe the generation of the 1000 coverage-based test suites in greater detail below.
3.3 Design of the New Studies
In both studies, our analysis was based on measurements of the following variables:
Independent Variable: For each subject program P, test suite T for P, and selective regression testing method M, the independent variable is the relation covers_M(t, e) defined in Section 2.
Dependent Variables: For each subject program P , test suite T for P and selective regression
testing method M , there are two dependent variables: (1) the cost of applying M to P and T ,
and (2) the cost of executing P on the test cases selected by M from T .
We then used the model described in Section 2 plus our measurements of the dependent variables to perform
analyses of cost-effectiveness. Each study involved different kinds of analysis, as described below.
For both studies, we used the Siemens test pools from which we selected smaller test suites. In particular,
we randomly generated 1000 branch-coverage-based test suites for each base program from its associated
test pool. 6 To create each test suite T_i, we applied the following algorithm:
1. Initialize T_i to ∅.
2. While uncovered coverable branches remain,
(a) randomly select an element t from the test pool using a uniform distribution,
(b) execute P with t, recording coverage,
(c) add t to T_i if it covered branches in P not previously covered by the other test cases in T_i.
3. If T_i differs from all other previously generated test suites, keep T_i and increment i; otherwise, discard it and generate a new T_i.
Step 3 of the procedure ensures that there are no duplicate test suites for a program, although two different
test suites may have some test cases in common.
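A sketch of this generation procedure in Python appears below. It is our own illustration: the branch-coverage instrumentation is abstracted as a callable (branches_covered), and all names are hypothetical.

```python
import random
from typing import Callable, FrozenSet, List, Sequence, Set

def generate_suite(pool: Sequence[str],
                   branches_covered: Callable[[str], Set[str]],
                   coverable: Set[str],
                   rng: random.Random) -> FrozenSet[str]:
    """Build one branch-coverage-adequate suite by random selection from the pool.
    branches_covered(t) stands in for running P on t and recording coverage."""
    suite: List[str] = []
    covered: Set[str] = set()
    while not coverable.issubset(covered):          # step 2: uncovered branches remain
        t = rng.choice(pool)                        # step 2(a): uniform selection
        new = branches_covered(t) - covered         # step 2(b): record coverage
        if new:                                     # step 2(c): keep only useful tests
            suite.append(t)
            covered |= new
    return frozenset(suite)

def generate_unique_suites(pool, branches_covered, coverable, count=1000, seed=0):
    """Step 3: keep a suite only if it differs from all previously generated ones."""
    rng = random.Random(seed)
    suites: Set[FrozenSet[str]] = set()
    while len(suites) < count:                      # duplicates are silently discarded
        suites.add(generate_suite(pool, branches_covered, coverable, rng))
    return suites
```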
For both of our studies, we computed the cost measures (dependent variables) for both DejaVu and
TestTube, and we compared this information to test selection predictions computed using the RW predictor.
To gather this information, we considered each base program P with each modified version P i and each test
suite T j . For each P and each T j , we computed the following:
• PS^DejaVu_j, the percentage of test cases of T_j that the RW predictor predicts will be selected by DejaVu when an arbitrary change is made to P;
• PS^TestTube_j, the percentage of test cases of T_j that the RW predictor predicts will be selected by TestTube when an arbitrary change is made to P;
• AS^DejaVu_{i,j}, the percentage of test cases of T_j actually selected by DejaVu for the changes made to create P_i from P; and
• AS^TestTube_{i,j}, the percentage of test cases of T_j actually selected by TestTube for the changes made to create P_i from P.
Finally, we used these values to evaluate the accuracy of the RW predictor, as described in detail in the
following sections.
3.3.1 Study 1
The goal of our first study was to determine the accuracy, on average, of the RW predictor for the subject
programs, modified versions, and test suites for each of the selective regression testing approaches we con-
sidered. We therefore used the regression test selection information described above to compute the average
6 Because our studies focused on the cost-effectiveness of selective regression testing methods rather than the fault-detecting
effectiveness of coverage criteria, the realism of the modifications made to the Siemens programs is not a significant issue. What
is more important is that they were made independently by people not involved in our study, thereby reducing the potential
for bias in our results.
percentages of test cases selected by DejaVu and TestTube over all versions P_i of P. For each P and each T_j, we computed the following:
A^DejaVu_j = ( Σ_{i=1}^{|versions of P|} AS^DejaVu_{i,j} ) / |versions of P|   (1)
A^TestTube_j = ( Σ_{i=1}^{|versions of P|} AS^TestTube_{i,j} ) / |versions of P|   (2)
The first step in our investigation was to see how much the percentage of test cases actually selected by
each of the methods differed from the predicted percentage of test cases. For this analysis, we needed the
following two additional pieces of data, which we computed for each P and each T_j:
D^DejaVu_j = PS^DejaVu_j − A^DejaVu_j   (3)
D^TestTube_j = PS^TestTube_j − A^TestTube_j   (4)
D^DejaVu_j and D^TestTube_j represent the deviations of the percentages of the test cases predicted by the RW predictor for T_j from the average of the actual percentages of test cases selected by the respective method for all versions P_i of P. Because it is possible for D^DejaVu_j and D^TestTube_j to lie anywhere in the range [-100, 100], we wanted to determine the ranges into which the values for D^DejaVu_j and D^TestTube_j actually fell. Thus, we rounded the values of D^DejaVu_j and D^TestTube_j to the nearest integer I, and computed, for each I such that -100 ≤ I ≤ 100, the percentage of the rounded D values with value I. For each P, using each of its D^DejaVu_j values, the result was a set H^DejaVu:
H^DejaVu = {(r, prd) | r is the range value, -100 ≤ r ≤ 100; prd is the percentage of rounded D^DejaVu_j values at r}   (5)
Similarly, for each P, using each of its D^TestTube_j values, the result was a set H^TestTube:
H^TestTube = {(r, prd) | r is the range value, -100 ≤ r ≤ 100; prd is the percentage of rounded D^TestTube_j values at r}   (6)
These sets essentially form a histogram of the deviation values. In the ideal case of perfect prediction, all
deviations would be zero, and therefore each graph would consist of the single point (0, 100%).
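A small sketch of how these deviations and histogram sets can be computed from the predicted and actual selection percentages is given below; the data layout and names are ours, for illustration only.

```python
# Illustrative computation of the Study-1 deviations (Eqs. (3)-(4)) and
# histograms (Eqs. (5)-(6)) from predicted/selected percentages.
from collections import Counter
from typing import Dict, List, Tuple

def deviations(predicted: Dict[int, float],
               selected: Dict[Tuple[int, int], float],
               versions: List[int]) -> Dict[int, float]:
    """D_j: predicted percentage for suite j minus the average actual
    percentage selected over all versions of the program."""
    devs = {}
    for j, pred in predicted.items():
        avg = sum(selected[(i, j)] for i in versions) / len(versions)
        devs[j] = pred - avg
    return devs

def histogram(devs: Dict[int, float]) -> List[Tuple[int, float]]:
    """H: for each rounded deviation r, the percentage of test suites at r."""
    counts = Counter(round(d) for d in devs.values())
    total = len(devs)
    return sorted((r, 100.0 * c / total) for r, c in counts.items())
```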
3.3.2 Study 2
In Study 1, we treated the RW predictor as a general predictor in an attempt to determine how accurate
it is for predicting test selection percentages for all future versions of a program. In the earlier KornShell
study [23], it was determined that the relation covers_M(t, e) changes very little during maintenance. In
particular, Rosenblum and Weyuker found that the coverage relation was extraordinarily stable over the 31
versions of KornShell that they included in their study, with an average of only one-third of one percent of
the elements in the relation changing from version to version, and only two versions for which the amount
of change exceeded one percent. For this reason, Rosenblum and Weyuker argued that coverage information
from a single version might be usable to guide test selection over several subsequent new versions, thereby
saving the cost of redoing the coverage analysis on each new version.
However, in circumstances where the coverage relation is not stable, it may be desirable to make predictions
about whether or not test selection is likely to be cost-effective for a particular version, using
version-specific information. The goal of our second study was therefore to examine the accuracy of the RW
predictor as a version-specific predictor for our subject programs, modified versions, and test suites. The
intuition is that in some cases it might be important to utilize information that is known about the specific
changes made to produce a particular version.
We considered each base program P , with each modified version P i and test suite T j , as we had done in
Study 1, except that we did not compute averages over the percentages of test cases selected over all versions
of a program. Instead, the data sets for this study contain one deviation for each test suite and each version
of a program.
As in Study 1, the first step in our investigation was to see how much the percentage of test cases actually
selected by each of the methods differed from the predicted percentage of test cases. For this analysis, we
needed the following additional pieces of data, which we computed for each P, each P_i, and each T_j:
D^DejaVu_{i,j} = PS^DejaVu_j − AS^DejaVu_{i,j}   (7)
D^TestTube_{i,j} = PS^TestTube_j − AS^TestTube_{i,j}   (8)
D^DejaVu_{i,j} and D^TestTube_{i,j} represent the deviations of the percentages of the test cases predicted by the RW predictor for T_j from the actual percentages of test cases selected by the respective method for the versions P_i of P. As in Study 1, to determine the ranges into which the values for D^DejaVu_{i,j} and D^TestTube_{i,j} actually fell, we rounded the values of D^DejaVu_{i,j} and D^TestTube_{i,j} to the nearest integer I and computed, for each I such that -100 ≤ I ≤ 100, the percentage of the rounded D values with value I. For each P, using each of its D^DejaVu_{i,j} values, the result was a set H^DejaVu:
H^DejaVu = {(r, prd) | r is the range value, -100 ≤ r ≤ 100; prd is the percentage of rounded D^DejaVu_{i,j} values at r}   (9)
Similarly, for each P, using each of its D^TestTube_{i,j} values, the result was a set H^TestTube:
H^TestTube = {(r, prd) | r is the range value, -100 ≤ r ≤ 100; prd is the percentage of rounded D^TestTube_{i,j} values at r}   (10)
3.4 Threats to Validity
There are three types of potential threats to the validity of our studies: (1) threats to construct validity, which
concern our measurements of the constructs of interest (i.e., the phenomena underlying the independent and
dependent variables); (2) threats to internal validity, which concern our supposition of a causal relation
between the phenomena underlying the independent and dependent variables; and (3) threats to external
validity, which concern our ability to generalize our results.
3.4.1 Construct Validity
Construct validity deals directly with the issue of whether or not we are measuring what we purport to be
measuring. The RW predictor relies directly on coverage information. It is true that our measurements
of the coverage relation are highly accurate, but the coverage relation is certainly not the only possible
phenomenon that affects the cost-effectiveness of selective regression testing. Therefore, because this measure
only partially captures that potential, we need to find other phenomena that we can measure for purposes
of prediction.
Furthermore, we have relied exclusively on the number of test cases selected as the measure of cost
reduction. Care must be taken in the counting of test cases deemed to be "selected", since there are other
reasons a test case may not be selected for execution (such as the testing personnel simply lacking the time
to run the test case). In addition, whereas this particular measure of cost reduction has been appropriate
for the subjects we have studied, there may be other testing situations for which the expense of a test lab
and testing personnel might be significant cost factors. In particular, the possibility of using spare cycles
might affect the decision of whether or not it is worthwhile to use a selective regression testing method at
all in order to eliminate test cases, and therefore whether or not a predictor is meaningful.
3.4.2 Internal Validity
The basic premises underlying Rosenblum and Weyuker's original predictor were that (1) the cost-effectiveness
of a selective regression testing method, and hence our ability to predict cost-effectiveness, are directly dependent
on the percentage of the test suite that the selective regression testing method chooses to run, and
that (2) this percentage in turn is directly dependent on the coverage relation. In this experiment we take
the first premise as an assumption, and we investigate whether the relation between percentage of tests
selected and coverage exists and is appropriate as a basis for prediction. The new data presented in this
paper reveal that coverage explains only part of the cost-effectiveness of a method and the behavior of the
RW predictor. Future studies should therefore attempt to identify the other factors that affect test selection
and cost effectiveness.
3.4.3 External Validity
The threats to external validity of our studies are centered around the issue of how representative the subjects
of our studies are. All of the subject programs in our new studies are small, and the sizes of the selected
test suites are small. This means that even a selected test suite whose size differs from the average or the
predicted value by one or two elements would produce a relatively large percentage difference. (The results
of Study 1 are therefore particularly interesting because they show small average deviations for most of the
subject programs.)
For the studies involving the Siemens programs, the test suites were chosen from the test pools using
branch coverage, which is much finer granularity than TestTube uses, suggesting that there is a potential
"mismatch" in granularity that may somehow skew the results. More generally, it is reasonable to ask
whether our results are dependent upon the method by which the test pools and test suites were generated,
and the way in which the programs and modifications were designed. We view the branch coverage suites
as being reasonable test suites that could be generated in practice, if coverage-based testing of the programs
were being performed. Of course there are many other ways that testers could and do select test cases, but
because the test suites we have studied are a type of suite that could be found in practice, results about
predictive power with respect to such test suites are valuable.
The fact that the faults were synthetic (in the sense that they were seeded into the Siemens programs)
may also affect our ability to investigate the extent to which change information can help us predict future
changes. In a later section we will introduce a new predictor that we call the weighted predictor. This
predictor depends on version-specific change information. Because it seemed likely that conclusions drawn
using synthetic changes would not necessarily hold for naturally occurring faults, we did not attempt to use
the Siemens programs and their faulty versions to empirically investigate the use of the weighted predictor.
Nevertheless, the predictor itself is not dependent on whether the changes are seeded or naturally occurring,
and thus our results provide useful data points.
4 Data and Results
4.1 Study 1
Figure 1 presents data for D^DejaVu_j (Equation (3)) and D^TestTube_j (Equation (4)), with one graph for each
subject program P ; the Appendix gives details of the computation of this data using one of the subject
programs, printtokens2, as an example.
Each graph contains a solid curve and a dashed curve. The solid curve consists of the connected set of
points H^DejaVu (Equation (5)), whereas the dashed curve consists of the connected set of points H^TestTube (Equation (6)). Points to the left of the "0" deviation label on the horizontal axes represent cases in which
the percentage of test cases predicted was less than the percentage of test cases selected by the tool, whereas
points to the right of the "0" represent cases in which the percentage of test cases predicted was greater
than the percentage of test cases selected by the tool. To facilitate display of the values, a logarithmic
transformation has been applied to the y axes. No smoothing algorithms were applied to the curves.
For all P and T j , the D DejaVu j
were in the range [-20,33] and the D TestTube j
were in the range [-24,28].
However, as we shall see in Figure 2, these ranges are a bit misleading because there are rarely any significant
number of values outside the range (-10,0] or [0,10), particularly for TestTube.
The graphs show that, for our subjects, the RW predictor was quite successful for both the DejaVu and
TestTube selection methods. The predictor was least successful for the printtokens2 program for which
it predicted an average of 23% more test cases than DejaVu actually selected. This was the only deviation
that exceeded 10% using the DejaVu approach. For schedule1, the prediction was roughly 9% high, on
average, compared to the DejaVu-selected test suite. DejaVu selected an average of roughly 10% more test
cases than predicted for schedule2, 7% more for totinfo, 7% more for tcas, 3% more for printtokens1,
and 4% fewer for replace than the RW predictor predicted.
For TestTube, the predictor also almost always predicted within 10% of the actual average number of test
cases that were actually selected. The only exception was for the totinfo program, for which the average
deviation was under 12%. For the other programs, the average deviations were 5% for the printtokens1
program, 5% for the printtokens2 program, 4% for the replace program, 7% for the tcas program, 10%
for schedule1 and 1% for schedule2. We consider these results encouraging, although not as successful as
the results described by Rosenblum and Weyuker for the KornShell case study. Recall that in that study
there were a total of 31 versions of KornShell, a large program with a very large user base, with all changes
made to fix real faults or modify functionality. None of the changes were made for the purpose of the study.
Another way to view the data for this study is to consider deviations of the predicted percentage from the
Figure 1: Deviation between predicted and actual test selection percentages for application of DejaVu and TestTube to the subject programs. The figure contains one graph for each subject program; the horizontal axis of each graph shows the deviation in predicted versus actual test selection, and the vertical axis shows the percentage of test suites for which a level of deviation occurred. In each graph, the solid curve represents deviations for DejaVu, and the dashed curve represents deviations for TestTube.
Figure 2: Absolute value deviation between predicted and actual test selection percentages for application
of DejaVu and TestTube to the subject programs. The figure contains two bars for each subject program:
the left bar of each pair represents the absolute value deviation of DejaVu results from the RW predictor,
and the right bar represents the absolute value deviation of TestTube results from the RW predictor. For
each program P , these numbers are averages over all versions P i of P . Each bar represents 100% of the
test suites T j , with shading used to indicate the percentage of test suites whose deviations fell within the
corresponding range.
actual percentage without considering whether the predicted percentage was greater or less than the actual
percentage selected. These deviations constitute the absolute value deviation. To compute the absolute value
deviation, we performed some additional computations:
For each P and each T_j, we first computed AbsD^DejaVu_j = |D^DejaVu_j| and AbsD^TestTube_j = |D^TestTube_j|. We then tabulated the percentage of the AbsD^DejaVu_j and the AbsD^TestTube_j values that fell in each of the ranges [0%, 10%), [10%, 20%), ..., [90%, 100%].
Figure
depicts these results as segmented bar graphs. The figure contains two bars for each subject
program: the left bar of each pair represents the absolute value deviation of DejaVu results from the RW
predictor, and the right bar represents the absolute value deviation of TestTube results from the RW predic-
tor. For each program P , these numbers are averages over all versions P i of P . Each bar represents 100% of
the test suites T j , with shading used to indicate the percentage of test suites whose deviations fell within the
corresponding range. For instance, in the case of printtokens2, 100% of the test suites showed less than
10% deviation for TestTube, whereas for DejaVu, 14% of the test suites showed deviations between 10% and
20%, 82% showed deviations between 20% and 30%, and 4% showed deviations between 30% and 40%.
The results of this study show that for many of the subject programs, modified versions, and test suites,
the absolute value deviation for both DejaVu and TestTube was less than 10%. In these cases, the RW
model explains a significant portion of the data. However, in a few cases, the absolute value deviation was
significant. For example, as mentioned above, for printtokens2, the absolute value deviation from the
predictor for DejaVu was between 20% and 30% for more than 80% of the versions.
One additional feature of the data displayed in Figure 1 bears discussion. For all programs other than
printtokens2, the curves that represent deviations for DejaVu and TestTube are (relatively) close to one
another. For printtokens2, in contrast, the two curves are disjoint and (relatively) widely separated.
Examination of the code coverage data and locations of modifications for the programs reveals reasons for
this difference.
Sixteen of the nineteen printtokens2 functions are executed by a large percentage (on average over
95%) of the test cases in the program's test pool; the remaining three functions are executed by much lower
percentages (between 20% and 50%) of the test cases in that test pool. All modifications of printtokens2
occur in the sixteen functions that are executed by nearly all test cases. Thus, the actual test selections by
TestTube, on average, include most test cases. The presence of the latter three functions, and the small
number of test cases that reach them, however, causes a reduction in the average number of test cases per
function, and causes the function-level predictor to under-predict by between 0% and 10% the number of
test cases selected by TestTube.
Even though nearly all test cases enter nearly all functions in printtokens2, several of these functions
contain branches that significantly partition the paths taken by test cases that enter the functions. Thus,
many of the statements in printtokens2 are actually executed by fewer than 50% of the test cases that enter
their enclosing functions. When modifications occur in these less-frequently executed statements, DejaVu
selects much smaller test suites than TestTube. (For further empirical comparison of TestTube and DejaVu,
see [22].) This is the case for approximately half of the modified versions of printtokens2 utilized in this
study. However, the presence of a large number of statements that are executed by a larger proportion of the
test cases causes the average number of test cases per statement to exceed the number of test cases through
modified statements. The end result is that the statement-level predictor over-predicts the number of test
cases selected by DejaVu by between 5% and 27%. Of course, the precise locations of modifications in the
subjects directly affect the results. Therefore, the fact that all changes were synthetic is of concern when
trying to generalize these results.
4.2 Study 2
Like Figure 1, Figure 3 contains one graph for each subject program. The graphs also use the same notation
as was used in Figure 1, using a solid curve to represent the percentage of occurrences of D DejaVu i;j over
deviations for all test suites T j and using a dashed curve to represent the percentage of occurrences of
D TestTube i;j over deviations for all test suites T j . Again, a logarithmic transformation has been applied to
the y axes. Figure 4 depicts these results as a segmented bar graph, in the manner of Figure 2.
The results of this study show that, for the subject programs, modified versions, and test cases, the
deviations and absolute value deviations for individual versions for both DejaVu and TestTube are much
greater than in Study 1. This is not surprising because in this study the results are not averaged over all
versions as they were in Study 1. For example, consider tcas, printtokens1, and replace. In Study 1, the
average absolute value deviation from the predicted percentage for each of these programs is less than 10%
using either DejaVu or TestTube. However, when individual versions are considered, the percentage of test
(Figure 3 graphs, one per subject program; legend: horizontal axes - deviation in predicted versus actual test selection; vertical axes - percentage of test suites for which a level of deviation occurred; solid curve - DejaVu results; dashed curve - TestTube results.)
Figure
3: Version-specific absolute value deviation between predicted and actual test selection percentages
for application of DejaVu and TestTube to the subject programs. The figure contains one graph for each
subject program. In each graph, the solid curve represents deviations for DejaVu, and the dashed curve
represents deviations for TestTube.
(Figure 4 bar chart: version-specific absolute value deviation between predicted and actual test selection percentages for tcas, schedule1, schedule2, totinfo, printtokens2, printtokens1, and replace, with shading indicating deviation ranges such as [10%,20%) and [20%,30%).)
Figure
4: Version-specific absolute value deviation between predicted and actual test selection percentages
for application of DejaVu and TestTube to the subject programs. The figure contains two bars for each
subject program: the left bar of each pair represents the absolute value deviation of DejaVu results from the
RW predictor, and the right bar represents the absolute value deviation of TestTube results from the RW
predictor. For each program P , these numbers are averages over all versions P i of P . Each bar represents
100% of the test suites T j , with shading used to indicate the percentage of test suites whose deviations fell
within the corresponding range.
cases selected by DejaVu for these programs varies significantly, up to 64%, from the percentages predicted.
Deviations and absolute value deviations for the other subjects show similar differences. In Figure 3, the
range of deviations can be seen. In most cases there are at least a few versions that have a small number
of instances for which the deviations are significant. The bar graphs in Figure 4 show more clearly how
frequently these large absolute value deviations occur.
In
Figure
3, the data for printtokens2 is again particularly interesting. In this case, the curve for
TestTube is peaked and relatively narrow, whereas the curve for DejaVu is nearly flat and relatively wide.
As discussed in the preceding section, for both techniques, these differences reflect differences in the degree
of variance in the coverage relations at the statement and function level, as well as differences in the location
of modifications. In this case, however, considering prediction on a version-specific basis causes the deviation
in prediction at the statement level, where the variance in coverage is large, to be flat. Lack of variance in
coverage at the function level prevents the TestTube curve from being flat.
5 Improved Predictors
As discussed in Section 3.4, the assumptions underlying our measurement of costs may pose threats to the
validity of our conclusions about predictions of cost-effectiveness using the RW predictor. Furthermore, in
some of the subject programs of our studies, there was significant absolute deviation of the results of the
selective regression testing tools (DejaVu and TestTube) with respect to test selection values predicted by the
RW predictor. Therefore, we believe that there may be factors affecting cost-effectiveness that are not being
captured by the RW predictor. These factors, if added to the model, could improve the accuracy of both
general and version-specific predictors. The RW predictor accounts for test coverage but does not account
for the locations of modifications. Therefore, one obvious refinement would be to incorporate information
about modifications into the predictor. We saw in Study 2 that the specific changes made to create a
particular version may have significant effects on the accuracy of prediction in practice. Thus, we believe
that an extended weighted predictor might be more accurate for both general and version-specific prediction.
Such a predictor would incorporate information about the locations of the changes and weight the predictor
accordingly.
To this end, in this section we extend the RW predictor by adding weights that represent the relative
frequency of changes to the covered entities. For each element e j , the weight w j is the relative frequency with
which e j is modified, and it is defined such that the w j sum to 1. The original, unweighted RW model, discussed
in Section 2, computes the expected number of test cases that would be rerun if a single change is made
to a program. To do this, the model uses the average number of test cases that cover each covered entity e j .
This average is referred to as N C M . The weighted analogue of N C M is a weighted average, WN C M ,
which we define as follows:
WN C M = \sum_j ( w_j * \sum_i C_{i,j} )
where C i;j is defined as before:
C_{i,j} = 1 if covers_M(i, j), and 0 otherwise.
Note that the inner sum represents the total number of test cases that cover e j ; multiplying that sum by
w j gives e j 's weighted contribution to the total number of test cases selected overall.
For this weighted average, the fraction of the test suite T that must be rerun, denoted by \Pi M , is given
as follows:
\Pi M = WN C M / |T|
Note that the original, unweighted RW predictor, \pi M , is a version of this weighted predictor in which w j
is 1/k for every covered entity e j , where k is the number of covered entities
(which represents an assumption that each entity is equally likely to be changed):
\pi M = N C M / |T|
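As an illustration of the computation just defined (not code from the studies; the coverage matrix, the weights, and all identifiers below are invented), a short Python sketch of WN C M and \Pi M :

def weighted_predictor(C, w):
    """Return (WN_C_M, Pi_M) for a coverage matrix C and change-frequency weights w.

    C[i][j] == 1 iff test case i covers entity j; the weights w are assumed to sum to 1.
    """
    num_tests = len(C)
    # Inner sum of the definition: number of test cases that cover entity j.
    tests_through = [sum(C[i][j] for i in range(num_tests)) for j in range(len(w))]
    wn = sum(w_j * n_j for w_j, n_j in zip(w, tests_through))
    return wn, wn / num_tests

def unweighted_predictor(C, num_covered_entities):
    # Original RW assumption: every covered entity is equally likely to be changed.
    w = [1.0 / num_covered_entities] * num_covered_entities
    return weighted_predictor(C, w)[1]

# Hypothetical 4-test, 3-entity coverage relation.
C = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
print(weighted_predictor(C, [0.5, 0.5, 0.0]))  # changes confined to entities 0 and 1
print(unweighted_predictor(C, 3))              # uniform-change assumption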
Figure 5: Coverage Pattern A.
Figure 6: Coverage Pattern B.
To see the impact of the difference between \pi M and \Pi M , consider coverage patterns A and B, shown
respectively in Figures 5 and 6, where each dot represents an entity and each closed curve a test case.
Assume that the patterns are generalized over a large number of entities, n. As discussed by Rosenblum and
Weyuker [23], the value of \pi M predicted for each pattern is 2/n. In Pattern A, the test cases are distributed
evenly over the entities, and thus, \Pi M and \pi M are the same, and yield the exact number of test cases that
would be selected by either DejaVu or TestTube, regardless of the relative frequency of changes (and hence,
regardless of the values assigned to the w j ).
In Pattern B, the test cases are not distributed evenly over the entities, and in contrast with Pattern
A, the RW predictor never predicts the exact fraction selected for any changed entity, and it is significantly
inaccurate for a change to the "core" element of that pattern. Suppose, however, that instead of assuming
that the frequency of change is equal for all entities, we had information about the relative frequency of
modifications to individual entities. In this case, using the weighted predictor, we could compute a more
accurate estimate of the fraction of the test suite that would be selected. For example, if we knew that
changes are always made to two of the non-core entities (with one changed exactly as often as the other)
and that no other entities are ever changed, then the weights would be 1/2 for the two changed entities and
0 for all other entities. And thus, we would predict that (for the case of a single entity change) 1/n of the
test suite would be selected, rather than 2/n as predicted by the unweighted predictor.
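The 2/n versus 1/n contrast can be checked numerically. The sketch below assumes a Pattern-B-like coverage in which one "core" entity is covered by every test case and each remaining entity by exactly one test case; this construction is an assumption chosen to reproduce the figures quoted above, not necessarily the exact pattern of Figure 6.

# Hypothetical Pattern-B-like coverage: n test cases and n entities; entity 0 is
# the "core" entity covered by every test case, entity j > 0 only by test case j.
n = 10
C = [[1 if (j == 0 or j == i) else 0 for j in range(n)] for i in range(n)]

def predicted_fraction(C, w):
    tests_through = [sum(row[j] for row in C) for j in range(len(w))]
    return sum(wj * tj for wj, tj in zip(w, tests_through)) / len(C)

uniform = [1.0 / n] * n                   # unweighted RW assumption (every entity equally likely)
two_noncore = [0.0] * n
two_noncore[1] = two_noncore[2] = 0.5     # changes always hit two specific non-core entities

print(predicted_fraction(C, uniform))      # 0.19, close to 2/n
print(predicted_fraction(C, two_noncore))  # 0.10, exactly 1/n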
5.1 Improved General Prediction
Provided we can obtain values for weights that accurately model the distribution of future modifications
to a program, we can use the weighted predictor, \Pi M , to improve general prediction. One approach is
to utilize change history information about the program, often available from configuration management
systems. Assuming that the change histories do accurately model the pattern of future modifications (a
result suggested by the work of Harrison and Cook [11]), we can use this information to compute weights
for \Pi M . If the change histories are recorded at the module level, \Pi M can be used as well to predict the
percentage of test cases selected on average by a tool, such as TestTube, that considers module-level changes
to the system. If the change histories are recorded at the statement level, \Pi M can be used to predict the
percentage of test cases selected on average by a tool, such as DejaVu, that considers statement-level changes
to the system. In either case, the weighted predictor can be used to incorporate data that may account for
change-location information, without performing full change analysis. Thus, it can be used to assess whether
it will be worthwhile to perform all of the analysis needed by a selective regression testing tool.
In practice, weights may be collected and assumed to be fixed over a number of subsequent versions of
a program, or they may be adjusted as change history information becomes available. In this context, an
important consideration involves the extent to which weights collected at a particular time in the history of
a program can continue to predict values for future versions of that program, and the extent to which the
accuracy of predictions based on those weights may decrease over time. Future empirical study of this issue
is necessary.
5.2 Improved Version-Specific Prediction
We can also use the weighted predictor, \Pi M , as a version-specific predictor. For this version-specific predictor,
one approach computes the w j using the configuration management system. We assign a weight of 1/k to
each entity that has been changed (where k is the total number of entities changed), and we assign a weight
of 0 to all other entities in the system. Using these weights, \Pi M computes the exact percentage of test cases
that will be selected by a test selection tool that selects at the granularity of the entities. For example, if the
entities are modules, then \Pi M will predict the exact percentage of test cases that will be selected by a test
selection tool, such as TestTube, that considers changes at the module level. If the entities are statements,
then \Pi M will predict the exact percentage of test cases that will be selected by a test selection tool, such
as DejaVu, that considers changes at the statement level. If the cost of determining the number of test
cases that will be selected is cheaper than the cost of actually selecting the test cases, this approach can be
cost-effective.
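As a sketch of this version-specific use (the coverage data and entity indices are invented for illustration), the weights follow directly from the set of changed entities:

def version_specific_fraction(C, changed):
    # Weights: 1/k for each of the k changed entities, 0 for all others.
    k = len(changed)
    w = [1.0 / k if j in changed else 0.0 for j in range(len(C[0]))]
    tests_through = [sum(row[j] for row in C) for j in range(len(w))]
    return sum(wj * tj for wj, tj in zip(w, tests_through)) / len(C)

# Hypothetical coverage: 5 test cases over 4 entities.
C = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [1, 0, 0, 1],
     [1, 0, 0, 0]]
print(version_specific_fraction(C, {2}))     # single changed entity: 2/5 of the suite
print(version_specific_fraction(C, {0, 3}))  # two changed entities: weighted average, (3 + 2)/2 / 5 = 0.5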
It is worth noting that Rosenblum and Weyuker found, in their experiments with KornShell, that it was
typically not necessary to recompute the coverage relation frequently, because it remained very stable over
the 31 versions they studied. If this is typical of the system under test, then this should make version-specific
predictors extremely efficient to use and therefore provide valuable information about whether or not the
use of a selective regression testing strategy is likely to be cost-effective for the current version of the system
under test.
An alternative approach assumes that method M can be supplemented with an additional change analysis
capability that is more efficient but less precise than M 's change analysis. This supplementary change analysis
is used during the critical phase of regression testing - after all modifications have been made to create P',
the new version of P. 7 The results of the supplementary change analysis can be used to assign weights to
the entities in the system, which are then used for prediction as described above.
Using the weighted predictor, \Pi M , as a version-specific predictor will be especially appropriate for test
suites whose test cases are not evenly distributed across the entities, such as the case illustrated by Pattern
B, where test selection results for specific versions may differ widely from average test selection results over
a sequence of versions.
6 Conclusions
In this paper we presented results from new empirical studies that were designed to evaluate the effectiveness
and accuracy of the Rosenblum-Weyuker (RW) model for predicting cost-effectiveness of a selective regression
testing method. The RW model was originally framed solely in terms of code coverage information, and
evaluated empirically using the TestTube method and a sequence of 31 versions of KornShell. In the new
studies, two selective regression testing methods were used (TestTube and DejaVu), and seven different
programs were used as subjects. For the experimental subjects we used in the new studies, the original
RW model frequently predicted the average, overall effectiveness of the two test selection techniques with
an accuracy that we believe is acceptable given that the cost assumptions underlying the RW model are
quite realistic for our subjects. However, the predictions of the model occasionally deviated significantly
from observed test selection results. Moreover, when this model was applied to the problem of predicting
test selection results for particular modified versions of the subject programs, its predictive power decreased
substantially, particularly for DejaVu. These results suggest that the distribution of modifications made to
a program can play a significant role in determining the accuracy of a predictive model of test selection. We
therefore conclude that to achieve improved accuracy both in general, and when applied in a version-specific
manner, prediction models must account for both code coverage and modification distribution.
In response to this result, we showed how to extend the Rosenblum-Weyuker predictor to incorporate
information on the distribution of modifications. However, to judge the efficacy of this extended predictive
model in practice, we require additional experimentation. For this purpose, the subjects used in the studies
reported in this paper will not suffice. Rather, we require versions of a program that form a succession of
changes over their base versions, as the versions of KornShell did. We are currently building a repository
of such programs and versions that, when complete, will provide subjects suitable for further empirical
investigation of predictive models for regression testing in general, and of our weighted predictor in particular.
Future studies must be directed not only toward further validation of the RW predictor and the improved
predictors described in this paper, but toward the development of a more realistic cost model for regression
testing. This will require extensive field studies of existing large systems in order to create a better picture
of the different factors driving cost-effectiveness, such as test suite size, test case execution times, testing
personnel costs, and the availability of spare machine cycles for regression testing.
7 Rothermel and Harrold divide regression testing into two phases for the purpose of cost analysis. During the preliminary
phase, changes are made to the software, and the new version of the software is built. During the critical phase, the new version
of the software is tested prior to its release to customers [25].
Test suite   T1     T2     T3     T4     T5     T6     T7    ...   T1000
Version 4    41.7   18.2   25.0   30.0   38.5   37.5   36.4  ...   35.7
Version 5     8.3    9.1   12.5   10.0    7.7   12.5    9.1  ...    7.1
Version 6    16.7   45.5   25.0   20.0   30.8   25.0   18.2  ...   14.3
Version 7    41.7   18.2   25.0   30.0   38.5   37.5   36.4  ...   35.7
Version 8    66.7   81.8   62.5   70.0   76.9   75.0   81.8  ...   71.4
Version 9    41.7   18.2   25.0   30.0   38.5   37.5   36.4  ...   35.7
33.3
Table 2: Partial data used in the computation of graphs in Figure 1.
Acknowledgments
This work was supported in part by a grant from Microsoft, Inc., by NSF under NYI award CCR-9696157
to Ohio State University, CAREER award CCR-9703108 to Oregon State University, and Experimental
Software Systems award CCR-9707792 to Ohio State University and Oregon State University, and by an
Ohio State University Research Foundation Seed Grant. Thanks to Huiyu Wang who performed some of
the experiments, to Panickos Palletas and Qiang Wang who advised us on the statistical analysis, to Jim
Jones who performed some of the statistical analysis, and to Monica Hutchins and Tom Ostrand of Siemens
Corporate Research for supplying the subject programs and other data necessary for the experimentation.
Appendix
Details of the Computation of Data for Figures 1
and 3
To compute the data used for the graphs in Figure 1, we used a procedure described in Section 3.3.1. As
further explanation, we give details of the computation of that data for one subject program, printtokens2.
For our experiments, we used 1000 coverage-based test suites, T 1 , ..., T 1000 . Table 2 shows data for a subset
of these test suites. For each test suite T j , we used the RW predictor to predict the
number of test cases that would be selected by DejaVu when an arbitrary change is made to printtokens2.
We then used this number to determine \pi DejaVu j , the percentage of test cases that the RW predictor predicts
will be selected by DejaVu when an arbitrary change is made to printtokens2. The first row of Table 2
gives these percentages for these test suites.
We had ten versions of printtokens2 (see Table 1). We next ran DejaVu on these ten versions, with each
of the 1000 test suites, and, for each version i and test suite j, recorded the number of test cases selected.
We then used this number to compute, for each i and j, S DejaVu i;j
, the percentage of test cases selected. The
ten rows for Versions 1-10 give these percentages. For example, from the table, we can see that,
for T 1 , S DejaVu i;1 ranges from 8.3% to 66.7%. Using Equation (1), we then computed the S DejaVu j for each
test suite T j . Table 2 gives these percentages for each T j .
We then used Equation (3) to compute, for each T j , the difference between the percentage predicted by
the RW predictor and the average percentage selected by DejaVu (i.e., D DejaVu j = \pi DejaVu j - S DejaVu j ).
Table 2 shows that, for these test suites, the D DejaVu j range from 18.9% to 26.1%.
Finally, we created the set, H DejaVu (Equation (5)). The ordered pairs in this set are obtained by first
rounding the percentages of the D DejaVu j
, then determining the number of those rounded percentages that
have range value r, -100 <= r <= 100, and then determining the percentage of those percentages that occur for
each value of r. Thus, for printtokens2, D DejaVu 6 rounds to 19, D DejaVu 3 and D DejaVu 7 round to 21,
another of the tabulated D DejaVu j rounds to 22, D DejaVu 4 rounds to 23, D DejaVu 1000 rounds to 24,
and D DejaVu 2 rounds to 26. Thus,
there will be ordered pairs in H DejaVu with first coordinates 19, 21, 22, 23, 24, and 26, and the number of
rounded percentages for T 1 :::T 1000 are used to compute the percentage of times (among the 1000 test suites)
each percentage occurs, which is then used in the computation of the second coordinates of the ordered pairs.
We used these ordered pairs to plot the solid curve for printtokens2 in Figure 1.
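The reduction just described can be summarized in a few lines of Python; the arrays below are small placeholders standing in for the predicted and per-version selected percentages, not the experimental data.

from collections import Counter

predicted = [20.0, 22.5, 25.0]     # pi_j for three hypothetical test suites
selected = [                       # S_{i,j}: one row per version, one column per suite
    [41.7, 18.2, 25.0],
    [8.3, 9.1, 12.5],
    [66.7, 81.8, 62.5],
]

avg_selected = [sum(col) / len(col) for col in zip(*selected)]   # average over versions
# Deviation per suite (sign convention as in the text: predicted minus selected).
deviations = [p - s for p, s in zip(predicted, avg_selected)]
counts = Counter(round(d) for d in deviations)
histogram = {r: 100.0 * c / len(deviations) for r, c in counts.items()}  # (r, percentage) pairs
print(histogram)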
We used a similar approach to obtain the data for Figure 3 except that we did not compute the averages
of the deviations. To compute the data used for the graphs in Figure 3, we used a procedure described
in Section 3.3.2. As further explanation, we give details of the computation of that data for one subject
program, printtokens2.
Table 3 shows data for a subset of the 1000 coverage-based test suites. For each test
suite T j , we used the RW predictor to predict the number of test cases that would be selected by DejaVu
when an arbitrary change is made to printtokens2. We then used this number to determine \pi DejaVu j , the
percentage of test cases that the RW predictor predicts will be selected by DejaVu when an arbitrary change
is made to printtokens2. The first row of Table 3 gives these percentages for these test suites.
Next, we ran DejaVu on the ten versions of printtokens2, and, for each version, recorded the number of
test cases selected. We then used this number to compute S DejaVu i;j , i = 1...10, the percentage of test cases
selected. The ten rows for Versions 1-10 in Table 3 give these percentages.
We then used Equation (7) to compute, for each version i and each T j , D DejaVu i;j
, the difference between
the percentage predicted by the RW predictor and the percentage selected by DejaVu. Table 3 shows that,
for these versions and test suites, the D DejaVu i;j percentages range from -26.6% to 52.5%.
Finally, we created the set, H DejaVu (Equation (9)). These ordered pairs are obtained by first rounding
the percentages of the D DejaVu i;j
, determining the number of those rounded percentages that have range value r, -100 <= r <= 100, and
then determining the percentage of those percentages that occur for each value of r.
For example, for printtokens2, D DejaVu 1;2 , D DejaVu 8;2 , and D DejaVu 8;6 round to -20, and D DejaVu 4;5 ,
D DejaVu 7;5 , D DejaVu 4;7 , D DejaVu 7;7 , D DejaVu 9;7 , and D DejaVu 10;7 round to 20. Thus, H DejaVu contains ordered pairs
with -20 and 20 as the first coordinates, and the number of rounded percentages for T 1 ...T 1000 are used to
compute the percentage of times (among the 1000 * 10 test-suite/version pairs) each percentage occurs, and
used in the computation of the second coordinates of the ordered pairs.
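For the Figure 3 data the same steps are applied without averaging, keeping one deviation per (version, test suite) pair; again a minimal sketch with placeholder numbers:

from collections import Counter

predicted = [20.0, 22.5]               # pi_j for two hypothetical suites
selected = [[41.7, 18.2], [8.3, 9.1]]  # S_{i,j} for two hypothetical versions

# One deviation per (version, suite) pair (same sign convention as above).
deviations = [p - s for row in selected for p, s in zip(predicted, row)]
counts = Counter(round(d) for d in deviations)
histogram = {r: 100.0 * c / len(deviations) for r, c in counts.items()}
print(histogram)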
Test suite   T1     T2     T3     T4     T5     T6     T7    ...   T1000
S DejaVu i;j (percentage of test cases selected):
Version 4    41.7   18.2   25.0   30.0   38.5   37.5   36.4  ...   35.7
Version 5     8.3    9.1   12.5   10.0    7.7   12.5    9.1  ...    7.1
Version 6    16.7   45.5   25.0   20.0   30.8   25.0   18.2  ...   14.3
Version 7    41.7   18.2   25.0   30.0   38.5   37.5   36.4  ...   35.7
Version 8    66.7   81.8   62.5   70.0   76.9   75.0   81.8  ...   71.4
Version 9    41.7   18.2   25.0   30.0   38.5   37.5   36.4  ...   35.7
D DejaVu i;j (deviation between predicted and selected percentages):
Version 3    38.1   43.4   28.0   44.0   34.9   42.6   37.8  ...   33.4
Version 4    13.1   43.4   28.0   24.0   19.5   17.6   19.6  ...   19.1
Version 6    38.1   16.1   28.0   34.0   27.2   30.1   37.8  ...   40.5
Version 7    13.1   43.4   28.0   24.0   19.5   17.6   19.6  ...   19.1
Version 8   -11.9  -20.2   -9.5  -16.0  -18.9  -19.9  -25.8  ...  -16.6
Version 9    13.1   43.4   28.0   24.0   19.5   17.6   19.6  ...   19.1
Table 3: Partial data used in the computation of graphs in Figure 3.
We used this procedure to obtain the data for the rest of the graphs in Figure 3 for DejaVu, and used a
similar procedure to obtain the data for the graphs for TestTube.
--R
Incremental regression testing.
Automatic generation of test scripts from formal test specifications
On the limit of control flow analysis for regression test selection.
Incremental program testing using program dependence graphs.
Software Testing Techniques.
Semantics guided regression test cost reduction.
A system for selective regression testing.
A methodology for retesting modified software.
An approach to regression testing using slicing.
Insights on improving the maintenance process through software measurement
An incremental approach to unit testing during maintenance.
Techniques for selective revalidation.
Experiments on the effectiveness of dataflow- and controlflow-based test adequacy criteria
of program modifications and its applications in software maintenance
A methodology for test selection.
Insights into regression testing.
A study of integration testing and software regression at the integration level.
A cost model to compare regression test strategies.
The category-partition method for specifying and generating functional tests
Using dataflow analysis for regression testing.
An empirical comparison of regression test selection techniques.
Using coverage information to predict the cost-effectiveness of regression testing strategies
Empirical studies of a safe regression test selection technique.
Analyzing regression test selection techniques.
Logical modification oriented software testing.
An approach to software fault localization and revalidation based on incremental data flow analysis.
A regression test selection tool based on textual differencing.
A method for revalidating modified programs in the maintenance phase.
--TR
--CTR
David Notkin, Longitudinal program analysis, ACM SIGSOFT Software Engineering Notes, v.28 n.1, p.1-1, January
Alessandro Orso , Nanjuan Shi , Mary Jean Harrold, Scaling regression testing to large software systems, ACM SIGSOFT Software Engineering Notes, v.29 n.6, November 2004
Amitabh Srivastava , Jay Thiagarajan, Effectively prioritizing tests in development environment, ACM SIGSOFT Software Engineering Notes, v.27 n.4, July 2002
John Bible , Gregg Rothermel , David S. Rosenblum, A comparative study of coarse- and fine-grained safe regression test-selection techniques, ACM Transactions on Software Engineering and Methodology (TOSEM), v.10 n.2, p.149-183, April 2001
Hyunsook Do , Gregg Rothermel, An empirical study of regression testing techniques incorporating context and lifetime factors and improved cost-benefit models, Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering, November 05-11, 2006, Portland, Oregon, USA
Mary Jean Harrold, Testing: a roadmap, Proceedings of the Conference on The Future of Software Engineering, p.61-72, June 04-11, 2000, Limerick, Ireland | regression test selection;software maintenance;selective retest;regression testing |
368139 | A flexible message passing mechanism for objective VHDL. | When defining an object-oriented extension to VHDL, the necessary message passing is one of the most complex issues and has a large impact on the whole language. This paper identifies the requirements for message passing suited to model hardware and classifies different approaches. To allow abstract communication and reuse of protocols on system level, a new, flexible message passing mechanism proposed for Objective VHDL will be introduced. | Introduction
A (hardware) system can be described in the object-oriented
fashion as a set of interacting or communicating
(concurrent) objects. As a consequence of encapsulation, the
objects tend to be relatively autonomous and only loosely
coupled with their environment [13]. Further, an object
should contain most of the elements it needs to perform its
functionality. This property provides high potential to reuse
objects in other environments than the original one. On the
other hand, the specialization and structural decomposition
of objects requires communication among the objects.
(1. This work has been funded as part of the OMI-ESPRIT Project REQUEST under contract 20616.)
The
communication enables objects to use services of other ob-
jects, to inform other objects about something, or in concurrent
domain to synchronize with each other. Such a
communication mechanism is called message passing in
the object-oriented domain. The basic idea is that objects
are able to send and receive messages to provide or get
some information. Of course, it is desirable that a message
passing mechanism preserves as much as possible of the
loose coupling of the object with its environment in order
to obtain universally reusable objects.
Generally, for the communication of concurrent be-
haviours/processes two mechanisms can be used [6]. The
first is communication using shared memory and the second
the message passing via channels or communication
pathways. For the communication of concurrent objects the
latter choice seems to be more appropriate because passing
messages is one of the basic concepts of the object-oriented
paradigm.
Figure
1 shows an abstract picture of message passing.
Object X in the role of a client needs a service of object Y
in the role of a server. To invoke the corresponding method
of Y, X sends a message via a communication pathway to
Y. The message exchange is controlled by a protocol. After
Y has received the message, the correct method has to be
invoked. This functionality is provided by a dispatcher. As
indicated by the figure we will consider the dispatching as
part of the message passing mechanism. After execution of
the method return values-if any-are replied.
Figure 1: Message passing (Object X and Object Y, each with attributes and methods; Object X sends a message over a communication pathway, governed by a protocol, to invoke a method of Object Y, and return values are passed back).
In the sequential domain often the terms message passing
and method call are used as synonyms. But in concurrent
domain both terms have different semantics. While
with the method call the invoked method needs the computational
thread of the caller to execute the method, invocation
of a method via message passing does not need the
computational thread of the caller because the target object
can use its own.
3 Aspects of message passing
In this chapter several aspects of message passing will
be considered separately. This consideration shows on the
one hand the design space for message passing and on the
other hand the large impact to the whole language.
3.1 Consistency to language
Message passing is an integral part of an object-oriented
language. The relationship between message passing
and the other parts of the language is bilateral. While it is
indispensable that message passing is consistent to the other
object-oriented concepts, these concepts can be used for
the specification of message passing. If, for example, a language
embodies different encapsulation concepts (classes,
objects), message passing has to take this into account - at
least its implementation. A class concept based on abstract
data types needs another realization of the message passing
compared to a class concept which represents (structural)
hardware-components. The message passing mechanism
proposed for Objective VHDL in this paper will give an example
how the object-oriented concepts like classification,
inheritance, and polymorphism can be used to implement
it.
Another desired feature of message passing is consistency
with the techniques of object-oriented modelling.
This means consistency in terms of refinement and extensi-
bility. If a model is extended by components which need a
more refined or different way for communication, the message
passing mechanism must be adaptable and by object-oriented
means.
Further, consistency means that the abstraction level of
message passing fits to the abstraction levels the whole language
is intended for. The programming interface should
be abstract and easy to use. For example, the appropriate
encapsulation of the transmission of messages by send/re-
ceive methods.
Abstraction and encapsulation, however, shouldn't be
considered in isolation because they have large impact on
other aspects of message passing (e.g. flexibility, simula-
tion/synthesis).
3.2 Communication pathways
Passing a message from one object to another and performing
the communication protocol requires a communication
pathway which interconnects the objects. If
communication is restricted to 1:1 relation 2 the target object
can be identified directly by the communication path-
way. But the abstraction and definition of communication
pathways differs significantly in literature.
In case a method call has the semantics of a procedure/
function call, the procedure/function call mechanism and
the name of the target object/method can abstractly be considered
as the communication pathway.
If the communicating objects represent hardware com-
ponents, in VHDL terms-entities, another representation
of communication pathways is necessary. In [14] some
kind of identifier for an entity object, called a handle which
can be exchanged among entities, is proposed. If an object
has the handle of another object, the handle can be used to
address the other object to pass a message. So the handle
can be considered as the communication pathway. This solution
is abstract because this communication pathways
have no direct physical representation and flexible because
it allows to establish communication pathways during runt-
ime. Generally, it allows to send messages to objects which
are dynamically generated during runtime. The dynamic
generation of objects, however, might be a powerful feature
for system design, but it is really far away from hardware.
Another solution proposed in [12][9][11] is to use the
VHDL mechanism to exchange data between components,
i.e. to interconnect components by signals. From the send-
er's point of view the target object can be addressed by the
port which connects both objects. Of course, this approach
isn't as flexible as the handle solution to address compo-
nent-objects, but it is very close to hardware.
Although signals can be used for communication, there
is still a gap between the abstraction provided by VHDL
signals and communication in object-oriented sense.
3.3 Protocol
In software, sending a message to an object has the semantics
of a procedure or function call 3 . Results can be given
back by assignments [12]. In hardware message passing
among concurrent components/objects needs specialized
protocols. Of course, the necessity of protocols results in a
much tighter relation between the objects than it is desired
by object-oriented paradigm (cf. Chapter 2). Sending a
message needs the knowledge and the ability to perform the
target object's communication protocol.
2. A 1:1 relation not necessarily means a point to point communication
because an object can consist of concurrent processes.
3. Without consideration of distributed programs.
Figure
2: Encapsulation of an object by protocol
From a communication point of view the protocol can
be seen as the encapsulation of an object (Figure 2). In the
following subchapters several aspects of a protocol for
message passing will be illuminated
3.3.1 Abstraction
Since the object-oriented paradigm addresses modelling
on higher abstraction levels, the details of the communication
protocol should be encapsulated and the
specialization of the protocol has to correspond with the ab-
straction. The protocol should be applied through an abstract
interface.
3.3.2 Flexibility
A universal message passing mechanism for hardware
design needs the possibility to integrate different protocols
for message exchange. An abstract model of an MMU on
system level may need another protocol than a simple register
on RT level. Further, in perspective of a top down design
methodology, it should be possible to refine a protocol
towards more detail according to the description level. The
same need occurs if co-simulation of abstract models together
with already synthesized models is desired.
The flexibility, however, to choose or refine a new protocol
is at the expense of encapsulation of the protocol.
Even if the interface to use a protocol can be encapsulated
it is not possible to hide the protocol in the language completely.
3.3.3 Synchronization
In concurrent object-oriented domain objects need to
synchronize with each other to describe their behaviour dependent
on the state of other objects.
We would like to differ three synchronization modes:
. synchronous
. asynchronous
. data-driven
With synchronous communication the sender object
needs to wait until a receiver object is ready to receive a
message. In most synchronous communication mechanisms
[7] the sender object/process is blocked from the moment
of sending a message request until the service which
is intended to be invoked by the message is finished and the
results are given back.
With asynchronous communication the sender does
not wait for the readiness of the server object to receive a
message. In order to avoid the loss of messages, this mode
requires queuing of messages within the communication
pathway or the receiver. An additional advantage of such
queues is the potential flexibility to dequeue messages in
another order than FIFO. On the other hand the message
queues can have large impact on simulation and synthesis
aspects. Generally, asynchronous communication is non-blocking
[3]. So the sender object can continue its computations
directly after sending. The return of results requires
to send explicitly a message from the server to the client.
The data-driven synchronization [5] allows a sender
object to run its computations until the results of a previously
sent message request are needed. In this case the
sender has to wait until these results will be provided by the
server object. In literature the initial sending of a message
request is described to be asynchronously [3]. However, a
synchronous (but non-blocking!) sending of a message request
is also conceivable.
In summary, the data-driven synchronization allows
more flexibility than standard synchronous/asynchronous
communication.
3.3.4 Concurrency
To be consistent with (Objective) VHDL, a message
passing mechanism must preserve and support the concurrency
provided by (Objective) VHDL.
The relation between communication and concurrency
is ambivalent. On the one hand concurrent objects/process-
es are the reason for the necessity of message passing, on
the other hand the message passing mechanism can restrict
the concurrency. If an object contains only one process
(dispatcher process) which receives requests and dispatches
them, all requests will be sequentialized.
For concurrent object-oriented languages it is expected
that the objects can have own activity and can run in paral-
lel. But besides the concurrency of parallel running objects
there can be concurrency inside the objects if they contain
concurrent processes. This intra-object concurrency may
allow an object to execute requests for services in parallel.
For example, a dual-ported RAM allows concurrent
read and write operations. This can be modelled with concurrent
dispatching processes. However, allowing parallel
method execution raises the potential problem of nondeterministic
behaviour of the object, due to concurrent access
to the same attributes. But VHDL already provides mechanisms
to solve concurrent access to signals and variables.
(Figure 2 diagram: the object with its methods and dispatching, encapsulated by the protocol on its communication pathways.)
pathways
Concurrent (write) access to signals can be handled by resolution
functions. With variables the proposed protected
types can be used [15]. So at least atomic access to variables
can be ensured. But the problem with nondeterminism
is still unsolved because the value of a shared variable depends
on the activation order of the accessing processes.
A possibility to avoid nondeterminism is to ensure that
concurrent methods have only access to exclusive at-
tributes. This can be implemented by grouping all methods
which have access to the same attributes. Each group gets
its own dispatching process and maybe a queue. So only
methods without access conflicts can be executed in parallel.
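The grouping idea can be illustrated with a small, language-neutral Python sketch (all method and attribute names are invented): methods are merged into the same dispatcher group whenever their attribute sets intersect, so only conflict-free methods remain candidates for parallel execution.

def group_methods(access):
    """Partition methods into dispatcher groups; access maps method -> set of attributes."""
    parent = {m: m for m in access}
    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]
            m = parent[m]
        return m
    methods = list(access)
    for i, a in enumerate(methods):
        for b in methods[i + 1:]:
            if access[a] & access[b]:          # shared attribute -> must share a dispatcher
                parent[find(a)] = find(b)
    groups = {}
    for m in methods:
        groups.setdefault(find(m), []).append(m)
    return list(groups.values())

print(group_methods({"read_a": {"a"}, "write_a": {"a"}, "read_b": {"b"}}))
# -> [['read_a', 'write_a'], ['read_b']]: two groups, hence two dispatching processes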
3.4 Simulation/Synthesis efficiency
Simulation efficiency and synthesizability are general
aspects for the quality of object-oriented extensions of a
HDL. These are being influenced by implementation decisions
of message passing/protocol and communication frequency
between objects. A complex protocol enriched with
a lot of detailed timing information and unlimited message
buffers to avoid the loss of messages may decrease the simulation
speed significantly. Moreover, without limitation of
maximal buffer size the protocol is not translatable into
hardware.
4 Classification scheme
In the above chapters some of the aspects a message
passing mechanism for an object-oriented HDL has to deal
with have been proposed and discussed. These can be used
to define a classification scheme for different message
passing mechanisms.
The first criterion for classification is the flexibility of
the message passing mechanism. The following cases will
be distinguished:
. flexible: different protocols are possible and can be refined
. fixed: the protocol is not modifiable.
. semi-flexible: one protocol, with potential for refinement
or different not-refinable protocols.
The second criterion is the ability of an object to accept and
perform several requests concurrently. It should be only
distinguished between:
. yes: it is possible.
. no: it is not possible.
The third criterion is the synchronization of the message
passing. It can be:
. blocking: the sender is always blocked until the return
values are received.
. non-blocking: the sender continues execution after
sending a request. Maybe at a certain point of execution
he has to wait for the results.
. both: depending on the kind of message or on the kind
of method invocation (method call vs. message pass-
ing, cf. Introduction) blocking or non blocking communication
is possible.
The last criterion is whether there are queues to buffer messages
if the receiver is busy. It will be distinguished between
. no: no queues are provided.
. one: one queue per object.
. many: more than one queues per object.
5 Other OO-VHDL approaches
During the last years several proposals for an object-oriented
extension to VHDL have been made
[1][2][4][12][14]. All the proposals need to define message
passing to be object-oriented.
Because not all of the proposals can be considered
here, two typical but completely different proposals are selected
to describe their message passing mechanisms. Because
this cannot be done isolated, it is necessary to
introduce the core concepts of the proposals up to a certain
degree. But it is outside the scope of the paper to draw a
complete picture of the special proposals.
The Vista approach [14] introduces a new design unit
called Entity Object (EO) which is based on the VHDL entity
and it's architecture. In addition to the entity the EO
may contain method specifications called operation speci-
fications. Operations are similar to procedures, but they are
visible outside the EO. In contrast to procedures operations
can have a priority and a specified minimal execution time.
For invocation of operations the EOs are not interconnected
with explicit communication pathways (signals).
To address an EO, each EO has an accompanying handle
which is a new predefined type that can be stored in signals
or variables and can be part of composite types.
Handles can be exchanged to make the corresponding EO
addressable for other objects. A special handle to address
an object itself (self) is predefined. The parent class can be
addressed by the new keyword 'super'. A message send request
is performed by a send statement which includes the
handle of the target EO, the name of the operation, and the
parameter values.
Each EO has one queue to buffer incoming messages.
The messages in the queue are dequeued by their priority.
Messages with the same priority are dequeued by FIFO.
Table 1: Classification scheme - flexibility: {flexible, fixed, semi-flexible}; parallel methods (per object): {yes, no}; synchronization: {blocking, non-blocking, both}; queues (per object): {no, one, many}.
One queue per EO means that concurrent requests are sequentialized. If an EO needs to invoke an own operation
(send self), this request will not be queued. It will be treated
like a procedure call and immediately executed. This mechanism
avoids deadlocks with recursive method calls.
Messages can be of blocking (default) or non-blocking
mode (immediate). But immediate messages are restricted
to have in-parameters only. The blocking mode cannot be
changed during inheritance.
For EO synchronization a rendezvous concept is pro-
vided. Accept and select statements similar to Ada are used
to establish a rendezvous.
Finally, it should be remarked that the proposed message
passing mechanism is neither synthesizable nor is it intended
to be. The abstract concepts of dynamic
communication pathways represented by the handle concept
and the unlimited message buffers have no counterpart
in hardware.
Another approach was developed at Oldenburg University
[12]. It is based on the VHDL type concept. Records
are used to represent the objects. To allow a record to be expandable
by inheritance, it is marked as a tagged record.
The corresponding methods, which are simple procedures,
must be defined in the same design unit (package). Because
the methods cannot be formally encapsulated by the object
it belongs to, the parameter list of each method contains a
parameter of the object's type which assigns the method to
the object. For each tagged record a corresponding class-wide
type exists, which is the union of all types derived
from the tagged record. With the attribute 'CLASS the
class-wide type can be referenced. Polymorphism is based
on the class-wide types. If a method is called with an actual
of class-wide type (mode in, inout), the actual type is determined
during runtime and the correct method will be invoked
Inter-process communication can be modelled consistently
with VHDL by signals. Abstraction and expandability
are supported by use of polymorphic signals (class-wide
type). Sending a message to another object has the semantics
of a procedure/function call or in our terminology a
method call. The requested method is executed by the sender
which is blocked until the end of the request. Several
methods of an object can be performed concurrently if they
are requested by different processes. But in case of concurrent
assignment to a signal (instantiation of a tagged
record), resolution functions are necessary.
Even if in [12] a special protocol mechanism is proposed,
other protocols can be integrated because the protocol
is not built into the language. The classification here
refers to the proposed master-slave protocol.
6 Objective VHDL
Objective VHDL is the object-oriented extension to
VHDL developed in the EC-Project REQUEST 4 [8][10].
Objective VHDL combines the structural object approach
[14] with the type object approach [12]. Both language
extensions have shown their suitability for hardware
design.
The structural objects are usual VHDL entities. Attributes
and methods of an entity class are declared within
the declarative part of the entity. They correspond to
VHDL object declarations (signals, shared-variables and
constants) and procedure or function declarations respec-
tively. The implementation of the methods follows in the
corresponding architecture. Single inheritance for entities
and architectures is provided but no polymorphism on entity
objects.
A type class consists of a declaration and a definition,
likewise. Class types are declared like usual types but the
declaration of attributes and methods are assembled between
the new 'is class' and `end class' constructs. The implementation
of the declared methods or private methods,
which are not declared in the interface, follows in the corresponding
class body. As well as for the entity classes, single
inheritance is provided for the type classes.
Each class type has an associated class-wide type,
marked by a new attribute 'CLASS. The class-wide type is
the union of the type itself and all derived subclasses. Similar
to the [12] approach class-wide types are used to realize
polymorphism on type classes. A variable or signal of
class-wide type T'CLASS can hold any instantiation of a
class which is derived from T or T itself.
Calling a method of a directly visible instantiation of a
class type has the semantics of a simple procedure/function
call (blocking). Calling a method of an entity object or a
type object instantiated in another entity is more difficult.
A method of an entity cannot be called directly because the
entity encapsulates the procedures/methods completely.
The only interface to the outer world are the ports and generics
of the entity. Breaking this encapsulation would
change VHDL. A solution to that problem, which was chosen
in [14], was to introduce a new design unit-the Entity
Object where the methods (operations) of an EO are visible
outside the EO. Due to the additional implementation costs
for a new design unit, this solution was discarded for Objective
VHDL.
4. Objective VHDL is defined in [9]. The Objective VHDL Language Reference Manual is intended as an extension to the VHDL LRM. Although the current status of Objective VHDL within the REQUEST project is stable, minor changes in the language are possible in the future.
Table 2: Classification of [14] - flexibility: fixed; parallel methods (per object): no; synchronization: both; queues (per object): one.
Table 3: Classification of [12] - flexibility: flexible; parallel methods (per object): (yes); synchronization: blocking; queues (per object): no.
6.1 Message passing mechanism
To provide the flexibility to use an appropriate communication
for message exchange, the message passing is
not fixed in the language definition. Nevertheless, a way to
implement a flexible message passing will be shown by usage
of the other object-oriented features. The main ideas
will be now described in more detail.
Basically, the message passing mechanism consists of
three parts:
. the communication structure which defines the connection
of objects with communication pathways (Fig-
ure 3),
. protocol and messages for exchange (Figure 4),
. dispatcher (Figure 3).
6.1.1 Communication structure
To enable communication between objects which are
represented by entities or type objects inside different proc-
esses, the communication pathways between the objects are
implemented by VHDL signals. Beside the signal which
carries the message additional signals for the protocol may
be necessary. To avoid resolution functions, the signals are
unidirectional. Consequently, this requires two opposite
communication pathways if the message request produces
return values. This physical connection is depicted in Figure
3.
Figure
3: Communication structure
So the target object of a communication can be addressed
by the port which connects the sender with the re-
ceiver. Further a communication pathway is restricted to
connect only two objects.
However, VHDL signals do not provide the abstraction
expected from object-oriented message passing. To enable
the connecting signals to provide the required
abstraction, they will be implemented as polymorphic type
classes. So such a signal gets the ability to hold different
messages and to encapsulate a communication protocol.
The messages and the communication protocol can be
modelled by a structure which is shown in Figure 4. In the
following, the main emphasis is given to the modelling of the
messages.
6.1.2 The messages and the protocol
The messages and the protocol are implemented and
encapsulated by an abstract type class ('message'). The
class is abstract because it is not intended to be instantiated
and it should serve as base class for the classes representing
real messages. Furthermore, the class provides the interface
which allows to apply the message passing among the ob-
jects. The interface is inherited by the derived classes and
declares methods like 'send' (a message), 'receive' (a message), or 'dispatch', etc.
Figure
4: Modelling of messages
To enable a communication pathway to hold all messages
to a target object, the type class 'message' is refined
by inheritance. Because (target) classes differ in the methods
their instantiations can receive, for each kind of (target)
object an abstract subclass is derived (Figure 4, 2nd stage).
This class implements no additional functionality; its purpose
is only to distinguish the different kinds of objects.
(Different objects which are instantiations of the same class
are represented only one time!)
(Figure 3 diagram: a sender entity object and a receiver entity object connected by two unidirectional communication pathways for message & protocol, implemented as signals of type message_to_receiver'CLASS; the receiver contains dispatching and the methods M1 and M2.)
(Figure 4 diagram: the class receiver with method M1 (X in .; Y inout .) and method M2 (Z inout .); the abstract class 'message' providing the general functionality (send, receive, dispatch, return_results, exec_method); the subclass message_to_receiver, which addresses this special class; and subclasses for the methods M1 and M2, which carry their parameters (Y, Z) and exec_method.)
Finally, a message must correspond to one method
which should be invoked and the message must contain the
parameters for method invocation. To allow this, each class
of the second stage of Figure 4 is refined with subclasses
representing the methods of the (target) object (Figure 4,
3rd stage). The subclasses are named by the methods and
the method parameters are represented as attributes.
A signal which is an instantiation of such a class can
hold a message corresponding to a special method and provides
the communication protocol if one was defined in the
superclasses.
A communication pathway, however, should be able to
hold all messages which can be sent to an object. By instantiation
of the communication pathway as signal of a class-wide
type belonging to the representation of a kind of object
Figure
4, 2nd stage) this ability is given to the signal.
Finally, each of the classes representing a method (Fig-
ure 4, third stage) implements a method ('exec_method'),
which invokes the corresponding method of the target ob-
ject. As mentioned before, the methods of an entity object
cannot be invoked directly because the entity encapsulates
its methods completely. Nevertheless, to allow this method
invocation a new construct 'for entity . end for' is intro-
duced. The semantics of the construct is to make the declarations
of an entity class visible inside a type class in order
to allow its invocation. The implementation of such a method
must be available wherever 'exec_method' is invoked
with an instance of the message class.
Results produced by the method execution can be sent
back by the same mechanism.
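The Objective VHDL declarations themselves are not reproduced here; as a language-neutral analogy only, the following Python sketch mirrors the three-stage class hierarchy of Figure 4 (the class and method names come from the figure, the bodies are invented).

class Message:
    """Stage 1: abstract 'message' class with the general send/dispatch interface."""
    def send(self, pathway):
        # A real implementation would first perform the communication protocol;
        # here the message is simply placed on the (signal-like) pathway.
        pathway.append(self)
    def exec_method(self, target):
        # Overridden in the stage-3 classes; invokes the corresponding method of the target.
        raise NotImplementedError

class MessageToReceiver(Message):
    """Stage 2: abstract class that only identifies the kind of target object."""
    pass

class M1Message(MessageToReceiver):
    """Stage 3: corresponds to method M1; the method parameters become attributes."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def exec_method(self, target):
        return target.m1(self.x, self.y)

class M2Message(MessageToReceiver):
    """Stage 3: corresponds to method M2."""
    def __init__(self, z):
        self.z = z
    def exec_method(self, target):
        return target.m2(self.z)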
6.1.3 Dispatching
Each entity object contains one or more dispatching
processes. These processes are sensitive to the ports which
carry the incoming messages and must be specified by the
user. To dispatch the incoming messages, functionality provided
by the class 'message' can be used if implemented
(method 'dispatch'). But basically, within the dispatcher
process only the method 'exec_method' of the received
message has to be invoked which calls the desired method
of the entity object. By the type of the received message it
can be decided during runtime which 'exec_method' has to
be called.
The number of the dispatching processes can be chosen
by the user and determines the number of concurrently
executable methods. If concurrent methods are allowed, the
user has to take care about conflicting concurrent access to
the attributes. To ensure atomic access to attributes, shared
variables with the protect mechanism (cf. Chapter
3.3.4)[15] can be used.
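Continuing the same analogy (again not Objective VHDL code), a dispatching process reduces to a loop that is sensitive to the incoming pathway and calls exec_method on whatever message arrives; the dynamic type of the message selects the method to invoke.

def dispatcher_process(pathway, target):
    # In VHDL terms this would be a process sensitive to the incoming port; here the
    # pathway is modelled as a simple queue of received messages.
    while pathway:
        message = pathway.pop(0)
        message.exec_method(target)   # runtime type of 'message' picks the method to invoke
        # Results, if any, would be returned over the opposite, outgoing pathway.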
6.1.4 Synchronization
Synchronization can be very flexible. Synchronous,
asynchronous, and data driven sychronization (c.f Chapter
3.3.3) can be modelled.
The proposed mechanism does not require that an object
is blocked after sending a message to a concurrent ob-
ject. But it should wait at least until it is ensured by the
protocol that the target object accepts the request. Of
course, additional synchronization is necessary if the results
of the request are needed.
For asynchon communication message queues have to
be integrated into the message passing mechanism.
6.2 Summary of message passing
The proposed message passing mechanism provides
easy means to send messages. The sender identifies the target
object through the port(s) by which they are connected. It
needs only to call the 'send' method of the class
'message' with the port names and the message itself as pa-
rameters. The 'send' method will perform the protocol and
assign the message to the interconnecting signal if the receiver
is ready to accept it. After sending the sender can be
sure the message is received and can continue its execution
until potential results of the request are needed. In this case
the sender is blocked until the results are available.
The receiving object possesses a dispatching process
which is sensitive to the connecting port signal. If a new
message arrives, the dispatching process performs the re-
ceiver's part of the communication protocol and invokes
the desired method. Of course, this functionality can be encapsulated by a 'receive' or 'dispatch' method of the class 'message'.
Potential results can be sent back either implicitly, immediately after their computation, or explicitly by a new communication.
Finally, it is important to note that the modelling of messages (cf. Chapter 6.1.2) is straightforward, so the effort to use this communication mechanism can be reduced significantly by a tool which produces the messages automatically.
6.3 Classification
According to the proposed classification scheme, the message passing mechanism can be classified as shown in Table 4. Different protocols can be defined, stored in a library, and refined if required. Concurrency of method execution depends only on the number of dispatching processes the user defines. In the proposed communication mechanism a method call is blocking, while passing a message can be either blocking or non-blocking. Queues to buffer messages and to allow asynchronous communication can be modelled in principle but have not yet been integrated into the communication mechanism.
It is therefore important to note that a particular implementation of the message passing mechanism may result in a slightly different classification.
7 Future work
A precompiler which translates Objective VHDL to
VHDL will be implemented. With the availability of this
tool the desired benefits of Objective VHDL and especially
the message passing mechanism can be evaluated.
The proposed message passing mechanism still has potential for improvement. For better protocol reuse, a stronger separation of the protocol from the messages would be useful. The modelling effort caused by the restriction to unidirectional communication pathways can be reduced if resolution functions for the connecting signals can be defined.
8 Conclusion
By analyzing and discussing the various aspects of abstract communication, the design space for message passing has been shown and a classification scheme for message passing mechanisms has been developed. The scheme has been applied to the message passing mechanisms of two of the currently most discussed proposals for object-oriented extensions to VHDL.
Finally, a new message passing mechanism developed for Objective VHDL was introduced and classified. The new approach especially targets flexibility of protocols, reuse of protocols, and consistency with VHDL concurrency.
--R
SUAVE: Painless Extensions for an Object-Oriented VHDL
Object Oriented Extensions to VHDL - The LaMI proposal
An Object-Oriented Model for Extensible Concurrent Systems: The Composition-Filters Approach
Concurrency And Reusability: From Sequential To Parallel
Specification and Design of Embedded Systems
Communicating Sequential Processes
Language Architecture Document on Objective VHDL
Inheritance Concept for signals in Object Oriented Extensions to VHDL
Shared Variable Language
--TR
Communicating sequential processes
Concurrency and reusability: from sequential to parallel
Real-time object-oriented modeling
Specification and design of embedded systems
Inheritance concept for signals in object-oriented extensions to VHDL
Object oriented extensions to VHDL, the LaMI proposal
--CTR
Cristina Barna , Wolfgang Rosenstiel, Object-oriented reuse methodology for VHDL, Proceedings of the conference on Design, automation and test in Europe, p.133-es, January 1999, Munich, Germany
Annette Bunker , Ganesh Gopalakrishnan , Sally A. Mckee, Formal hardware specification languages for protocol compliance verification, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.9 n.1, p.1-32, January 2004 | message passing;communication;object-oriented hardware modelling |
368247 | Optimal temporal partitioning and synthesis for reconfigurable architectures. | We develop a 0-1 non-linear programming (NLP) model for combined temporal partitioning and high-level synthesis from behavioral specifications destined to be implemented on reconfigurable processors. We present tight linearizations of the NLP model. We present effective variable selection heuristics for a branch and bound solution of the derived linear programming model. We show how tight linearizations combined with good variable selection techniques during branch and bound yield optimal results in relatively short execution times. | Introduction
Dynamically reconfigurable processors are becoming
increasingly viable with the advent of modern field-programmable
devices, especially the SRAM-based
FPGAs. Execution of hardware computations using
reconfigurable processors necessitates a temporal partitioning
of the specification. Temporal partitioning
divides the specification into a number of specification
segments that are destined to be executed one after another
on the target processor. While the processor is
being reconfigured between executing the specification
segments, results to be carried from one segment to a
future segment must be stored in a memory. Reconfiguration
time itself, along with the time it takes to save
and restore the active data, is considered an overhead;
it is desirable to minimize the number of reconfiguration
steps, ie., the number of segments resulting from
temporal partitioning, as well as the total amount of
data to be stored and restored during the course of
execution of the specification.
In addition to the traditional synthesis process, a
temporal partitioning step must be undertaken to implement
hardware computations using reconfigurable
processors. While techniques exist to partition and
synthesize behavior level descriptions for spatial partitioning
on multiple chips, no previous attempts have
been published for temporal partitioning combined
with behavioral synthesis.
The paper presents a 0-1 non-linear programming
(NLP) formulation of the combined problem of temporal
partitioning with the scheduling, functional unit allocation, and functional unit binding steps of high-level synthesis. The objective of the formulation is to minimize
the communication, that is, the total amount of data transferred among the partition segments, so that the reconfiguration costs are minimized. (This work is supported in part by the US Air Force, Wright Laboratory, WPAFB, under contract number F33615-97-C-1043.)
We propose compact linearizations of the NLP formulation
to transform it into a mixed integer 0-1 linear
programming (LP) model. We show the effectiveness
of our linearizations through experimentation. In ad-
dition, we develop efficient heuristics to select the best
candidate variables upon which to branch-and-bound
while solving the LP model. Again, we show the effectiveness
of our heuristics through experimental results.
The paper is organized as follows. In Section 2 we discuss related work, and in Section 3 we provide an overview of our approach and present the basic formulation; its solution is discussed in Section 4. Experimental results are presented in Sections 7 and 9.
2 Previous Work
Synthesis for reconfigurable architectures involves
synthesis and partitioning. The partitioning can be
both temporal and spatial. There has been significant
research on spatial partitioning and synthesis, though
the issue of temporal partitioning has been largely ig-
nored. Early research [11, 12] in the synthesis domain
solved the spatial partitioning problem independently
from the scheduling and allocation subproblems.
The problem of simultaneous spatial partitioning
and synthesis was first formulated as an IP by Gebotys
in [1]. It produced synthesized designs which were
10% faster than previous research. However as mentioned
by Gebotys in [2], the size of the model was too
large and could solve only small examples. In both
[1, 2] the dimension of the problem is reduced by not
considering the binding problem. Though the model
can handle pipelined functional units and functional
units whose latency is greater than one cycle, it
cannot handle design explorations where two different
types of functional units can implement the same op-
eration. For example, we cannot explore the possibility
of using a non-pipelined and a pipelined multiplier
in the same design. Also heuristics were proposed to
assign entire critical paths to partitions. This might
lead to solutions that are not globally optimal. Both
[1, 2] focus on synthesis for ASICs and hence attempt
to minimize area. For reconfigurable processors based
on FPGA technology, area (resources on the FPGA
devices) should be treated as a constraint which must
be satisfied by every temporal segment in the temporal
partition.
Figure 1: Behavioral Specification (task graph; each task node expands into an operation graph, e.g., the operation graph of T2).
The work of Niemann [3] presents an IP-based
methodology for hardware/software partitioning of
codesign systems. MULTIPAR [4] also has a non-linear
0-1 model for spatial partitioning and synthesis, without
involving binding. The linearization technique is
however not the tightest for such a formulation, (see
section 4 for details). To achieve faster runtime they
solved the formulation by heuristic techniques, leading
to suboptimal results.
Since a functional unit is not explicitly modeled in
the above [1, 2, 4] formulations, they cannot determine
the 'actual' area utilization of a partition. This
factor though not critical for ASIC designs, is critical
for reconfigurable processors based on FPGA technol-
ogy. For example, in an optimal solution, a temporal
segment may contain 1 multiplier and 5 adders and
another temporal segment may contain 2 multipliers
and 2 adders. Our formulation, which explicitly models
binding of operations to functional units, can determine
whether a functional unit is actually being used in
any temporal segment. Moreover, we explicitly model
the usage of each functional unit in each temporal segment
and hence can explore the design space using 5
adders and 2 multipliers, although all these functional
units may not simultaneously fit on the processor resources
(function generators or CLBs in the FPGAs).
Our model can automatically determine the above optimal
solution, that simultaneously meets the resource
constraints at the FPGA resource level as well as at
the functional unit level.
3 System Specification
The behavior specification is captured in the form of
a Task Graph, as shown in Figure 1. The vertices
in the graph denote a set of tasks, T . The dependencies
among the tasks are represented by directed
edges. Each task can be visualized as being composed
of a number of operations which should stay together
in one temporal partition. The edge labels in the task
graph represent the amount of communication required
if the two tasks connected by an edge are placed in different
partitions. Let I be the set of all operations in
the specification.
The cost metrics of the target FPGA, FPGA resource
capacity (C) and temporary on-board memory
size (M s ) are the inputs to our system. Typical resources
for an FPGA are combinational logic blocks
and function generators.
Figure 2: Flow of the Temporal Partitioning and Synthesis system (the behavior specification, FPGA cost metrics, and a characterized component library feed a heuristic temporal partition estimator and a preprocessing step; the model is then formulated as an NLP, linearized to an ILP, and solved by an LP solver, iterating on the number of partitions until a solution is found).
We assume a component library consisting of various functional units which can
execute the operations in the specification. The components
in the library are characterized by cost metrics
in terms of delay times and FPGA resource requirements
An outline of the temporal partitioning system is
shown in Figure 2. The system proceeds by first heuristically
estimating the number of segments (N), which
becomes an upper bound on the number of temporal
segments in the NLP formulation. It uses a fast, heuristic
list scheduling technique to estimate the number of
segments.
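The estimator itself is not detailed here; as a simplified, hedged stand-in, the following sketch greedily packs tasks in topological order into capacity-bounded segments and returns the resulting count as the bound N. The area model, function names, and packing rule are assumptions for illustration only.

from collections import deque

def estimate_num_segments(tasks, edges, area, capacity):
    # Topological order of the task graph.
    succ = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for (u, v) in edges:
        succ[u].append(v)
        indeg[v] += 1
    order, ready = [], deque(t for t in tasks if indeg[t] == 0)
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    # Greedy packing: open a new temporal segment whenever the estimated
    # area of the current segment would exceed the FPGA capacity.
    segments, used = 1, 0
    for t in order:
        if used + area[t] > capacity:
            segments += 1
            used = 0
        used += area[t]
    return segments

# Example: three tasks in a chain, estimated areas 4, 5 and 4, capacity 10.
print(estimate_num_segments(["T1", "T2", "T3"],
                            [("T1", "T2"), ("T2", "T3")],
                            {"T1": 4, "T2": 5, "T3": 4}, 10))   # -> 2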
Before the problem can be formulated, we need to
determine the ASAP and ALAP schedules for the op-
erations. These will be used to set the mobility ranges
for the operations in the formulation. This is done in a
preprocessing step over the combined operation graph
of the specification. From this schedule we can also
determine the set of functional units F , which must be
used for the design exploration.
While partitioning the specification, we honor the
task boundaries in the specification. That is, a task
cannot be split across two temporal segments. How-
ever, if two different tasks are together on the same
segment, then they share control steps and functional
units among them. If it is desired to permit splitting
of tasks across segments, then each operation in the
specification may be modeled as a task in our system.
In this case, each task would have only one operation
associated with it. The entire formulation developed
in this paper will work correctly. Formally, the system is defined as follows:
- A directed edge between tasks t_i and t_j exists in the task graph; t_i is at the tail of the directed edge and t_j is at the head when the execution of task t_j depends on some output of t_i.
- A directed edge between operations i_1 and i_2 exists, analogously, when operation i_2 depends on an output of i_1.
- The number of data units to be communicated between tasks t_i and t_j (the edge label in the task graph).
- Op(t), the set of all the operations in the operation graph of a task t.
- F, the set of functional units required for the most parallel schedule of the operation graph.
- Fu(i), the set of functional units from F on which operation i can execute.
- The set of operations which can execute on functional unit k.
- CS(i), the set of control steps over which operation i can be scheduled; it ranges from ASAP(i) to ALAP(i) plus the user-specified relaxation over the maximum ALAP for the schedule. ASAP(i) and ALAP(i) are the As Soon As Possible and As Late As Possible control steps for operation i.
- The set of operations which can be scheduled in control step j.
- N, an upper bound on the number of partitions. The partitions are indexed 1 through N, and the index of a partition specifies the order of execution of the partitions. Note that the generated optimal solution may have fewer than N partitions.
- M_s, the scratch memory available for storage between partitions.
- FG(k), the number of function generators used for functional unit k, obtained from the characterized component library.
- C, the resource capacity of the FPGA.
Non-Linear 0-1 Model
In this section we describe the variables, constraints
and cost function used in the formulation of our model.
3.1 Variables
We have three sets of decision variables, which model the three important properties: y_tp models the partitioning at the task level, x_ijk models the synthesis subproblem at the operation level, and w_{p,t1,t2} models the communication cost incurred if two tasks connected to each other are not placed in the same partition. All are 0-1 variables:
- y_tp = 1 if task t is placed in partition p, and 0 otherwise.
- x_ijk = 1 if operation i in I is placed in control step j and uses functional unit k, and 0 otherwise.
- w_{p,t1,t2} = 1 if task t_1 is placed in a partition before p and task t_2 is placed in partition p or any later partition, and 0 otherwise.
As will be seen, y_tp and x_ijk are the fundamental system modeling variables. All other variables are secondary and are constrained in terms of the fundamental variables.
3.2 Temporal Partitioning
The variables y tp model the partitioning behavior
of the system. Temporal partitioning has the following
constraints.
Uniqueness Constraint: Each task should be placed in exactly one partition among the N temporal partitions.
Temporal Order Constraint: Because we are partitioning over time, a task t_1 on which another task t_2 is dependent cannot be placed in a later partition than the partition in which task t_2 is placed; it has to be placed either in the same partition as t_2 or in an earlier one.
Scratch Memory Constraint: The amount of intermediate data stored between partitions should be less than the scratch-pad memory M_s. The variable w_{p,t1,t2}, if 1, signifies that t_1 and t_2 have a data dependency and are placed across temporal partition p; the data being communicated between t_1 and t_2 therefore have to be stored in the scratch memory of partition p. The sum of all the data being communicated across a partition should be less than the scratch-pad memory.
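A plausible LaTeX formulation of the three partitioning constraints described above is given below; it is a reconstruction and may differ in indexing from the original equations (1)-(3). Here E_t denotes the set of task-graph edges and d(t_1, t_2) the edge label (data volume), both symbols being introduced for this restatement.

\sum_{p=1}^{N} y_{tp} = 1 \qquad \forall\, t \in T
y_{t_2 p_1} + \sum_{p_1 < p_2 \le N} y_{t_1 p_2} \le 1 \qquad \forall\, (t_1, t_2) \in E_t,\ \forall\, p_1
\sum_{(t_1, t_2) \in E_t} w_{p\, t_1 t_2} \cdot d(t_1, t_2) \le M_s \qquad \forall\, p \in \{2, \dots, N\}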
Notice in Figure 3 how the variable w_{p,t1,t2} models not only the communication among tasks which are on adjacent temporal partitions, but also among those on non-adjacent ones. Three tasks are to be placed optimally on 3 partitions. On the left of the figure are the original equations used to model the constraints for the example; the equations on the right of the figure show the variables which will be 1 in the mapping of tasks to partitions shown and the constraints which have to be satisfied. The partition index p of w_{p,t1,t2} is shown to range from 2 to N, because the data at partition 1 is actually the external input to the system; we can reasonably assume that there is enough memory for the external input at any time, since this is known a priori and is not a function of the partitioning of the system. Each w_{p,t1,t2} is a 0-1 variable constrained, by equations (4) and (5), in terms of non-linear products of the y variables.
Figure 3: The memory constraints to be satisfied if tasks are mapped to partitions as shown.
Equation (4) constrains w_{p,t1,t2} to take the value 1 if any of the non-linear product terms in the associated equations is 1. Equation (5) constrains w_{p,t1,t2} to the value zero when all the associated product terms are 0. Equation (4) alone does not stop w_{p,t1,t2} from taking the value 1 when all the product terms are 0, because 1 >= 0 is a valid solution to constraint (4).
3.3 Synthesis
For the sake of clarity and ease of understanding, we do not describe in the synthesis equations the formulations for pipelining, chaining, and latency of functional units. The formulation is easily extensible to incorporate those features, as described in [6]. We assume for the current model that the latency
of each functional unit is one control step, and the
result of an operation is available at the end of the
control step.
Unique Operation Assignment Constraint:
Each operation should be scheduled at one control step
and on only one functional unit; therefore, exactly one variable x_ijk per operation i will be 1.
Temporal Mapping Constraint: This constraint
prevents more than one operation from being scheduled
at the same control step on the same functional unit.
Dependency Constraint: To maintain the dependency
relationship between operations, an operation i 1
whose output is necessary for operation i 2 , should not
be assigned a later control step than the control step
to which i 2 is assigned.
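The equations (6)-(8) themselves are not reproduced here; a hedged reconstruction consistent with the descriptions above, and with the unit-latency assumption, is:

\sum_{j \in CS(i)} \sum_{k \in Fu(i)} x_{ijk} = 1 \qquad \forall\, i \in I
\sum_{i} x_{ijk} \le 1 \qquad \forall\, j,\ \forall\, k \in F
\sum_{j,k} j \cdot x_{i_1 j k} + 1 \le \sum_{j,k} j \cdot x_{i_2 j k} \qquad \text{for every dependency edge } (i_1, i_2)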
3.4 Combining Partitioning and Synthesis
It is essential for partitioning for an FPGA to meet the area constraints of the FPGA. It is not necessary that all the functional units in F used in the design exploration are finally used in a partition. To determine whether a functional unit has been used in a partition, we define the following decision variables:
- u_pk = 1 if functional unit k in F is used in partition p to perform some operation, and 0 otherwise.
- A task-level usage variable that is 1 if task t uses functional unit k in F to perform some operation, and 0 otherwise.
These variables are constrained by equations (9) and (10) in terms of products of y_tp with the task-level usage variables. The variable u_pk defines the functional-unit usage in a partition, and the task-level variable defines the functional-unit usage in a task. These variables are also secondary and are defined in terms of the fundamental modeling variables y_tp and x_ijk. Equation (9) constrains u_pk to take the value 1 when any of its associated non-linear terms is 1. Equation (10) is needed to make sure that at least one task on that particular partition uses the functional unit. The derivation of the constraints for the task-level usage variable is described in Section 4.
Resource Constraints: We introduce resource constraints in terms of the variables u_pk. Typical FPGA resources include function generators, combinational logic blocks (CLBs), etc. Similar equations can be added if multiple resource types exist in the FPGA. Here alpha is a user-defined logic-optimization factor in the range 0-1; typical values of alpha using Synopsys FPGA components are in the range 0.6-0.8 [5].
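Equation (11) can plausibly be written as follows (a reconstruction, not necessarily the exact original form):

\sum_{k \in F} FG(k) \cdot u_{pk} \le \alpha \cdot C \qquad \forall\, p \in \{1, \dots, N\}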
Unique Control Step Constraint: We introduce constraints to make sure that each control step is mapped uniquely to a temporal segment. We use a new derived variable c_tj, which is 1 if some operation of task t is mapped to control step j; it is defined in terms of the x_ijk variables in equation (12). If the operations of two distinct tasks use the same control steps, then these tasks should be on the same partition.
In this paper, we have not considered flip-flop resource
constraints. To consider flip-flop resources, the
formulation must estimate the number of registers necessary
to synthesize the design. It is straightforward to add register optimization to our formulation along the lines proposed by Gebotys et al. [6].
Cost Function
The objective is to minimize the cost of data transfer between temporal partitions; this cost function yields an optimal solution using the least number of partitions and the least amount of inter-partition data transfer.
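A plausible form of the cost function (14), using the edge labels d(t_1, t_2) introduced earlier, is:

\min \; \sum_{p=2}^{N} \; \sum_{(t_1, t_2) \in E_t} w_{p\, t_1 t_2} \cdot d(t_1, t_2)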
4 Solving the 0-1 Non-linear Model
We can use various solution techniques in the mathematical
programming field, for solving models which
are non-linear in their objective function and con-
straints. The main approaches are (1) Linearization
methods, (2) Enumeration methods, (3) Cutting plane
methods. Refer to [8] for an interesting survey of all
the approaches. Due to the existence of a good set
of linearization techniques and the easy availability of
LP codes for solving linear models we have chosen the
linearization technique, though enumerative methods
and cutting plane methods are viable alternatives.
Linearization of Equations (9)-(10): For each non-linear product term of the form a.b, we generate a new 0-1 variable c = a.b, which can be written in Fortet's linearization method [8] as a pair of linear constraints, (15) and (16). Constraint (15) forces c to 1 when both a and b are 1, and constraint (16) forces c to 0 when either a or b is 0; we need both constraints to get the correct solution. (In Fortet's method, each distinct product of variables is replaced by a new 0-1 variable and these constraints are added.)
However, Glover and Wolsey [9] proposed an improvement: by defining c as a continuous real-valued variable (with an upper bound of 1) instead of an integer variable, equation (16) can be replaced by the following two constraints, while equation (15) is retained intact:
a >= c (17)
and b >= c (18)
Glover's linearization has been shown to be tighter than Fortet's; this has also been borne out by our experiments. Equations (9) and (10) can now be replaced by the compact linearized equations (19)-(21), where z_ptk is a continuous real-valued variable bounded between 0 and 1 (0 <= z_ptk <= 1). Constraint (19) implies z_ptk = 1 when both of its defining variables are 1, and constraints (20) and (21) imply z_ptk = 0 when either y_tp or the task-level usage variable is 0. We need all three constraints to get the correct solution.
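For reference, a hedged restatement of the two linearizations of a generic 0-1 product c = a.b, and of their application to z_ptk, is given below. The task-level usage variable is written here as v_tk, a notation introduced for this restatement.

% Fortet (c a 0-1 variable):
c \ge a + b - 1, \qquad a + b \ge 2c
% Glover (c continuous, 0 \le c \le 1):
c \ge a + b - 1, \qquad c \le a, \qquad c \le b
% Applied to z_{ptk} \approx y_{tp} \cdot v_{tk}:
z_{ptk} \ge y_{tp} + v_{tk} - 1, \qquad z_{ptk} \le y_{tp}, \qquad z_{ptk} \le v_{tk}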
Linearization of Equations (4)-(5): This linearization can be done exactly as for equations (9)-(10), by defining a new continuous variable for each distinct non-linear term. As will be shown shortly, however, in our final formulation we use a smaller number of variables in linearizing w_{p,t1,t2}.
Formulating constraints for the task-level usage variable: For any task t and functional unit k, the variable is 1 if any of the x_ijk variables denoting the synthesis of the operations of task t on functional unit k is 1. Consider a small example in which a few x_ijk variables are the synthesis variables for task t_1 and functional unit k; the usage variable then equals the logical OR of these variables (logical operations and products being interchangeable for 0-1 terms). Applying Glover's linearization to this definition, we get the corresponding linear constraints.
Table 1: Some preliminary results (columns: Graph No., N, A+M+S, L, Var, Const, Run-Time).
Based on the above discussion, we obtain the corresponding linear constraints for the task-level usage variables; a sketch of how such constraints can be generated is given below.
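To illustrate how such linearized constraints can be generated mechanically for an LP solver, the sketch below emits Glover-style constraints for a usage variable defined as the OR of a set of 0-1 synthesis variables. The variable names and the constraint representation are assumptions, not the authors' implementation.

def linearize_or(v_name, x_names):
    # Linearize v = OR(x_1, ..., x_n) for 0-1 variables x_i with the bounds
    # v >= x_i (for each i) and v <= sum_i x_i.  Constraints are returned
    # as (coefficients, sense, right-hand side) triples.
    constraints = []
    for x in x_names:
        # v - x_i >= 0: v must be 1 whenever any x_i is 1.
        constraints.append(({v_name: 1.0, x: -1.0}, ">=", 0.0))
    # v - sum_i x_i <= 0: v must be 0 when all x_i are 0.
    row = {v_name: 1.0}
    for x in x_names:
        row[x] = row.get(x, 0.0) - 1.0
    constraints.append((row, "<=", 0.0))
    return constraints

# Example: task t1 uses functional unit k iff any of three synthesis
# variables x_{i,j,k} is 1.
for c in linearize_or("v_t1_k", ["x_1_2_k", "x_2_3_k", "x_3_4_k"]):
    print(c)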
5 Preliminary Results
To verify our formulation by experimentation, we
generated various random graphs. We ran some experiments
with the current formulation to see how it
performs. However, as we see in Table 1, only one of the formulations solved in reasonable time, even though the graphs used in these experiments are not very large; the others were terminated when their run times became too large. Refer to Table 4 for the actual sizes of the graphs in this experiment. In the result tables, N denotes the number of partitions and A, M, S the numbers of adders, multipliers and subtracters, respectively, used in the design exploration. L is the user-specified latency margin bound. Var and Const denote the numbers of variables and constraints, respectively, generated for each graph by the ILP formulation. The run times are in seconds, and all experiments were run on an UltraSparc machine running at 175 MHz. We use lp_solve, a public-domain LP solver [10], for solving our ILP formulation.
6 Additional constraints that tighten
the model further
When an ILP problem is solved by using LP tech-
niques, we are solving the LP relaxation of the model.
The LP relaxation is the same as the ILP, except the
0-1 constraints on the variables are dropped. The
LP relaxation has a much larger feasible region, and
the ILP's feasible region lies within it. An important
method of reducing the solution time is to modify the
formulation with constraints which cut away some of
the LP feasible region, without changing the feasible
region of the ILP. This is called tightening the LP
model. After studying the model carefully we could
identify some more constraints which cut off a large
amount of non-optimal integer and non-integer solutions
and helped in reducing the time required to solve
the model.
- If a task t_1 is placed on some partition p and the edge (t_1, t_2) exists, then t_2 can only be placed in partition p or greater, and therefore the pair cannot contribute to the scratch memory of any partition less than or equal to p.
- If a task t_2 is placed on some partition p and the edge (t_1, t_2) exists, then t_1 can only be placed in partition p or lower, and therefore the pair cannot contribute to the scratch memory of any partition greater than p.
- If the edge (t_1, t_2) exists and both tasks are in the same partition, then they cannot contribute to the scratch memory of any of the partitions (a hedged formulation of these cuts is sketched after this list).
Figure 4: Equations for variable w for 2 tasks and 4 partitions.
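One plausible way to express the cuts described in the bullets above, for a data-dependent task pair (t_1, t_2), is given below; this is a reconstruction and may differ from the exact inequalities (28)-(30):

y_{t_1 p} + w_{p'\, t_1 t_2} \le 1 \qquad \forall\, p' \le p
y_{t_2 p} + w_{p'\, t_1 t_2} \le 1 \qquad \forall\, p' > p
y_{t_1 p} + y_{t_2 p} + w_{p'\, t_1 t_2} \le 2 \qquad \forall\, p'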
In our formulation we have not explicitly introduced a new term for each distinct product term, as we did for equations (9) and (10); instead, we have linearized w_{p,t1,t2} with a smaller set of constraints. Fewer new variables are introduced in this linearization because a single new variable reflects the constraints of several non-linear product terms.
Table 2: Results of tightening the constraints (columns: Graph No., N, A+M+S, L, Var, Const, Run-Time).
The main limitation of the above linearization is that, though w_{p,t1,t2} will take the value 1 if one or more of its constituent products is 1, it will also take the value 1 even if none of the constituent products
is 1. This is because there is no constraint which limits the value of w_{p,t1,t2} to 0 if all its constituent product terms are 0. If the new linear term appears in a minimizing cost function, then such a solution will be cut off after being generated, since the objective value of the cost function is greater in this case. This is true in our case, as w_{p,t1,t2} is part of the cost function. However, as we shall see, the new tightened inequalities will actually limit the solution space: w_{p,t1,t2} will never be 1 if all of its constituent non-linear terms are 0. Equations (28), (29) and (30) eliminate the problem
mentioned above. We will show by an example how this is possible. In Figure 4, two tasks t_1 and t_2 are to be placed in four partitions, and the variable w_312 has the equations shown. Consider three placements in which none of the 4 product terms shown in the figure is 1; in each of them the variable w_312 still can never be 1: in the first case a solution with w_312 = 1 is cut off by equation (29), in the second case it is cut off by equation (28), and in the third case it is cut off by equation (30).
- Another constraint we observed, which dramatically reduced the solution time, is that if a task t uses a functional unit k and is placed in partition p, then the associated u_pk variable should also reflect this (equations (31) and (32)).
Equations (1), (2), (3), (6), (7), (8), (11), (12), (13),
(19), (20), (21), (22), (23), (26), (27), (28), (29), (30),
(31) and (32) are the constraints and Equation (14) is
the cost function in our final model.
7 Experimental Results for Tightened
Constraints
With the tightened constraints in place, we again
ran the same series of experiments as in Table 1 and
observed a significant improvement in the run times.
The examples in rows 1, 2 and 3 of Table 2 now solved, though the run times are still large for graphs of these sizes.
8 Solution by branch and bound
While solving a 0-1 LP by the branch and bound
technique, an active node of the branching tree (ini-
tially the given mixed 0-1 problem), is chosen and its
LP relaxation is solved, and a fractional variable, if any
is chosen to branch on. If the variable is a 0-1 variable,
one branch sets the variable to 0 and the other to 1.
In this framework, there are two choices to be made
at any time: the active node to be developed and the
choice of the fractional variable to branch on. The
variable choice can be very critical in keeping the size
of the branch-and-bound tree small. We formulated the following heuristic to guide variable selection during branch and bound, and it has worked very well in practice. For the task graph, we first perform a topological ordering of the tasks; for a dependency from t_i to t_j, t_i is given a higher priority than t_j. The priorities range from 1 to n, highest to lowest, and when the variables for tasks are generated in the ILP, i.e., y_tp, the index t reflects this priority. While solving the model with an LP solver, we always take the branch which sets the variable value to 1 first. To pick a variable to branch and bound on, we use the following rules:
- If there are variables y_tp that are still fractional, then pick the variable y_tp with the lowest t value and p value to branch on first.
- Once no fractional y_tp variables remain, pick any fractional u_pk variable to branch on. A naive approach at this point, once all the tasks are assigned to partitions, would have been to branch on any fractional x_ijk variable to continue the synthesis process; our approach instead cuts off, very early in the branching process, all solutions which use some functional unit that does not fit in the partition.
As will be seen in Section 9, this variable selection process greatly reduces the run time of the experiments. This result emphasizes that the variable selection method deserves careful study, rather than leaving variable selection to the solver (which randomly chooses a variable to branch on).
We choose to branch on the y_tp variables first because, once the tasks are assigned to partitions, the remaining problem is just a scheduling-allocation problem whose linearization is quite tight and so produces fewer non-integer solutions. Observe that we are not forcing the tasks to lie on a particular partition, as in [2], where the variables in the critical path are forced into one partition, making the solution only locally optimal. Our method merely guides the solution process to a quick solution (which may or may not be optimal), and this solution then acts as a bound on all other solutions explored after it in the solution process. This helps in eliminating many non-optimal branches of the solution tree. Since we never force the value of any variable, our solutions are always globally optimal.
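A minimal sketch of this variable selection rule, assuming the solver exposes the current fractional LP solution as dictionaries keyed by variable indices (the data layout and helper names are assumptions):

def is_fractional(value, eps=1e-6):
    return eps < value < 1.0 - eps

def pick_branching_variable(y_vals, u_vals):
    # y_vals: {(t, p): value}, where index t already reflects the
    # topological priority of the task; u_vals: {(p, k): value}.
    frac_y = [(t, p) for (t, p), v in y_vals.items() if is_fractional(v)]
    if frac_y:
        t, p = min(frac_y)                 # lowest t, then lowest p
        return ("y", t, p), [1, 0]         # branch order: set to 1 first
    frac_u = [(p, k) for (p, k), v in u_vals.items() if is_fractional(v)]
    if frac_u:
        p, k = min(frac_u)
        return ("u", p, k), [1, 0]
    return None, []                        # LP solution is already integral

# Example fractional relaxation
y = {(1, 1): 1.0, (2, 1): 0.4, (2, 2): 0.6}
u = {(1, "mult"): 0.5}
print(pick_branching_variable(y, u))       # -> (('y', 2, 1), [1, 0])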
9 Experimental Results
We first explore the effect of different design parameters
on an example behavioral specification. Table
3 shows the results of fixing the number of functional
units and varying the latency and number of partitions.
This graph, called graph 1, has 5 tasks and 22 operations; 2 adders, 2 multipliers and 1 subtracter were used to schedule the design.
Table 3: Results of variation of latency and number of partitions for graph 1 (columns: N, A+M+S, L, Var, Const, RunTime, Feasible).
Table 4: Temporal partitioning results for various graphs (columns: Graph No., Tasks, Opers, N, A+M+S, L, Var, Const, RunTime, Feasible).
The first row in Table 3
shows that with no relaxation in latency, the design
could not be feasibly partitioned onto 3 partitions. So
the latency bound was relaxed by 1, and the design
was optimally partitioned and synthesized onto 3 par-
titions. The latency bound was relaxed by 2 and the
design fit onto 2 partitions. It was further relaxed to
3, and the design fit optimally onto a single partition even though more partitions were allowed in the design space exploration.
In all these examples, the execution time for getting
the optimal results is very small.
For any ILP formulation solved by branch and bound using the LP relaxation of the model, it is very important to have tight linear relaxations and good variable selection methods, so that a large number of non-integer solutions are cut off from the solution space. We ran a series of experiments to substantiate the variable selection technique and the tightened constraints we have imposed on our model. Table 4 shows the results of running the experiments on larger graphs. The columns Tasks and Opers give the size of the specification in terms of the number of tasks and operations. Medium-sized graphs of up to 72 operations can be optimally partitioned in very small execution times.
To make our model an effective tool for temporal
partitioning and synthesis, we need to add constraints
to model the registers and buses used in the design.
Note, however, that the number of variables (which largely influences the solution time) will not increase,
as the current variable set is enough to model the additional
constraints. In this paper we have presented
a novel technique to perform temporal partitioning and
synthesis optimally. Good linearization techniques and
careful variable selection procedures were very helpful
in solving the models in a short time. The effectiveness
was demonstrated by the results.
--R
"Optimal Synthesis of Multichip Architectures"
"An Optimal methodology of Synthesis of DSP Multichip Architectures"
"An Algorithm for Hardware/Software Partitioning Using Mixed Integer Linear Programming"
"MULTIPAR: Behavioral Partitioning for Synthesizing Application-Specific Multiprocessor Architectures"
"Resource Constrained RTL Partitioning for Synthesis of Multi-FPGA Designs"
"Optimal VLSI architectural synthesis Area, Performance and Testability"
"OS- CAR:Optimum Simultaneous Scheduling, Allocation and Resource Binding Based on Integer Pro- gramming"
"Con- strained Nonlinear 0-1 programming"
"Converting the 0-1 Polynomial Programming Problem to a 0-1 Linear Program"
"CHOP: A Constraint-Driven-System-Level Partitioner"
"Partitioning of Functional Models of Synchronous Digital Systems"
--TR
Optimal VLSI architectural synthesis
OSCAR
An optimal methodology for synthesis of DSP multichip architectures
Optimal synthesis of multichip architectures
Hardware/Software Partitioning using Integer Programming
Resource Constrained RTL Partitioning for Synthesis of Multi-FPGA Designs
--CTR
Michael Eisenring , Marco Platzner, A Framework for Run-time Reconfigurable Systems, The Journal of Supercomputing, v.21 n.2, p.145-159, February 2002
R. Maestre , M. Fernandez , R. Hermida , N. Bagherzadeh, A Framework for Scheduling and Context Allocation in Reconfigurable Computing, Proceedings of the 12th international symposium on System synthesis, p.134, November 01-04, 1999
R. Maestre , F. J. Kurdahi , N. Bagherzadeh , H. Singh , R. Hermida , M. Fernandez, Kernel scheduling in reconfigurable computing, Proceedings of the conference on Design, automation and test in Europe, p.21-es, January 1999, Munich, Germany
Meenakshi Kaul , Ranga Vemuri , Sriram Govindarajan , Iyad Ouaiss, An automated temporal partitioning and loop fission approach for FPGA based reconfigurable synthesis of DSP applications, Proceedings of the 36th ACM/IEEE conference on Design automation, p.616-622, June 21-25, 1999, New Orleans, Louisiana, United States
Meenakshi Kaul , Ranga Vemuri, Temporal partitioning combined with design space exploration for latency minimization of run-time reconfigured designs, Proceedings of the conference on Design, automation and test in Europe, p.43-es, January 1999, Munich, Germany
Meenakshi Kaul , Ranga Vemuri, Design-Space Exploration for Block-Processing Based TemporalPartitioning of Run-Time Reconfigurable Systems, Journal of VLSI Signal Processing Systems, v.24 n.2-3, p.181-209, Mar. 2000
Hartej Singh , Guangming Lu , Eliseu Filho , Rafael Maestre , Ming-Hau Lee , Fadi Kurdahi , Nader Bagherzadeh, MorphoSys: case study of a reconfigurable computing system targeting multimedia applications, Proceedings of the 37th conference on Design automation, p.573-578, June 05-09, 2000, Los Angeles, California, United States
Rafael Maestre , Fadi J. Kurdahi , Milagros Fernndez , Roman Hermida , Nader Bagherzadeh , Hartej Singh, A framework for reconfigurable computing: task scheduling and context management, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.9 n.6, p.858-873, 12/1/2001 | temporal partitioning;non-linear programming;integer linear programming;synthesis |
368299 | Fast sequential circuit test generation using high-level and gate-level techniques. | A new approach for sequential circuit test generation is proposed that combines software testing based techniques at the high level with test enhancement techniques at the gate level. Several sequences are derived to ensure 100% coverage of all statements in a high-level VHDL description, or to maximize coverage of paths. The sequences are then enhanced at the gate level to maximize coverage of single stuck-at faults. High fault coverages have been achieved very quickly on several benchmark circuits using this approach. | Introduction
Most recent work in the area of sequential circuit test
generation has focused on the gate level and has been
targeted at single stuck-at faults. Both deterministic
fault-oriented and simulation-based approaches have
been used effectively, although execution times are often long. The key factor limiting the efficiency of these
approaches has been the lack of knowledge about circuit
behavior. Architectural-level test generation has
been proposed as a means of exploiting high-level information
while maintaining the capability to handle
stuck-at faults [1]. However, the high-level information
must be derived from a structural description at the register
transfer level (RTL), and sequences generated are
targeted at detecting specific stuck-at faults in modules
for which gate-level descriptions are available. Circuits
with modules for which gate-level descriptions are not
available can be handled, but better fault coverages are
obtained by a gate-level test generator in less time [2].
Several approaches have been proposed for automatic
generation of functional test vectors for circuits
described at a high level, including [3][4][5]. The functional
test vectors can be used for design verification
and power estimation, in addition to screening for manufacturing
defects. Vemuri and Kalyanaraman enumerate
paths in an annotated VHDL description and trans-
# This research was supported in part by DARPA under
Contract DABT63-95-C-0069, in part by the European Union
through the FOST Project, and by Hewlett-Packard under an
equipment grant.
late them into a set of constraints [3]. A constraint
solver is then used to obtain a test sequence to traverse
the specified path. Fault coverages of test sets
generated for statement coverage were low, but higher
fault coverages were obtained by covering each statement
multiple times. Cheng and Krishnakumar transform
the high-level description in VHDL or C into an
extended finite state machine (EFSM) model and then
use the EFSM model to generate test sequences that
exercise all specified functions [4]. Traversing all transitions
in an EFSM model was shown to guarantee coverage
of all functions. Execution time was very low for
generation of test sequences, and good fault coverages
were achieved for several circuits. The approach proposed
by Corno et al., implemented in the test generator
RAGE, aims to generate test sequences that cover
each read or write operation on a variable in a high-level
VHDL description [5]. The operations are each
covered a specified number of times. Good fault coverages
were achieved for several benchmark circuits by
covering each operation at least three times, and fault
coverages were sometimes higher than those obtained
by a deterministic, gate-level test generator. Execution
was fast for all but the larger circuits.
These functional test generation approaches are
based upon a technique commonly used for software
testing: generating tests that cover all statements in
the system description. Another software testing tech-
nique, which has not been implemented in the previous
work on functional test generation, is to generate tests
that traverse all possible paths in the system description
[6]. In this work, we address path coverage, as well as
statement coverage. A VHDL circuit description may
contain multiple processes that execute concurrently.
Since a path is defined only within a single process, we
apply path coverage to single-process designs or to the
main process of multiple-process designs only. Here,
limitations must be placed upon path length to bound
the number of tests generated. Generation of tests for
path coverage, in addition to statement coverage, may
enable higher fault coverages to be achieved.
Whether statement coverage or path coverage is used
as the coverage metric, generation of test sequences using
software testing based techniques is limited by an
inability to specify values of variables that will maximize
detection of faults at the gate level. We propose
to combine a software testing based approach at the
high level with test sequence enhancement techniques at
the gate level to achieve high fault coverages in sequential
circuits very quickly. The gate-level test sequence
enhancement techniques that we use borrow from techniques
already developed for dynamic compaction of
tests generated at the gate level [7][8]. The objective is
to maximize the number of faults that can be detected
by each test sequence generated at the high level.
We begin with an overview of the test generation
process in Section 2. Generation of test sequences at
the high level using software testing based techniques
is then described in Section 3, followed by a discussion
about test sequence enhancement at the gate level in
Section 4. Results are presented in Section 5 for several
benchmark circuits, and Section 6 concludes the paper.
Overview
We propose to combine software testing based techniques
for test sequence generation at a high level with
gate-level techniques for test sequence enhancement.
The overall test generation process is illustrated in Figure 1.
Figure 1: Overview of test generation (high-level test sequence generation works on the high-level description; automatic synthesis produces the gate-level description; gate-level test sequence enhancement then produces the final test sequences).
Several partially-specified test sequences are derived from the high-level circuit description using various
coverage goals, e.g., coverage of all statements. An
automatic synthesis tool is used to obtain a gate-level
implementation of the circuit, and then the gate-level
test sequence enhancement tool is executed to generate
a complete test set targeted at high coverage of single
stuck-at faults, using test sequences generated at the
high level as input. The high-level sequences generated
are aimed at traversing through a number of control
states in the system, and values of variables are left unspecified
as much as possible. The gate-level tool then
has more freedom to select values that will maximize
fault coverage. The same sequence may be reused a
number of times, but modifications made at the gate
level, which essentially specify the values of variables
in the datapath, are likely to result in di#erent fully-specified
test sequences. Furthermore, any sequences
or subsequences that do not contribute to improving
the fault coverage are not added to the test set.
3 High-Level Test Generation
The first step in our test generation procedure is to
obtain a set of partially-specified test sequences using
the high-level circuit description and various coverage
goals. Ideally, we would like to automate this process,
but automatic generation of tests for both statement
and path coverage is itself a very difficult problem, and
no implementation is currently available. Therefore, in
the current work, the sequences are derived manually.
One of our goals in this work is to provide guidelines on
the types of high-level sequences that are most useful for
stuck-at fault testing. It may be possible to avoid using
sequences that are di#cult to derive automatically and
still achieve high fault coverages. In particular, our
experiments indicate that statement coverage usually
su#ces and is easier to achieve than path coverage.
Various high-level benchmark circuits are used in our
work, and most of these have been derived from VHDL
descriptions found at various ftp sites. Circuits b01-b08
range from simple filters to more complex microprocessor
fetch and execution units and are available from the
authors.
The simplest coverage metric is statement coverage.
A test set with 100% statement coverage exercises all
statements in the VHDL description. Every branch
must be exercised at least once in the set of sequences
derived, but all paths are not necessarily taken. Path
coverage is a more comprehensive metric that does aim
to ensure that all paths are taken. To obtain a set of
sequences with 100% statement coverage, the datapath
and control portions of the description are identified,
and the state transition graph (STG) for the control
machine is derived. Then test sequences are assembled
to traverse all control states and all blocks of code for
each state. Each sequence begins by resetting the cir-
cuit. In the benchmark circuits that we are using, a
reset signal is available. However, the only necessary
assumption is that the circuit is initializable. This assumption
is satisfied at the gate level by either using a
reset signal or an initialization sequence. Several vectors
are then added to traverse between states and exercise
various statements. Finally, vectors are added to
the end of the sequence to ensure that the circuit ends
in a state in which the output is observed. For many cir-
cuits, the outputs are observable in any state, so these
vectors are unnecessary. Portions of the test sequences
that determine the values of variables are left unspecified
as much as possible so that the gate-level tool has
more freedom in choosing values to maximize stuck-at
fault coverage.
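As a small illustration (the encoding is ours, not the tool's), a partially-specified sequence can be represented with 'X' for unspecified bits, which the gate-level enhancement step is then free to fill; the input names below are illustrative.

import random

# One partially-specified test sequence: each vector maps input names to
# '0', '1', or 'X' (unspecified).  The control-related inputs (reset,
# request lines) are fixed; datapath inputs are left as 'X'.
sequence = [
    {"reset": "1", "request1": "X", "request2": "X", "d_in": "XXXXXXXX"},
    {"reset": "0", "request1": "1", "request2": "0", "d_in": "XXXXXXXX"},
    {"reset": "0", "request1": "X", "request2": "X", "d_in": "XXXXXXXX"},
]

def fill_unspecified(vec):
    # The gate-level tool may later overwrite these random choices.
    return {name: "".join(b if b != "X" else random.choice("01") for b in bits)
            for name, bits in vec.items()}

print([fill_unspecified(v) for v in sequence])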
Derivation of test sequences for 100% statement coverage
is best illustrated by an example. The STG for
the control machine of benchmark circuit b03 is shown in Figure 2.
Figure 2: STG for benchmark circuit b03 (states INIT, ANALISI REQ, and ASSIGN; edges labeled Reset, read request[1-4], and update grant).
When the reset signal is asserted, the circuit is placed in state INIT. In state INIT, the (bit)
variables request1 through request4 are read from the
primary inputs, and the next state is set to ANALISI
REQ. In state ANALISI REQ, the 4-bit grant variable
is written to the primary outputs, one of four
blocks of code is executed, depending on the request
variables read in the previous state, and the next state
is set to ASSIGN. In state ASSIGN, the grant variable
is updated, the variables request1 through request4
are read from the primary inputs, and the next state
is set to ANALISI REQ. The set of test sequences derived
for 100% statement coverage therefore contains
four partially-specified sequences, each having five vec-
tors. The first vector resets the circuit. The second
vector sets the request variables to exercise one of the
four code blocks in the following state. The last three
vectors are used to traverse from the ANALISI REQ
state to the ASSIGN state, where the grant variable is
set, and back to the ANALISI REQ state, where the
grant variable is written to the primary outputs.
In obtaining a set of sequences for path coverage, we
start with a sequence with 100% statement coverage.
Several sequences are then added to maximize coverage
of paths. Paths are considered within each state of the
STG and also across several states. The procedure for
deriving sequences to cover paths within a state is first
explained for benchmark circuit b04. The STG for b04
is shown in Figure 3 along with a flow chart for state sC.
Figure 3: STG for benchmark circuit b04 (reset state sA) and flow chart for state sC (reads of the 8-bit input D_IN and decisions on Reset, RLAST, and further conditions).
All assignments in the flow chart are carried out in
the same clock cycle. When the circuit is reset, state
sA is entered. States sB and sC are reached in the next
two clock cycles, regardless of the inputs, as long as the
reset line is not asserted. No particular patterns are
needed to reach all statements and to cover all paths
in states sA and sB. However, many paths are possible
in state sC. The circuit must be in state sC for a
minimum of four clock cycles to exercise all statements
at least once. Either four separate sequences or one
long sequence can be used. We have opted to use a
larger number of shorter sequences in order to provide
more sequences for optimization by the gate-level tool.
Fifteen sequences are needed to cover all paths. (Note
that the ENA variable is used at two separate decision
points.) Only five sequences are needed if the last two
decision points are not considered. We consider the 8-
bit variable D IN used in the last two decision points
to be part of the datapath, and therefore, in our ex-
periments, we have left specification of values for this
variable to the gate-level tool.
Paths that occur across multiple states must also be
considered. Consider the STG for the control unit of
benchmark circuit b06 shown in Figure 4. This STG
contains several cycles. In order to limit the number
of sequences derived, we place restrictions on sequences
that traverse a cycle. Self-loops are traversed
at most once in any sequence, and for other cycles, the sequences are terminated when a state is repeated.
Figure 4: STG for benchmark circuit b06 (states s_init, s_intr, s_intr_w, s_enin_w, s_wait, and s_enin).
Four
sequences are required to fully cover paths involving
states s wait, s enin, and s enin w. Five sequences are
needed to cover paths involving states s wait, s intr 1,
s intr, and s intr w. Nine sequences are thus required
for path coverage.
In general, both the STG and the statement flow for
each reached state must be considered when deriving
sequences for path coverage.
4 Gate-Level Test Enhancement
Functional tests generated at a high level are effective
in traversing through much of the control space of a
machine. However, they cannot exercise all values of
variables, except in very small circuits, due to the large
number of possible values. Selecting good values to use
at the high level is an unsolved problem, and a gate-level
approach may be more effective in finding values
that exercise potential faults.
4.1 Architecture of Gate-Level Tool
Our gate-level test enhancement tool repeatedly selects
a partially-specified sequence provided by the
high-level test generator and attempts to evolve a fully-specified
sequence that maximizes fault coverage. The
number of times that test sequence evolution is attempted
is a parameter specified by the user. Sequences
may be selected randomly or sequentially from the list
of sequences provided by the high-level test generator.
If sequences are selected randomly, a random number
generator is used to decide which sequence to select.
Random selection does not guarantee that every sequence
will be used, but it does not restrict the order
in which the sequences are selected. If sequences are
selected sequentially, the first sequence is selected first,
the second sequence is selected second, and so on. Every
sequence will be selected at least once if the number
of attempts at test sequence evolution is greater than
or equal to the number of sequences.
The main function of the gate-level test enhancement
tool is to repeatedly solve an optimization problem:
maximizing the number of faults detected by each se-
quence. Genetic algorithms (GAs) have been used effec-
tively for many di#erent optimization problems, including
sequential circuit test generation [9]-[11],[2]. Thus,
we use a GA for test sequence enhancement. We simply
seed the GA with a sequence obtained at the high-level
and then set the GA fitness function to maximize fault
detection. The GA will explore several alternative sequences
through a number of generations, and the best
one is added to the test set if it improves the fault cov-
erage. Any vectors at the end of the sequence that do
not contribute to the fault coverage are removed. Then
the next high-level sequence is selected, and the genetic
enhancement procedure is repeated. This process continues
until the number of attempts at test sequence
enhancement reaches the user-specified limit.
4.2 A GA for Test Sequence Enhancement
In this work, we use a simple GA, rather than a
steady-state GA [12], since exploration of the search
space is paramount. The simple GA contains a population
of strings, or individuals [13]. In our application,
each individual represents a test sequence, with successive
vectors in the sequence placed in adjacent positions
along the string. Each individual has an associated fit-
ness, and in our application, the fitness measure indicates
the number of faults detected by each sequence.
The population is initialized with a set of sequences derived
from a single sequence generated at the high level,
and the evolutionary processes of selection, crossover,
and mutation are used to generate an entirely new population
from the existing population. This process is
repeated for several generations. To generate a new
population from the existing one, two individuals are
selected, with selection biased toward more highly fit
individuals. The two individuals are crossed to create
two entirely new individuals, and each character in a
new string is mutated with some small mutation prob-
ability. The two new individuals are then placed in the
new population, and this process continues until the
new generation is entirely filled. Binary tournament selection
without replacement and uniform crossover are
used, as was done previously for gate-level test generation
[10]. The goal of the evolutionary process is to
improve the fitness of the best individual in each successive
generation by combining the good portions of
fit individuals from the preceding generation. However
the best individual may appear in any generation, so
we save the best individual found.
The GA is seeded with copies of the partially-
specified test sequence provided by the high-level test
generator. The specified bits are the same for every
individual. Bits that are not specified are filled ran-
domly. Each fully-specified test sequence is then fault
simulated to obtain its fitness value; the fitness value
measures the quality of the corresponding solution, primarily
in terms of fault coverage. The GA is evolved
over several generations, and by the time the last generation
is reached, several of the values specified by the
high-level test generator may have changed in many
of the individuals due to the mutation operator; i.e.,
the sequences may no longer be covered by the original
partially-specified sequence. However, such a sequence
will only be added to the test set if it covers some additional
faults not already covered by previous vectors
in the test set and if it has the highest fault coverage.
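A condensed, hedged sketch of this enhancement loop is given below; the population size, generation count, and operators follow the description above, while evaluate() stands in for fault simulation and is an assumption.

import random

def uniform_crossover(a, b):
    # Each bit position is taken from either parent with equal probability.
    c1 = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
    c2 = [y if random.random() < 0.5 else x for x, y in zip(a, b)]
    return c1, c2

def mutate(bits, p=1/64):
    return [1 - b if random.random() < p else b for b in bits]

def tournament(pop, fit):
    i, j = random.sample(range(len(pop)), 2)   # binary tournament
    return pop[i] if fit[i] >= fit[j] else pop[j]

def enhance(seed_bits, evaluate, pop_size=32, generations=8):
    # Seed: copies of the partially-specified sequence (flattened into a
    # bit string) with unspecified bits (None) filled randomly.
    pop = [[b if b is not None else random.randint(0, 1) for b in seed_bits]
           for _ in range(pop_size)]
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        fit = [evaluate(ind) for ind in pop]
        for f, ind in zip(fit, pop):
            if f > best_fit:                   # save the best individual seen
                best, best_fit = ind, f
        nxt = []
        while len(nxt) < pop_size:             # nonoverlapping generations
            p1, p2 = tournament(pop, fit), tournament(pop, fit)
            c1, c2 = uniform_crossover(p1, p2)
            nxt += [mutate(c1), mutate(c2)]
        pop = nxt[:pop_size]
    return best, best_fit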
4.3 Fitness Function
The PROOFS sequential circuit fault simulator [14]
is used to evaluate the fitness of each candidate test sequence
and again to update the state of the circuit after
the best test sequence is selected. The number of faults
detected is the primary metric in the fitness function,
since the objective of the GA is to maximize the number
of faults detected by a given test sequence. To differentiate test sequences that detect the same number of faults, we include the number of fault effects propagated to flip-flops in the fitness function, since fault effects at
the flip-flops may be propagated to the primary outputs
in subsequent time frames. However, the number
of fault effects propagated is offset by the number of
faults simulated and the number of flip-flops to ensure
that the number of faults detected is the dominant factor
in the fitness function:
fitness = (# faults detected) + (# fault effects propagated to flip-flops) / (# faults simulated x # flip-flops)
While an accurate fitness function is essential in
achieving a good solution, the high computational cost
of fault simulation may be prohibitive, especially for
large circuits. To avoid excessive computations, we can
approximate the fitness of a candidate test by using a
small random sample of faults. In this work, we use a
sample size of about 100 faults if the number of faults
remaining in the fault list is greater than 100.
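The fitness evaluation could be sketched as follows, reusing the declarations from the earlier sketches. The faultSimulate() routine stands in for a call into a PROOFS-like fault simulator; its interface is purely hypothetical. The function computes the weighted fitness above on a random sample of at most 100 undetected faults.

// Hypothetical result of fault-simulating one candidate sequence on a fault sample.
struct SimResult {
    int faultsDetected;          // faults detected at the primary outputs
    int faultEffectsAtFlipFlops; // fault effects latched in flip-flops
};

// Stand-in for the fault simulator; a real implementation would drive PROOFS.
SimResult faultSimulate(const Individual& ind, const std::vector<int>& faultSample) {
    (void)ind; (void)faultSample;
    return {0, 0};
}

double evaluateFitness(const Individual& ind, const std::vector<int>& undetectedFaults,
                       int numFlipFlops, std::mt19937& rng) {
    // Sample about 100 faults when more than 100 remain, to bound simulation cost.
    std::vector<int> sample = undetectedFaults;
    if (sample.size() > 100) {
        std::shuffle(sample.begin(), sample.end(), rng);
        sample.resize(100);
    }
    SimResult r = faultSimulate(ind, sample);
    // Detected faults dominate; propagated fault effects only break ties.
    return r.faultsDetected +
           static_cast<double>(r.faultEffectsAtFlipFlops) /
               (static_cast<double>(sample.size()) * numFlipFlops);
}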
5 Experimental Results
Experiments were carried out to evaluate the proposed approach for combining high-level and gate-level techniques for sequential circuit test generation. Test sequences were derived manually at the high level by extracting the STG of the control machine and then ensuring that all VHDL statements or paths were covered within each control state. For diffeq, a short C program was written to assist in obtaining high-level sequences. This circuit contains a single loop, and the loop must be exited to observe the output. The C program was used to determine the number of loop iterations executed for a given input. Gate-level implementations of the circuits were synthesized using a commercial synthesis tool. Test sequence enhancement was then performed at the gate level using a new GA-based tool implemented using the existing PROOFS [14] source code and 2100 additional lines of C++ code. A small GA population size of 32 was used, and the number of generations was limited to 8 to minimize execution time. Nonoverlapping generations were used, with crossover and mutation probabilities of 1 and 1/64, respectively.
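Tying these parameters to the earlier sketches, an enhancement run might be driven roughly as follows; the seed string, fault list, and flip-flop count are placeholders, and the loop simply mirrors the 32-individual, 8-generation configuration described above.

int main() {
    std::mt19937 rng(1);
    std::string seed(64, 'X');            // placeholder partially-specified sequence
    std::vector<int> undetected(500, 0);  // placeholder fault identifiers
    const int numFlipFlops = 32;          // placeholder flip-flop count

    std::vector<Individual> pop = seedPopulation(seed, 32, rng);  // population size 32
    Individual best;
    for (int gen = 0; gen < 8; ++gen) {   // 8 nonoverlapping generations
        for (Individual& ind : pop) {
            ind.fitness = evaluateFitness(ind, undetected, numFlipFlops, rng);
            if (ind.fitness > best.fitness) best = ind;  // save best individual found
        }
        // Crossover is always applied (probability 1); mutation probability is 1/64.
        pop = nextGeneration(pop, 1.0 / 64.0, rng);
    }
    return 0;
}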
Tests were generated for several high-level benchmark circuits on an HP 9000 J200 with 256 MB memory. Characteristics of the benchmark circuits are summarized in Table 1, including the number of VHDL lines in the high-level description, number of control states, number of logic gates in the gate-level circuit, number of flip-flops (FFs), number of primary inputs (PIs), number of primary outputs (POs), and number of collapsed faults. Circuits b01-b08 have been used previously for research on functional test generation [5]. Circuits barcode, gcd, dhrc, and diffeq were taken from the HLSynth92 and HLSynth95 high-level synthesis benchmarks. All circuits were translated into a synthesizable subset of VHDL before they were used.
Test generation results are shown in Table 2 for sequences derived at the high level to maximize path coverage. Results for HITEC [15], a gate-level deterministic test generator, and GATEST [10], a gate-level GA-based test generator, are also shown for comparison. Three passes through the fault list were made by HITEC for all circuits unless all faults were identified as detected or untestable earlier. Time limits for the three passes were 0.5, 5, and 50 seconds per fault. For each circuit, the number of faults detected (Det), the number of test vectors generated (Vec), and the execution time are shown for each test generator. The execution time for the proposed approach includes the time for gate-level test enhancement only, but the time for generating sequences from high-level circuit descriptions is expected to be of the same order of magnitude, based on previous work [5]. The number of attempts at generating a useful test sequence (Seq) and the sequence selection strategy (Strat), whether sequential or random, are also shown in the table, as well as the number of faults identified as untestable by HITEC. Results are shown for the sequence selection strategy and number of attempts that gave the highest fault coverage, while using a minimal number of test vectors.
Table 1: High-Level Benchmark Circuits

Circuit   VHDL Lines   Control States   Gates   Flip-Flops   PIs   POs   Faults
Table 2: Combining High-Level Test Generation with Gate-Level Test Enhancement

          High-Level + Gate-Level                 HITEC                       GATEST
Circuit   Det      Vec   Time    Seq   Strat   Det      Vec    Time    Unt   Det      Vec   Time
b04       1204     113   1.17m   20    rand    1177     303    1.42h   136   1217     220   4.60m
barcode   580      77    1.68m   20    rand    689      1816   28.7h   12    552      161   4.52m
gcd       1988     356   17.8m   90    rand    1638     206    13.7h   3     1377     227   10.6m
diffeq    17,881   335   1.80h   100   rand    17,730   803    23.6h   46    18,009   662   7.71h
If more attempts are made at test sequence enhancement, the execution time will increase, but higher fault coverages were not achieved in our experiments.
For most circuits, the fault coverages for the proposed approach are competitive with the fault coverages achieved by HITEC. For barcode, the fault coverage is about the same as that achieved by HITEC after two passes through the fault list and 51.1 minutes of execution, although more faults are detected by HITEC in the third pass. For b08, HITEC achieves higher fault coverage in the first pass. In some cases, such as b07 and gcd, higher fault coverages are obtained by combining the high-level and gate-level techniques. Furthermore, for a given level of fault coverage, the test sets generated using the proposed approach are much more compact. Execution times for gate-level test enhancement are often orders of magnitude smaller than those for HITEC. Nevertheless, untestable faults cannot be identified using the proposed approach. Thus, the designer may choose to run a gate-level test generator such as HITEC in a postprocessing step. Fault coverages for the proposed approach are significantly higher than those for GATEST for several circuits. For some circuits, GATEST achieves the same fault coverage as the proposed approach, but test set lengths and execution times are significantly higher. For diffeq, the GATEST fault coverage was higher, but execution time was also significantly higher. The gate-level test enhancement is very similar to the procedure used in GATEST, except that GATEST uses random sequences in the initial GA population. The seeds used by the gate-level test enhancement tool are critical in providing information to the GA about sequences that can activate faults and propagate fault effects.
The two sequence selection strategies are compared in Table 3 for sequences derived at the high level to maximize path coverage or for 100% statement coverage. Statement and path coverage are the same for diffeq, since this circuit contains only a single path. For path coverage, the sequential selection strategy gives better results in terms of fault coverage and test set size for some circuits, but in a few cases, the fault coverages are significantly higher for random selection. Random selection is therefore preferred in general.
Table 3: Sequential vs. Random Selection of Sequences Derived for Path Coverage or Statement Coverage

                      Path Coverage                              Statement Coverage
                Sequential             Random                Sequential             Random
Circuit   Seq   Det      Vec   Time    Det      Vec   Time   Det      Vec   Time    Det      Vec   Time
gcd       90    1914     302   18.6m   1988     356   17.8m  1662     304   16.8m   1769     283   13.6m
diffeq    100   17,881   335   1.79h   17,881   335   1.80h  17,881   335   1.79h   17,881   335   1.80h
For statement coverage, the random selection strategy tends to give fault coverages that are as good as or better than those for sequential selection. Fault coverages are sometimes higher than those for sequences derived for path coverage. However, fault coverages may be significantly lower, as is the case for circuit b05. These results are not unexpected, since certain paths may need to be traversed in order to excite some faults and propagate their effects to the primary outputs. Nevertheless, since good results are often obtained for sequences derived for 100% statement coverage alone, and these sequences are easier to derive, this approach may be preferred.
6 Conclusions
High fault coverages have been obtained very quickly
by combining high-level and gate-level techniques for
test generation. Sequences derived to maximize coverage
of statements or paths in the high-level VHDL
description are enhanced at the gate level to maximize
coverage of single stuck-at faults. This approach may
be used as a preprocessing step to gate-level test generation
to speed up the process, and it sometimes results
in improved fault coverages as well. Higher fault
coverages were obtained for sequences derived for path
coverage, but good results were also obtained for 100%
statement coverage. A random selection of sequences
for gate-level enhancement was shown to provide consistently
good results.
References
[1] "Architectural level test generation for microprocessors."
[2] "Sequential circuit test generation using dynamic state traversal."
[3] "Generation of design verification tests from behavioral VHDL programs using path enumeration and constraint programming."
[4] "Automatic generation of functional vectors using the extended finite state machine model."
[5] "Testability analysis and ATPG on behavioral RT-level VHDL."
[6] "High level test generation using software metrics."
[7] "Simulation-based techniques for dynamic test sequence compaction."
[8] "Putting the squeeze on test sequences."
[9] "CRIS: A test cultivation program for sequential VLSI circuits."
[10] "Sequential circuit test generation in a genetic algorithm framework."
[11] "GATTO: A genetic algorithm for automatic test pattern generation for large synchronous sequential circuits."
[12] Adaptation in Natural and Artificial Systems.
[13] Genetic Algorithms in Search, Optimization and Machine Learning.
[14] "PROOFS: A fast, memory-efficient sequential circuit fault simulator."
[15] "HITEC: A test generation package for sequential circuits."
Keywords: automatic test generation; test sequence compaction; sequential circuits; software testing